Oct 31 00:33:59.214536 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025 Oct 31 00:33:59.214567 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:33:59.214584 kernel: BIOS-provided physical RAM map: Oct 31 00:33:59.214593 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 31 00:33:59.214601 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 31 00:33:59.214610 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 31 00:33:59.214621 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 31 00:33:59.214630 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 31 00:33:59.214639 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 31 00:33:59.214652 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 31 00:33:59.214661 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 31 00:33:59.214670 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 31 00:33:59.214683 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 31 00:33:59.214693 kernel: NX (Execute Disable) protection: active Oct 31 00:33:59.214705 kernel: APIC: Static calls initialized Oct 31 00:33:59.214723 kernel: SMBIOS 2.8 present. 
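The e820 map the firmware hands over above is machine-parsable. A minimal sketch (Python; the regex and helper name are mine, not from any tool appearing in this log) that sums the ranges marked usable:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(log_lines):
        """Sum the sizes of all ranges the firmware marked 'usable'."""
        total = 0
        for line in log_lines:
            m = E820_RE.search(line)
            if m and m.group(3) == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # e820 ranges are inclusive
        return total

For the two usable ranges above (0x0-0x9fbff and 0x100000-0x9cfdbfff) this gives roughly 2511 MiB, consistent with the 2571752K total the Memory: line reports further down.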
Oct 31 00:33:59.214733 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 31 00:33:59.214743 kernel: Hypervisor detected: KVM Oct 31 00:33:59.214753 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 31 00:33:59.214763 kernel: kvm-clock: using sched offset of 3011031669 cycles Oct 31 00:33:59.214773 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 31 00:33:59.214783 kernel: tsc: Detected 2794.748 MHz processor Oct 31 00:33:59.214794 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 31 00:33:59.214804 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 31 00:33:59.214814 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 31 00:33:59.214829 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 31 00:33:59.214840 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 31 00:33:59.214850 kernel: Using GB pages for direct mapping Oct 31 00:33:59.214860 kernel: ACPI: Early table checksum verification disabled Oct 31 00:33:59.214869 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 31 00:33:59.214880 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:33:59.214890 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:33:59.214900 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:33:59.214915 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 31 00:33:59.214925 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:33:59.214935 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:33:59.214946 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:33:59.214955 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:33:59.214965 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 31 00:33:59.214976 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 31 00:33:59.214992 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 31 00:33:59.215006 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 31 00:33:59.215016 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 31 00:33:59.215027 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 31 00:33:59.215037 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 31 00:33:59.215048 kernel: No NUMA configuration found Oct 31 00:33:59.215059 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 31 00:33:59.215073 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Oct 31 00:33:59.215084 kernel: Zone ranges: Oct 31 00:33:59.215094 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 31 00:33:59.215105 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 31 00:33:59.215115 kernel: Normal empty Oct 31 00:33:59.215125 kernel: Movable zone start for each node Oct 31 00:33:59.215147 kernel: Early memory node ranges Oct 31 00:33:59.215157 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 31 00:33:59.215168 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 31 00:33:59.215179 kernel: Initmem setup node 0 [mem 
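"Early table checksum verification disabled" above refers to the ACPI rule that a table is valid when all of its bytes, checksum field included, sum to zero modulo 256. The check itself is one line (a sketch, not kernel code):

    def acpi_checksum_ok(table: bytes) -> bool:
        # Valid ACPI tables (RSDP, RSDT, FACP, ...) sum to 0 mod 256.
        return sum(table) % 256 == 0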
0x0000000000001000-0x000000009cfdbfff] Oct 31 00:33:59.215194 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 00:33:59.215208 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 31 00:33:59.215219 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 31 00:33:59.215229 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 31 00:33:59.215239 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 31 00:33:59.215252 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 31 00:33:59.215264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 31 00:33:59.215275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 31 00:33:59.215287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 31 00:33:59.215304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 31 00:33:59.215315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 31 00:33:59.215325 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 00:33:59.215336 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 31 00:33:59.215346 kernel: TSC deadline timer available Oct 31 00:33:59.215357 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 31 00:33:59.215368 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 31 00:33:59.215378 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 31 00:33:59.215392 kernel: kvm-guest: setup PV sched yield Oct 31 00:33:59.215407 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 31 00:33:59.215417 kernel: Booting paravirtualized kernel on KVM Oct 31 00:33:59.215427 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 00:33:59.215437 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 31 00:33:59.215485 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Oct 31 00:33:59.215510 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Oct 31 00:33:59.215520 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 31 00:33:59.215530 kernel: kvm-guest: PV spinlocks enabled Oct 31 00:33:59.215539 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 31 00:33:59.215556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:33:59.215567 kernel: random: crng init done Oct 31 00:33:59.215576 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 31 00:33:59.215586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 00:33:59.215596 kernel: Fallback order for Node 0: 0 Oct 31 00:33:59.215603 kernel: Built 1 zonelists, mobility grouping on. 
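Note that the "Kernel command line:" echo above shows rootflags=rw and mount.usrflags=ro twice: the bootloader prepended them to the BOOT_IMAGE line. Splitting such a line into parameters is straightforward; a sketch (it ignores quoted values containing spaces, which this command line does not use):

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into key/value pairs. Flags
        without '=' map to None; for repeated keys the last
        occurrence wins, which is how most consumers resolve them."""
        params = {}
        for tok in cmdline.split():
            key, sep, val = tok.partition("=")
            params[key] = val if sep else None
        return params

    # e.g. params["verity.usrhash"] -> "950876ad7bc3e963..."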
Total pages: 632732 Oct 31 00:33:59.215611 kernel: Policy zone: DMA32 Oct 31 00:33:59.215618 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 00:33:59.215629 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 136900K reserved, 0K cma-reserved) Oct 31 00:33:59.215637 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 31 00:33:59.215644 kernel: ftrace: allocating 37980 entries in 149 pages Oct 31 00:33:59.215654 kernel: ftrace: allocated 149 pages with 4 groups Oct 31 00:33:59.215663 kernel: Dynamic Preempt: voluntary Oct 31 00:33:59.215673 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 00:33:59.215689 kernel: rcu: RCU event tracing is enabled. Oct 31 00:33:59.215700 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 31 00:33:59.215709 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 00:33:59.215723 kernel: Rude variant of Tasks RCU enabled. Oct 31 00:33:59.215733 kernel: Tracing variant of Tasks RCU enabled. Oct 31 00:33:59.215744 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 00:33:59.215754 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 31 00:33:59.215770 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 31 00:33:59.215781 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 31 00:33:59.215792 kernel: Console: colour VGA+ 80x25 Oct 31 00:33:59.215801 kernel: printk: console [ttyS0] enabled Oct 31 00:33:59.215811 kernel: ACPI: Core revision 20230628 Oct 31 00:33:59.215821 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 31 00:33:59.215835 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 00:33:59.215844 kernel: x2apic enabled Oct 31 00:33:59.215854 kernel: APIC: Switched APIC routing to: physical x2apic Oct 31 00:33:59.215864 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 31 00:33:59.215873 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 31 00:33:59.215883 kernel: kvm-guest: setup PV IPIs Oct 31 00:33:59.215892 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 31 00:33:59.215916 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 31 00:33:59.215926 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 31 00:33:59.215936 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 31 00:33:59.215946 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 31 00:33:59.215959 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 31 00:33:59.215970 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 00:33:59.215980 kernel: Spectre V2 : Mitigation: Retpolines Oct 31 00:33:59.215990 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 31 00:33:59.216000 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 31 00:33:59.216014 kernel: active return thunk: retbleed_return_thunk Oct 31 00:33:59.216024 kernel: RETBleed: Mitigation: untrained return thunk Oct 31 00:33:59.216039 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 00:33:59.216049 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 31 00:33:59.216059 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 31 00:33:59.216070 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 31 00:33:59.216080 kernel: active return thunk: srso_return_thunk Oct 31 00:33:59.216091 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 31 00:33:59.216104 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 00:33:59.216115 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 00:33:59.216125 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 00:33:59.216147 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 00:33:59.216158 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 31 00:33:59.216169 kernel: Freeing SMP alternatives memory: 32K Oct 31 00:33:59.216181 kernel: pid_max: default: 32768 minimum: 301 Oct 31 00:33:59.216191 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 31 00:33:59.216201 kernel: landlock: Up and running. Oct 31 00:33:59.216215 kernel: SELinux: Initializing. Oct 31 00:33:59.216225 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 00:33:59.216235 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 00:33:59.216246 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 31 00:33:59.216256 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:33:59.216266 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:33:59.216276 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:33:59.216286 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 31 00:33:59.216300 kernel: ... version: 0 Oct 31 00:33:59.216315 kernel: ... bit width: 48 Oct 31 00:33:59.216325 kernel: ... generic registers: 6 Oct 31 00:33:59.216335 kernel: ... value mask: 0000ffffffffffff Oct 31 00:33:59.216345 kernel: ... max period: 00007fffffffffff Oct 31 00:33:59.216355 kernel: ... fixed-purpose events: 0 Oct 31 00:33:59.216366 kernel: ... 
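"Calibrating delay loop (skipped) preset value" above means loops-per-jiffy was derived from the kvm-clock/TSC frequency detected earlier (2794.748 MHz, so lpj matches the TSC in kHz) rather than measured. The printed BogoMIPS figures then follow from integer arithmetic; a sketch reproducing the kernel's formatting (assumes CONFIG_HZ=1000, which these numbers imply):

    def bogomips_str(lpj: int, hz: int = 1000) -> str:
        """BogoMIPS exactly as the kernel prints it, using the same
        integer division: whole part plus a two-digit fraction."""
        whole = lpj // (500_000 // hz)
        frac = (lpj // (5_000 // hz)) % 100
        return f"{whole}.{frac:02d}"

    assert bogomips_str(2_794_748) == "5589.49"        # this CPU
    assert bogomips_str(4 * 2_794_748) == "22357.98"   # 4-CPU total at SMP bringup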
event mask: 000000000000003f Oct 31 00:33:59.216376 kernel: signal: max sigframe size: 1776 Oct 31 00:33:59.216386 kernel: rcu: Hierarchical SRCU implementation. Oct 31 00:33:59.216397 kernel: rcu: Max phase no-delay instances is 400. Oct 31 00:33:59.216413 kernel: smp: Bringing up secondary CPUs ... Oct 31 00:33:59.216423 kernel: smpboot: x86: Booting SMP configuration: Oct 31 00:33:59.216433 kernel: .... node #0, CPUs: #1 #2 #3 Oct 31 00:33:59.216444 kernel: smp: Brought up 1 node, 4 CPUs Oct 31 00:33:59.216476 kernel: smpboot: Max logical packages: 1 Oct 31 00:33:59.216487 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 31 00:33:59.216497 kernel: devtmpfs: initialized Oct 31 00:33:59.216508 kernel: x86/mm: Memory block size: 128MB Oct 31 00:33:59.216518 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 00:33:59.216529 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 31 00:33:59.216545 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 00:33:59.216555 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 00:33:59.216566 kernel: audit: initializing netlink subsys (disabled) Oct 31 00:33:59.216576 kernel: audit: type=2000 audit(1761870837.931:1): state=initialized audit_enabled=0 res=1 Oct 31 00:33:59.216587 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 00:33:59.216597 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 00:33:59.216608 kernel: cpuidle: using governor menu Oct 31 00:33:59.216618 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 00:33:59.216629 kernel: dca service started, version 1.12.1 Oct 31 00:33:59.216643 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 31 00:33:59.216653 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 31 00:33:59.216663 kernel: PCI: Using configuration type 1 for base access Oct 31 00:33:59.216674 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
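The audit timestamp above is a plain Unix epoch and can be cross-checked against the RTC line near the end of the kernel log:

    from datetime import datetime, timezone

    # audit(1761870837.931:1) -> the wall-clock moment of record #1;
    # one second before 'setting system clock to 2025-10-31T00:33:58 UTC'.
    ts = datetime.fromtimestamp(1761870837.931, tz=timezone.utc)
    print(ts.isoformat())  # 2025-10-31T00:33:57.931000+00:00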
Oct 31 00:33:59.216685 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 00:33:59.216695 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 00:33:59.216706 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 00:33:59.216716 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 00:33:59.216730 kernel: ACPI: Added _OSI(Module Device) Oct 31 00:33:59.216741 kernel: ACPI: Added _OSI(Processor Device) Oct 31 00:33:59.216751 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 00:33:59.216762 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 00:33:59.216772 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 31 00:33:59.216782 kernel: ACPI: Interpreter enabled Oct 31 00:33:59.216792 kernel: ACPI: PM: (supports S0 S3 S5) Oct 31 00:33:59.216802 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 00:33:59.216813 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 00:33:59.216823 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 00:33:59.216838 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 31 00:33:59.216848 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 31 00:33:59.217130 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 00:33:59.217328 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 31 00:33:59.217545 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 31 00:33:59.217563 kernel: PCI host bridge to bus 0000:00 Oct 31 00:33:59.217756 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 00:33:59.217917 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 31 00:33:59.218071 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 00:33:59.218253 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 31 00:33:59.218411 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 00:33:59.218605 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 31 00:33:59.218759 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 31 00:33:59.218985 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 31 00:33:59.219192 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 31 00:33:59.219361 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 31 00:33:59.219565 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 31 00:33:59.219748 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 31 00:33:59.219914 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 00:33:59.220101 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 31 00:33:59.220302 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Oct 31 00:33:59.220499 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 31 00:33:59.220668 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 31 00:33:59.220906 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 31 00:33:59.221082 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Oct 31 00:33:59.221275 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 31 00:33:59.221606 kernel: pci 0000:00:03.0: reg 0x20: [mem 
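The root bus resource windows printed above are inclusive ranges, so their sizes are quick to recover (a throwaway sketch):

    def window_mib(start: int, end: int) -> float:
        # Size of an inclusive resource window in MiB.
        return (end - start + 1) / (1 << 20)

    # [mem 0x9d000000-0xafffffff window] -> 304.0 MiB of 32-bit PCI space
    print(window_mib(0x9D000000, 0xAFFFFFFF))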
0xfe004000-0xfe007fff 64bit pref] Oct 31 00:33:59.221826 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 31 00:33:59.221994 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Oct 31 00:33:59.222183 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 31 00:33:59.222365 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 31 00:33:59.222576 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Oct 31 00:33:59.222771 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 31 00:33:59.223028 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 31 00:33:59.223252 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 31 00:33:59.223422 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Oct 31 00:33:59.223595 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Oct 31 00:33:59.223764 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 31 00:33:59.223896 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 31 00:33:59.223908 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 31 00:33:59.223922 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 31 00:33:59.223930 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 00:33:59.223937 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 31 00:33:59.223945 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 31 00:33:59.223953 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 31 00:33:59.223960 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 31 00:33:59.223968 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 31 00:33:59.223976 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 31 00:33:59.223983 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 31 00:33:59.223994 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 31 00:33:59.224002 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 31 00:33:59.224009 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 31 00:33:59.224017 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 31 00:33:59.224025 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 31 00:33:59.224032 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 31 00:33:59.224040 kernel: iommu: Default domain type: Translated Oct 31 00:33:59.224048 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 00:33:59.224055 kernel: PCI: Using ACPI for IRQ routing Oct 31 00:33:59.224066 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 00:33:59.224074 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 31 00:33:59.224081 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 31 00:33:59.224220 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 31 00:33:59.224347 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 31 00:33:59.224490 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 00:33:59.224502 kernel: vgaarb: loaded Oct 31 00:33:59.224510 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 31 00:33:59.224518 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 31 00:33:59.224530 kernel: clocksource: Switched to clocksource kvm-clock Oct 31 00:33:59.224538 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 00:33:59.224546 kernel: VFS: Dquot-cache hash table 
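The 1af4:xxxx functions enumerated above are legacy virtio devices; decoding the IDs explains the drivers that attach later in this log. The labels are from the virtio spec as I recall it, so treat the table as illustrative:

    # Vendor 0x1af4 = Red Hat / virtio. Legacy device IDs seen above:
    VIRTIO_IDS = {
        0x1000: "virtio-net",   # 00:04.0 -> the NIC that later DHCPs as eth0
        0x1001: "virtio-blk",   # 00:03.0 -> the /dev/vda disk
        0x1005: "virtio-rng",   # 00:02.0 -> entropy source
    }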
entries: 512 (order 0, 4096 bytes) Oct 31 00:33:59.224554 kernel: pnp: PnP ACPI init Oct 31 00:33:59.224784 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 31 00:33:59.224800 kernel: pnp: PnP ACPI: found 6 devices Oct 31 00:33:59.224808 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 00:33:59.224816 kernel: NET: Registered PF_INET protocol family Oct 31 00:33:59.224829 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 31 00:33:59.224837 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 31 00:33:59.224845 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 00:33:59.224853 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 00:33:59.224860 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 31 00:33:59.224868 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 31 00:33:59.224876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 00:33:59.224884 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 00:33:59.224891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 00:33:59.224902 kernel: NET: Registered PF_XDP protocol family Oct 31 00:33:59.225026 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 31 00:33:59.225153 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 31 00:33:59.225270 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 31 00:33:59.225387 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 31 00:33:59.225533 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 31 00:33:59.225699 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 31 00:33:59.225713 kernel: PCI: CLS 0 bytes, default 64 Oct 31 00:33:59.225726 kernel: Initialise system trusted keyrings Oct 31 00:33:59.225734 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 31 00:33:59.225741 kernel: Key type asymmetric registered Oct 31 00:33:59.225749 kernel: Asymmetric key parser 'x509' registered Oct 31 00:33:59.225757 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 31 00:33:59.225764 kernel: io scheduler mq-deadline registered Oct 31 00:33:59.225772 kernel: io scheduler kyber registered Oct 31 00:33:59.225780 kernel: io scheduler bfq registered Oct 31 00:33:59.225787 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 00:33:59.225799 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 31 00:33:59.225806 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 31 00:33:59.225814 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 31 00:33:59.225822 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 00:33:59.225830 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 00:33:59.225838 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 31 00:33:59.225846 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 00:33:59.225853 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 00:33:59.226019 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 31 00:33:59.226036 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 00:33:59.226167 kernel: rtc_cmos 00:04: registered as rtc0 Oct 31 
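Each "hash table entries: N (order: K, B bytes)" line above follows one rule: the table is rounded up to a power-of-two number of 4 KiB pages. A sketch of the sizing (the 8-byte bucket size is inferred from the printed totals, not taken from kernel source):

    import math

    def alloc_order(entries: int, entry_bytes: int, page: int = 4096) -> int:
        """Smallest power-of-two page count, expressed as an order,
        that fits entries * entry_bytes."""
        pages = math.ceil(entries * entry_bytes / page)
        return max(0, math.ceil(math.log2(pages)))

    # 'TCP established hash table entries: 32768 (order: 6, 262144 bytes)'
    # 32768 * 8 bytes = 64 pages = order 6.
    assert alloc_order(32768, 8) == 6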
00:33:59.226288 kernel: rtc_cmos 00:04: setting system clock to 2025-10-31T00:33:58 UTC (1761870838) Oct 31 00:33:59.226408 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 31 00:33:59.226419 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 31 00:33:59.226427 kernel: NET: Registered PF_INET6 protocol family Oct 31 00:33:59.226435 kernel: Segment Routing with IPv6 Oct 31 00:33:59.226442 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 00:33:59.226470 kernel: NET: Registered PF_PACKET protocol family Oct 31 00:33:59.226477 kernel: Key type dns_resolver registered Oct 31 00:33:59.226485 kernel: IPI shorthand broadcast: enabled Oct 31 00:33:59.226493 kernel: sched_clock: Marking stable (1065002561, 273169854)->(1478019534, -139847119) Oct 31 00:33:59.226501 kernel: registered taskstats version 1 Oct 31 00:33:59.226508 kernel: Loading compiled-in X.509 certificates Oct 31 00:33:59.226516 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228' Oct 31 00:33:59.226524 kernel: Key type .fscrypt registered Oct 31 00:33:59.226531 kernel: Key type fscrypt-provisioning registered Oct 31 00:33:59.226542 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 31 00:33:59.226550 kernel: ima: Allocated hash algorithm: sha1 Oct 31 00:33:59.226557 kernel: ima: No architecture policies found Oct 31 00:33:59.226565 kernel: clk: Disabling unused clocks Oct 31 00:33:59.226572 kernel: Freeing unused kernel image (initmem) memory: 42880K Oct 31 00:33:59.226580 kernel: Write protecting the kernel read-only data: 36864k Oct 31 00:33:59.226588 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Oct 31 00:33:59.226595 kernel: Run /init as init process Oct 31 00:33:59.226603 kernel: with arguments: Oct 31 00:33:59.226613 kernel: /init Oct 31 00:33:59.226621 kernel: with environment: Oct 31 00:33:59.226628 kernel: HOME=/ Oct 31 00:33:59.226636 kernel: TERM=linux Oct 31 00:33:59.226646 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 00:33:59.226656 systemd[1]: Detected virtualization kvm. Oct 31 00:33:59.226668 systemd[1]: Detected architecture x86-64. Oct 31 00:33:59.226679 systemd[1]: Running in initrd. Oct 31 00:33:59.226694 systemd[1]: No hostname configured, using default hostname. Oct 31 00:33:59.226702 systemd[1]: Hostname set to . Oct 31 00:33:59.226711 systemd[1]: Initializing machine ID from VM UUID. Oct 31 00:33:59.226719 systemd[1]: Queued start job for default target initrd.target. Oct 31 00:33:59.226727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 00:33:59.226736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 00:33:59.226745 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 00:33:59.226753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 00:33:59.226765 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
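The long systemd feature string above ("+PAM +AUDIT ... -APPARMOR ...") encodes compile-time options: "+" means built in, "-" means left out. Parsing it is occasionally useful when comparing builds (a sketch; it skips the trailing default-hierarchy=unified key/value):

    def parse_features(flags: str) -> dict:
        # '+PAM -ACL ...' -> {'PAM': True, 'ACL': False, ...}
        out = {}
        for tok in flags.split():
            if tok[0] in "+-":
                out[tok[1:]] = tok.startswith("+")
        return out

    feats = parse_features("+PAM +AUDIT +SELINUX -APPARMOR +TPM2 -ACL")
    assert feats["SELINUX"] and not feats["APPARMOR"]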
Oct 31 00:33:59.226788 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 00:33:59.226801 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 31 00:33:59.226809 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 31 00:33:59.226821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 00:33:59.226829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 00:33:59.226838 systemd[1]: Reached target paths.target - Path Units. Oct 31 00:33:59.226846 systemd[1]: Reached target slices.target - Slice Units. Oct 31 00:33:59.226855 systemd[1]: Reached target swap.target - Swaps. Oct 31 00:33:59.226863 systemd[1]: Reached target timers.target - Timer Units. Oct 31 00:33:59.226871 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 00:33:59.226880 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 00:33:59.226889 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 00:33:59.226900 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 31 00:33:59.226908 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 00:33:59.226917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 00:33:59.226925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 00:33:59.226933 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 00:33:59.226942 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 00:33:59.226950 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 00:33:59.226959 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 00:33:59.226970 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 00:33:59.226981 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 00:33:59.226990 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 00:33:59.226998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:33:59.227007 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 00:33:59.227015 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 00:33:59.227024 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 00:33:59.227060 systemd-journald[193]: Collecting audit messages is disabled. Oct 31 00:33:59.227080 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 00:33:59.227092 systemd-journald[193]: Journal started Oct 31 00:33:59.227110 systemd-journald[193]: Runtime Journal (/run/log/journal/167989bc901c4616a989cec2fe941816) is 6.0M, max 48.4M, 42.3M free. Oct 31 00:33:59.223508 systemd-modules-load[194]: Inserted module 'overlay' Oct 31 00:33:59.296638 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 00:33:59.296675 kernel: Bridge firewalling registered Oct 31 00:33:59.254029 systemd-modules-load[194]: Inserted module 'br_netfilter' Oct 31 00:33:59.305868 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 31 00:33:59.306570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 00:33:59.310938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:33:59.314991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 00:33:59.330667 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:33:59.335812 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 00:33:59.340028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 00:33:59.345748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 00:33:59.352777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 00:33:59.355489 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:33:59.367559 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 00:33:59.371738 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 00:33:59.386386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 00:33:59.391531 dracut-cmdline[229]: dracut-dracut-053 Oct 31 00:33:59.391531 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:33:59.397658 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 00:33:59.442145 systemd-resolved[242]: Positive Trust Anchors: Oct 31 00:33:59.442169 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 00:33:59.442210 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 00:33:59.445136 systemd-resolved[242]: Defaulting to hostname 'linux'. Oct 31 00:33:59.446623 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 00:33:59.446897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 00:33:59.871543 kernel: SCSI subsystem initialized Oct 31 00:33:59.884491 kernel: Loading iSCSI transport class v2.0-870. Oct 31 00:33:59.896494 kernel: iscsi: registered transport (tcp) Oct 31 00:33:59.928987 kernel: iscsi: registered transport (qla4xxx) Oct 31 00:33:59.929034 kernel: QLogic iSCSI HBA Driver Oct 31 00:34:00.000148 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 00:34:00.017604 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Oct 31 00:34:00.056399 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 00:34:00.056480 kernel: device-mapper: uevent: version 1.0.3 Oct 31 00:34:00.058496 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 31 00:34:00.106500 kernel: raid6: avx2x4 gen() 19562 MB/s Oct 31 00:34:00.125479 kernel: raid6: avx2x2 gen() 28193 MB/s Oct 31 00:34:00.144140 kernel: raid6: avx2x1 gen() 25716 MB/s Oct 31 00:34:00.144191 kernel: raid6: using algorithm avx2x2 gen() 28193 MB/s Oct 31 00:34:00.162254 kernel: raid6: .... xor() 19039 MB/s, rmw enabled Oct 31 00:34:00.162306 kernel: raid6: using avx2x2 recovery algorithm Oct 31 00:34:00.209504 kernel: xor: automatically using best checksumming function avx Oct 31 00:34:00.370508 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 00:34:00.385650 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 00:34:00.429604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 00:34:00.443804 systemd-udevd[416]: Using default interface naming scheme 'v255'. Oct 31 00:34:00.448596 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 00:34:00.451331 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 00:34:00.473647 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Oct 31 00:34:00.519204 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 00:34:00.540676 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 00:34:00.620211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 00:34:00.643713 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 00:34:00.663535 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 00:34:00.665052 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 00:34:00.671050 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 31 00:34:00.672700 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 00:34:00.678908 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 00:34:00.713015 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 31 00:34:00.701604 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 00:34:00.718070 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 00:34:00.718126 kernel: GPT:9289727 != 19775487 Oct 31 00:34:00.718153 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 00:34:00.723984 kernel: GPT:9289727 != 19775487 Oct 31 00:34:00.724027 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 00:34:00.724054 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:34:00.726487 kernel: AVX2 version of gcm_enc/dec engaged. Oct 31 00:34:00.733439 kernel: AES CTR mode by8 optimization enabled Oct 31 00:34:00.735479 kernel: libata version 3.00 loaded. Oct 31 00:34:00.737727 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
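The raid6 lines above are a micro-benchmark: the kernel times each gen() implementation and keeps the fastest, which is why avx2x2 wins here even though avx2x4 is available. The selection in miniature, with the throughputs from this boot:

    results = {"avx2x4": 19562, "avx2x2": 28193, "avx2x1": 25716}  # MB/s
    best = max(results, key=results.get)
    assert best == "avx2x2"  # 'raid6: using algorithm avx2x2 gen() 28193 MB/s'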
Oct 31 00:34:00.758660 kernel: ahci 0000:00:1f.2: version 3.0 Oct 31 00:34:00.758878 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471) Oct 31 00:34:00.758899 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 31 00:34:00.762288 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 00:34:00.770932 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 31 00:34:00.771144 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 31 00:34:00.771301 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (477) Oct 31 00:34:00.774931 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 31 00:34:00.777485 kernel: scsi host0: ahci Oct 31 00:34:00.779486 kernel: scsi host1: ahci Oct 31 00:34:00.779733 kernel: scsi host2: ahci Oct 31 00:34:00.781469 kernel: scsi host3: ahci Oct 31 00:34:00.783485 kernel: scsi host4: ahci Oct 31 00:34:00.786110 kernel: scsi host5: ahci Oct 31 00:34:00.786349 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Oct 31 00:34:00.786366 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Oct 31 00:34:00.787995 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Oct 31 00:34:00.789812 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 31 00:34:00.796991 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Oct 31 00:34:00.797010 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Oct 31 00:34:00.797021 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Oct 31 00:34:00.808385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 00:34:00.815970 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 31 00:34:00.820135 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 31 00:34:00.836597 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 00:34:00.861798 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 00:34:00.861867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 00:34:00.867787 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:34:00.871817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 00:34:00.879386 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:34:00.895051 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:34:00.905594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 00:34:01.000800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 00:34:01.014598 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 00:34:01.060252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
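The GPT complaints above ("9289727 != 19775487") mean the backup GPT header sits at LBA 9289727 while the disk actually ends at LBA 19775487, i.e. the image was built for a smaller disk than it now occupies; the disk-uuid.service just started rewrites the headers shortly after. The reported capacity itself is simple arithmetic:

    blocks, block_size = 19_775_488, 512
    size = blocks * block_size
    print(f"{size / 1e9:.1f} GB / {size / 2**30:.2f} GiB")
    # -> '10.1 GB / 9.43 GiB', exactly as virtio_blk prints for [vda]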
Oct 31 00:34:01.106781 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 31 00:34:01.106858 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 31 00:34:01.106869 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 31 00:34:01.107483 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 31 00:34:01.108482 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 31 00:34:01.109482 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 31 00:34:01.111029 kernel: ata3.00: applying bridge limits Oct 31 00:34:01.112046 kernel: ata3.00: configured for UDMA/100 Oct 31 00:34:01.112469 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 31 00:34:01.116919 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 31 00:34:01.125935 disk-uuid[562]: Primary Header is updated. Oct 31 00:34:01.125935 disk-uuid[562]: Secondary Entries is updated. Oct 31 00:34:01.125935 disk-uuid[562]: Secondary Header is updated. Oct 31 00:34:01.131443 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:34:01.136504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:34:01.179060 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 31 00:34:01.179581 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 00:34:01.201517 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 00:34:02.146392 disk-uuid[577]: The operation has completed successfully. Oct 31 00:34:02.148580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 00:34:02.174157 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 00:34:02.174288 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 00:34:02.204605 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 31 00:34:02.212562 sh[593]: Success Oct 31 00:34:02.246472 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 31 00:34:02.284421 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 31 00:34:02.303375 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 31 00:34:02.308571 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 31 00:34:02.329763 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 Oct 31 00:34:02.329831 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:34:02.329849 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 31 00:34:02.333129 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 00:34:02.333153 kernel: BTRFS info (device dm-0): using free space tree Oct 31 00:34:02.339954 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 31 00:34:02.340272 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 31 00:34:02.352613 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 00:34:02.355801 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
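verity-setup.service above wires /dev/mapper/usr to the verity.usrhash pinned on the kernel command line: every read of the /usr partition is checked against a sha256 Merkle tree (accelerated here by sha256-ni). The userspace equivalent of the check is 'veritysetup verify <data-dev> <hash-dev> <roothash>'. Below is a toy sketch of the tree shape only; real dm-verity also has a superblock, per-block salt, and a fixed on-disk layout:

    import hashlib

    def verity_root(blocks: list[bytes]) -> str:
        """Hash each 4 KiB data block, pack the digests into 4 KiB
        hash blocks, hash those, and repeat until one root digest
        remains: the value a verity.usrhash-style parameter pins."""
        level = [hashlib.sha256(b).digest() for b in blocks]
        while len(level) > 1:
            packed = b"".join(level)
            level = [hashlib.sha256(packed[i:i + 4096]).digest()
                     for i in range(0, len(packed), 4096)]
        return level[0].hex()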
Oct 31 00:34:02.383845 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:34:02.383886 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:34:02.383905 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:34:02.388484 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 00:34:02.398024 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 00:34:02.401439 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:34:02.489607 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 00:34:02.503609 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 00:34:02.527128 systemd-networkd[771]: lo: Link UP Oct 31 00:34:02.527141 systemd-networkd[771]: lo: Gained carrier Oct 31 00:34:02.528895 systemd-networkd[771]: Enumeration completed Oct 31 00:34:02.528996 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 00:34:02.529358 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 00:34:02.529362 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 00:34:02.532598 systemd-networkd[771]: eth0: Link UP Oct 31 00:34:02.532603 systemd-networkd[771]: eth0: Gained carrier Oct 31 00:34:02.532611 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 00:34:02.557552 systemd[1]: Reached target network.target - Network. Oct 31 00:34:02.563624 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 00:34:02.580466 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 00:34:02.594736 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 31 00:34:02.758140 ignition[776]: Ignition 2.19.0 Oct 31 00:34:02.758151 ignition[776]: Stage: fetch-offline Oct 31 00:34:02.758211 ignition[776]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:34:02.758223 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:34:02.758343 ignition[776]: parsed url from cmdline: "" Oct 31 00:34:02.758348 ignition[776]: no config URL provided Oct 31 00:34:02.758354 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 00:34:02.758365 ignition[776]: no config at "/usr/lib/ignition/user.ign" Oct 31 00:34:02.758402 ignition[776]: op(1): [started] loading QEMU firmware config module Oct 31 00:34:02.758409 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 00:34:02.767591 ignition[776]: op(1): [finished] loading QEMU firmware config module Oct 31 00:34:02.861373 ignition[776]: parsing config with SHA512: c453c1cd57fe2e4f65a8c934319cf0b89f0ac81249cfae3661a82bffa1ed120c56d9097af0cfdab53f49c94e8bdd41efa16911d1f2fd795b88e41f862a2f102b Oct 31 00:34:02.866660 unknown[776]: fetched base config from "system" Oct 31 00:34:02.866676 unknown[776]: fetched user config from "qemu" Oct 31 00:34:02.882580 ignition[776]: fetch-offline: fetch-offline passed Oct 31 00:34:02.882733 ignition[776]: Ignition finished successfully Oct 31 00:34:02.885829 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
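Ignition logs the SHA512 of whatever config it ended up with (the "parsing config with SHA512: c453..." line above), which makes it easy to correlate a boot log with a config file on disk:

    import hashlib

    def config_digest(path: str) -> str:
        # Matches the digest Ignition prints for the parsed config.
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()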
Oct 31 00:34:02.890666 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 00:34:02.909760 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 00:34:02.950384 ignition[785]: Ignition 2.19.0 Oct 31 00:34:02.950398 ignition[785]: Stage: kargs Oct 31 00:34:02.950673 ignition[785]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:34:02.950691 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:34:02.951582 ignition[785]: kargs: kargs passed Oct 31 00:34:02.951638 ignition[785]: Ignition finished successfully Oct 31 00:34:02.970176 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 00:34:02.982620 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 00:34:03.021262 ignition[793]: Ignition 2.19.0 Oct 31 00:34:03.021276 ignition[793]: Stage: disks Oct 31 00:34:03.021446 ignition[793]: no configs at "/usr/lib/ignition/base.d" Oct 31 00:34:03.021474 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:34:03.059841 ignition[793]: disks: disks passed Oct 31 00:34:03.059941 ignition[793]: Ignition finished successfully Oct 31 00:34:03.064674 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 00:34:03.065060 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 00:34:03.070147 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 00:34:03.074084 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 00:34:03.077882 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 00:34:03.081297 systemd[1]: Reached target basic.target - Basic System. Oct 31 00:34:03.106802 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 00:34:03.128823 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 31 00:34:03.137637 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 00:34:03.151619 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 31 00:34:03.284484 kernel: EXT4-fs (vda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none. Oct 31 00:34:03.284937 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 00:34:03.286972 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 00:34:03.310566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 00:34:03.313901 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 00:34:03.314233 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 00:34:03.314275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 00:34:03.314298 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 00:34:03.326556 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 00:34:03.333000 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 31 00:34:03.346475 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811) Oct 31 00:34:03.346542 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:34:03.346558 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:34:03.349691 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:34:03.354160 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 00:34:03.355799 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 00:34:03.377865 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 00:34:03.382797 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Oct 31 00:34:03.388405 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 00:34:03.393868 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 00:34:03.502577 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 00:34:03.512568 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 00:34:03.516704 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 00:34:03.528125 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 00:34:03.530703 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:34:03.546353 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 00:34:03.570141 ignition[926]: INFO : Ignition 2.19.0 Oct 31 00:34:03.570141 ignition[926]: INFO : Stage: mount Oct 31 00:34:03.573486 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 00:34:03.573486 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 00:34:03.573486 ignition[926]: INFO : mount: mount passed Oct 31 00:34:03.573486 ignition[926]: INFO : Ignition finished successfully Oct 31 00:34:03.574224 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 00:34:03.586492 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 00:34:03.593880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 00:34:03.622484 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Oct 31 00:34:03.625631 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 00:34:03.625660 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 00:34:03.625674 kernel: BTRFS info (device vda6): using free space tree Oct 31 00:34:03.630475 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 00:34:03.632799 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 31 00:34:03.666978 ignition[957]: INFO : Ignition 2.19.0
Oct 31 00:34:03.666978 ignition[957]: INFO : Stage: files
Oct 31 00:34:03.670567 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:34:03.670567 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:34:03.670567 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 00:34:03.670567 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 00:34:03.670567 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 00:34:03.682706 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 00:34:03.682706 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 00:34:03.682706 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 00:34:03.682706 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 31 00:34:03.682706 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 31 00:34:03.674185 unknown[957]: wrote ssh authorized keys file for user: core
Oct 31 00:34:03.679608 systemd-networkd[771]: eth0: Gained IPv6LL
Oct 31 00:34:03.740143 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 31 00:34:03.930227 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 31 00:34:03.930227 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 00:34:03.936893 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 31 00:34:04.242566 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 31 00:34:05.138479 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 00:34:05.138479 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 31 00:34:05.145140 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:34:05.145140 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:34:05.145140 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 31 00:34:05.145140 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 31 00:34:05.145140 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:34:05.145140 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:34:05.145140 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 31 00:34:05.145140 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:34:05.190747 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:34:05.200000 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:34:05.202843 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:34:05.202843 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 00:34:05.202843 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 00:34:05.202843 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:34:05.202843 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:34:05.202843 ignition[957]: INFO : files: files passed
Oct 31 00:34:05.202843 ignition[957]: INFO : Ignition finished successfully
Oct 31 00:34:05.204110 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 31 00:34:05.218775 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 31 00:34:05.222266 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 31 00:34:05.225557 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 00:34:05.225690 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 31 00:34:05.237297 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 31 00:34:05.243922 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:34:05.249237 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:34:05.246710 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:34:05.256530 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:34:05.249975 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 31 00:34:05.261710 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 31 00:34:05.296432 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 00:34:05.296608 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 31 00:34:05.300657 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 31 00:34:05.304346 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 31 00:34:05.304577 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 31 00:34:05.305581 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 31 00:34:05.329690 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:34:05.342722 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 31 00:34:05.355771 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:34:05.358019 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:34:05.362109 systemd[1]: Stopped target timers.target - Timer Units.
Oct 31 00:34:05.365820 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 00:34:05.366017 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:34:05.369893 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 31 00:34:05.372676 systemd[1]: Stopped target basic.target - Basic System.
Oct 31 00:34:05.376371 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 31 00:34:05.379924 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:34:05.383297 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 31 00:34:05.386886 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 31 00:34:05.390428 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:34:05.394273 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 31 00:34:05.397626 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 31 00:34:05.401324 systemd[1]: Stopped target swap.target - Swaps.
Oct 31 00:34:05.404262 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 00:34:05.404484 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:34:05.408280 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:34:05.410676 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:34:05.414134 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 31 00:34:05.414303 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:34:05.417833 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 00:34:05.418016 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:34:05.421918 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 00:34:05.422089 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:34:05.425182 systemd[1]: Stopped target paths.target - Path Units.
Oct 31 00:34:05.428100 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 00:34:05.431541 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:34:05.434071 systemd[1]: Stopped target slices.target - Slice Units.
Oct 31 00:34:05.437236 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 31 00:34:05.440446 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 00:34:05.440611 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:34:05.443923 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 00:34:05.444058 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:34:05.447055 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 00:34:05.447274 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:34:05.451771 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 00:34:05.451926 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 31 00:34:05.473810 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 31 00:34:05.478581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 31 00:34:05.480301 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 00:34:05.480497 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:34:05.484559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 00:34:05.484838 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:34:05.494878 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 00:34:05.495058 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 31 00:34:05.502715 ignition[1010]: INFO : Ignition 2.19.0
Oct 31 00:34:05.502715 ignition[1010]: INFO : Stage: umount
Oct 31 00:34:05.502715 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:34:05.502715 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:34:05.502715 ignition[1010]: INFO : umount: umount passed
Oct 31 00:34:05.502715 ignition[1010]: INFO : Ignition finished successfully
Oct 31 00:34:05.500421 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 00:34:05.502776 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 31 00:34:05.518518 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 00:34:05.521965 systemd[1]: Stopped target network.target - Network.
Oct 31 00:34:05.525075 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 00:34:05.525202 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 31 00:34:05.528506 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 00:34:05.528573 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 31 00:34:05.532107 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 00:34:05.532184 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 31 00:34:05.535348 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 31 00:34:05.535409 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 31 00:34:05.539353 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 31 00:34:05.542640 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 31 00:34:05.550528 systemd-networkd[771]: eth0: DHCPv6 lease lost
Oct 31 00:34:05.552614 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 00:34:05.552757 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 31 00:34:05.557085 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 00:34:05.557313 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 31 00:34:05.562181 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 00:34:05.562275 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:34:05.577715 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 31 00:34:05.580914 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 00:34:05.581006 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:34:05.603198 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 00:34:05.603295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:34:05.605963 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 00:34:05.606020 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:34:05.610122 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 31 00:34:05.610178 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:34:05.611889 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:34:05.626594 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 00:34:05.626744 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 31 00:34:05.637374 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 00:34:05.637594 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:34:05.641263 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 00:34:05.641325 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:34:05.645021 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 00:34:05.645069 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:34:05.648631 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 00:34:05.648705 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:34:05.653009 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 00:34:05.653096 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:34:05.655749 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:34:05.655811 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:34:05.673672 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 31 00:34:05.675566 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 00:34:05.675655 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:34:05.682802 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 31 00:34:05.682863 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:34:05.686758 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 00:34:05.686823 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:34:05.690995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:34:05.691058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:34:05.695207 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 00:34:05.695324 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 31 00:34:06.088720 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 00:34:06.088886 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 31 00:34:06.092210 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 31 00:34:06.095298 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 00:34:06.095358 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 31 00:34:06.108661 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 31 00:34:06.119564 systemd[1]: Switching root.
Oct 31 00:34:06.169951 systemd-journald[193]: Journal stopped
Oct 31 00:34:08.152908 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 31 00:34:08.152996 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 00:34:08.153011 kernel: SELinux: policy capability open_perms=1
Oct 31 00:34:08.153022 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 00:34:08.153037 kernel: SELinux: policy capability always_check_network=0
Oct 31 00:34:08.153049 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 00:34:08.153060 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 00:34:08.153072 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 00:34:08.153083 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 00:34:08.153101 kernel: audit: type=1403 audit(1761870847.124:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 31 00:34:08.153113 systemd[1]: Successfully loaded SELinux policy in 47.604ms.
Oct 31 00:34:08.153139 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.667ms.
Oct 31 00:34:08.153155 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 31 00:34:08.153168 systemd[1]: Detected virtualization kvm.
Oct 31 00:34:08.153180 systemd[1]: Detected architecture x86-64.
Oct 31 00:34:08.153192 systemd[1]: Detected first boot.
Oct 31 00:34:08.153204 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:34:08.153218 zram_generator::config[1056]: No configuration found.
Oct 31 00:34:08.153231 systemd[1]: Populated /etc with preset unit settings.
Oct 31 00:34:08.153243 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 31 00:34:08.153255 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 31 00:34:08.153271 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:34:08.153284 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 31 00:34:08.153296 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 31 00:34:08.153308 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 31 00:34:08.153320 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 31 00:34:08.153332 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 31 00:34:08.153345 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 31 00:34:08.153357 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 31 00:34:08.153372 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 31 00:34:08.153384 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:34:08.153396 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:34:08.153408 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 31 00:34:08.153420 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 31 00:34:08.153433 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 31 00:34:08.153445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 00:34:08.153471 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 31 00:34:08.153483 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:34:08.153499 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 31 00:34:08.153511 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 31 00:34:08.153524 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:34:08.153536 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 31 00:34:08.153547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:34:08.153559 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 00:34:08.153572 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 00:34:08.153584 systemd[1]: Reached target swap.target - Swaps.
Oct 31 00:34:08.153598 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 31 00:34:08.153610 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 31 00:34:08.153622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:34:08.153635 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:34:08.153647 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:34:08.153660 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 31 00:34:08.153676 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 31 00:34:08.153688 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 31 00:34:08.153700 systemd[1]: Mounting media.mount - External Media Directory...
Oct 31 00:34:08.153715 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:34:08.153727 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 31 00:34:08.153740 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 31 00:34:08.153751 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 31 00:34:08.153765 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 00:34:08.153777 systemd[1]: Reached target machines.target - Containers.
Oct 31 00:34:08.153789 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 31 00:34:08.153801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:34:08.153816 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 00:34:08.153828 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 31 00:34:08.153840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:34:08.153852 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:34:08.153864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:34:08.153883 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 31 00:34:08.153895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:34:08.153907 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 00:34:08.153919 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 31 00:34:08.153935 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 31 00:34:08.153953 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 31 00:34:08.153965 kernel: fuse: init (API version 7.39)
Oct 31 00:34:08.153976 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 31 00:34:08.153989 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 00:34:08.154001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 00:34:08.154013 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 00:34:08.154025 kernel: loop: module loaded
Oct 31 00:34:08.154038 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 31 00:34:08.154053 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:34:08.154065 kernel: ACPI: bus type drm_connector registered
Oct 31 00:34:08.154096 systemd-journald[1137]: Collecting audit messages is disabled.
Oct 31 00:34:08.154119 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 31 00:34:08.154134 systemd[1]: Stopped verity-setup.service.
Oct 31 00:34:08.154146 systemd-journald[1137]: Journal started
Oct 31 00:34:08.154167 systemd-journald[1137]: Runtime Journal (/run/log/journal/167989bc901c4616a989cec2fe941816) is 6.0M, max 48.4M, 42.3M free.
Oct 31 00:34:07.845783 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 00:34:07.871828 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 31 00:34:07.872422 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 31 00:34:08.159468 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:34:08.163823 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 00:34:08.164794 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 31 00:34:08.166774 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 31 00:34:08.168794 systemd[1]: Mounted media.mount - External Media Directory.
Oct 31 00:34:08.170667 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 31 00:34:08.172691 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 31 00:34:08.174736 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 31 00:34:08.176715 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 31 00:34:08.179078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:34:08.181600 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 00:34:08.181787 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 31 00:34:08.184171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:34:08.184355 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:34:08.186763 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:34:08.186958 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:34:08.189096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:34:08.189273 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:34:08.191764 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 00:34:08.191951 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 31 00:34:08.194225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:34:08.194402 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:34:08.196747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:34:08.198970 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 00:34:08.201516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 31 00:34:08.216952 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 00:34:08.226556 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 31 00:34:08.229991 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 31 00:34:08.231955 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 00:34:08.231990 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:34:08.234817 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 31 00:34:08.238115 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 31 00:34:08.241248 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 31 00:34:08.243128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:34:08.246859 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 31 00:34:08.251624 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 31 00:34:08.254465 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:34:08.255705 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 31 00:34:08.258717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:34:08.260217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 00:34:08.266411 systemd-journald[1137]: Time spent on flushing to /var/log/journal/167989bc901c4616a989cec2fe941816 is 14.035ms for 948 entries.
Oct 31 00:34:08.266411 systemd-journald[1137]: System Journal (/var/log/journal/167989bc901c4616a989cec2fe941816) is 8.0M, max 195.6M, 187.6M free.
Oct 31 00:34:08.292819 systemd-journald[1137]: Received client request to flush runtime journal.
Oct 31 00:34:08.266134 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 31 00:34:08.274655 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 00:34:08.279933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:34:08.283037 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 31 00:34:08.285888 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 31 00:34:08.289513 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 31 00:34:08.295180 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 31 00:34:08.301022 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 31 00:34:08.304547 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:34:08.305478 kernel: loop0: detected capacity change from 0 to 140768
Oct 31 00:34:08.319812 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 31 00:34:08.330975 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 31 00:34:08.332599 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 00:34:08.338780 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 31 00:34:08.340742 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Oct 31 00:34:08.340761 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Oct 31 00:34:08.348990 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:34:08.354548 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 31 00:34:08.380946 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 31 00:34:08.386119 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 00:34:08.387183 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 31 00:34:08.393479 kernel: loop1: detected capacity change from 0 to 224512
Oct 31 00:34:08.423260 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 31 00:34:08.432501 kernel: loop2: detected capacity change from 0 to 142488
Oct 31 00:34:08.433747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 00:34:08.478560 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Oct 31 00:34:08.478581 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Oct 31 00:34:08.485493 kernel: loop3: detected capacity change from 0 to 140768
Oct 31 00:34:08.486906 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:34:08.499478 kernel: loop4: detected capacity change from 0 to 224512
Oct 31 00:34:08.510472 kernel: loop5: detected capacity change from 0 to 142488
Oct 31 00:34:08.523085 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 31 00:34:08.524685 (sd-merge)[1196]: Merged extensions into '/usr'.
Oct 31 00:34:08.529707 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 31 00:34:08.529907 systemd[1]: Reloading...
Oct 31 00:34:08.609969 zram_generator::config[1222]: No configuration found.
Oct 31 00:34:08.707444 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 00:34:08.773411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:34:08.822952 systemd[1]: Reloading finished in 291 ms.
Oct 31 00:34:08.868223 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 31 00:34:08.870922 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 31 00:34:08.884837 systemd[1]: Starting ensure-sysext.service...
Oct 31 00:34:08.887872 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 00:34:08.921589 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Oct 31 00:34:08.921781 systemd[1]: Reloading...
Oct 31 00:34:08.957801 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 00:34:08.958192 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 31 00:34:08.959218 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 00:34:08.961593 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Oct 31 00:34:08.961684 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Oct 31 00:34:08.965619 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:34:08.965637 systemd-tmpfiles[1262]: Skipping /boot
Oct 31 00:34:08.992276 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:34:08.992426 systemd-tmpfiles[1262]: Skipping /boot
Oct 31 00:34:08.996143 zram_generator::config[1288]: No configuration found.
Oct 31 00:34:09.132092 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:34:09.186065 systemd[1]: Reloading finished in 263 ms.
Oct 31 00:34:09.209286 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 31 00:34:09.227349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:34:09.239197 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 00:34:09.243074 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 31 00:34:09.246442 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 31 00:34:09.251164 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 00:34:09.257767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:34:09.261679 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 31 00:34:09.272201 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 31 00:34:09.275823 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:34:09.276021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:34:09.284759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:34:09.290813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:34:09.298528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:34:09.300516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:34:09.300632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:34:09.301615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:34:09.301810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:34:09.304381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:34:09.304600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:34:09.307537 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 31 00:34:09.308669 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Oct 31 00:34:09.313316 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:34:09.313851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:34:09.325976 augenrules[1355]: No rules
Oct 31 00:34:09.328363 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 00:34:09.331430 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 31 00:34:09.335309 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 31 00:34:09.348374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:34:09.352110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:34:09.352312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:34:09.360851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:34:09.366189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:34:09.378678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:34:09.380595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:34:09.383240 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:34:09.396002 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 31 00:34:09.398244 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:34:09.400694 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 31 00:34:09.404228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:34:09.404440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:34:09.407326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:34:09.407891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:34:09.411223 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:34:09.411414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:34:09.431861 systemd[1]: Finished ensure-sysext.service.
Oct 31 00:34:09.434422 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 31 00:34:09.438658 systemd-resolved[1332]: Positive Trust Anchors:
Oct 31 00:34:09.438684 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:34:09.438748 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 00:34:09.446101 systemd-resolved[1332]: Defaulting to hostname 'linux'.
Oct 31 00:34:09.453806 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 00:34:09.462824 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:34:09.464890 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:34:09.465059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:34:09.475778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:34:09.480727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:34:09.483742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:34:09.486960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:34:09.488961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:34:09.492062 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 31 00:34:09.493968 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 00:34:09.494001 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:34:09.494288 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 31 00:34:09.498276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:34:09.498496 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:34:09.501051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:34:09.501266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:34:09.503895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:34:09.504101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:34:09.507905 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:34:09.508023 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:34:09.518481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1375)
Oct 31 00:34:09.520603 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:34:09.520880 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:34:09.556865 systemd-networkd[1387]: lo: Link UP
Oct 31 00:34:09.556880 systemd-networkd[1387]: lo: Gained carrier
Oct 31 00:34:09.558733 systemd-networkd[1387]: Enumeration completed
Oct 31 00:34:09.559302 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:34:09.559308 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 00:34:09.559544 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 00:34:09.560394 systemd-networkd[1387]: eth0: Link UP
Oct 31 00:34:09.560405 systemd-networkd[1387]: eth0: Gained carrier
Oct 31 00:34:09.560418 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:34:09.561780 systemd[1]: Reached target network.target - Network.
Oct 31 00:34:09.569664 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 31 00:34:09.570464 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 31 00:34:09.578587 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 00:34:09.582470 kernel: ACPI: button: Power Button [PWRF]
Oct 31 00:34:09.602628 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 31 00:34:09.603072 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 31 00:34:09.603346 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 31 00:34:09.608523 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 31 00:34:09.632306 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 31 00:34:10.145310 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 31 00:34:10.145360 systemd-timesyncd[1407]: Initial clock synchronization to Fri 2025-10-31 00:34:10.145207 UTC.
Oct 31 00:34:10.146003 systemd-resolved[1332]: Clock change detected. Flushing caches.
Oct 31 00:34:10.147244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 00:34:10.150166 systemd[1]: Reached target time-set.target - System Time Set.
Oct 31 00:34:10.159855 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 31 00:34:10.173234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:34:10.176171 kernel: mousedev: PS/2 mouse device common for all mice
Oct 31 00:34:10.236201 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 31 00:34:10.306042 kernel: kvm_amd: TSC scaling supported
Oct 31 00:34:10.306138 kernel: kvm_amd: Nested Virtualization enabled
Oct 31 00:34:10.306158 kernel: kvm_amd: Nested Paging enabled
Oct 31 00:34:10.307435 kernel: kvm_amd: LBR virtualization supported
Oct 31 00:34:10.307469 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 31 00:34:10.308505 kernel: kvm_amd: Virtual GIF supported
Oct 31 00:34:10.333636 kernel: EDAC MC: Ver: 3.0.0
Oct 31 00:34:10.365502 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 31 00:34:10.422853 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 31 00:34:10.425252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:34:10.438722 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:34:10.544423 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 31 00:34:10.547069 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:34:10.548950 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 00:34:10.550952 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 31 00:34:10.553046 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 31 00:34:10.555470 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 31 00:34:10.557412 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 31 00:34:10.559476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 31 00:34:10.561521 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 00:34:10.561559 systemd[1]: Reached target paths.target - Path Units.
Oct 31 00:34:10.563073 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 00:34:10.565548 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 31 00:34:10.569178 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 31 00:34:10.582128 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 31 00:34:10.586494 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 31 00:34:10.589337 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 31 00:34:10.591370 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 00:34:10.593043 systemd[1]: Reached target basic.target - Basic System.
Oct 31 00:34:10.594765 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 31 00:34:10.594826 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 31 00:34:10.602744 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 31 00:34:10.607225 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 31 00:34:10.610647 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 31 00:34:10.614630 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:34:10.615838 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 31 00:34:10.617619 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 31 00:34:10.621707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 31 00:34:10.625233 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 31 00:34:10.627053 jq[1438]: false
Oct 31 00:34:10.637178 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 31 00:34:10.642833 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 31 00:34:10.648776 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 31 00:34:10.653277 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 00:34:10.653876 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 31 00:34:10.656670 dbus-daemon[1437]: [system] SELinux support is enabled
Oct 31 00:34:10.654818 systemd[1]: Starting update-engine.service - Update Engine...
Oct 31 00:34:10.658811 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 31 00:34:10.661695 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 31 00:34:10.667799 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 00:34:10.668124 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 31 00:34:10.668883 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found loop3
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found loop4
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found loop5
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found sr0
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found vda
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found vda1
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found vda2
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found vda3
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found usr
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found vda4
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found vda6
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found vda7
Oct 31 00:34:10.674454 extend-filesystems[1439]: Found vda9
Oct 31 00:34:10.674454 extend-filesystems[1439]: Checking size of /dev/vda9
Oct 31 00:34:10.708390 extend-filesystems[1439]: Resized partition /dev/vda9
Oct 31 00:34:10.680078 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 00:34:10.713949 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024)
Oct 31 00:34:10.680303 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 31 00:34:10.683572 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 00:34:10.717190 tar[1456]: linux-amd64/LICENSE
Oct 31 00:34:10.717190 tar[1456]: linux-amd64/helm
Oct 31 00:34:10.683839 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 31 00:34:10.701129 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 00:34:10.717727 jq[1449]: true
Oct 31 00:34:10.701162 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 31 00:34:10.701458 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 00:34:10.701477 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 31 00:34:10.722549 jq[1469]: true
Oct 31 00:34:10.725292 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 31 00:34:10.725330 update_engine[1448]: I20251031 00:34:10.724918 1448 main.cc:92] Flatcar Update Engine starting
Oct 31 00:34:10.727039 update_engine[1448]: I20251031 00:34:10.727006 1448 update_check_scheduler.cc:74] Next update check in 5m19s
Oct 31 00:34:10.727847 systemd[1]: Started update-engine.service - Update Engine.
Oct 31 00:34:10.733073 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 31 00:34:10.734896 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 31 00:34:10.738483 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1369)
Oct 31 00:34:10.768027 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 31 00:34:10.845265 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 31 00:34:10.847592 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 00:34:10.847592 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 31 00:34:10.847592 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 31 00:34:10.845312 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 31 00:34:10.956712 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Oct 31 00:34:10.848738 systemd-logind[1446]: New seat seat0.
Oct 31 00:34:10.851528 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 00:34:10.851783 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 31 00:34:10.862152 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 31 00:34:10.999791 bash[1492]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 00:34:11.002463 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 31 00:34:11.006246 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 31 00:34:11.013377 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 00:34:11.070349 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 00:34:11.101371 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 31 00:34:11.167326 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 31 00:34:11.177568 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 00:34:11.177888 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 31 00:34:11.188879 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 31 00:34:11.221119 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 31 00:34:11.223761 systemd-networkd[1387]: eth0: Gained IPv6LL
Oct 31 00:34:11.236270 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 31 00:34:11.240049 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 31 00:34:11.242964 systemd[1]: Reached target getty.target - Login Prompts.
Oct 31 00:34:11.246292 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 31 00:34:11.250184 systemd[1]: Reached target network-online.target - Network is Online.
Oct 31 00:34:11.295102 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 31 00:34:11.300305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:34:11.308786 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 31 00:34:11.337831 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 31 00:34:11.338099 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 31 00:34:11.341968 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 31 00:34:11.344232 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
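The extend-filesystems pass above is the standard grow-on-first-boot pattern: the ROOT partition is enlarged, then resize2fs grows the mounted ext4 filesystem on-line (553472 to 1864699 4k blocks), with no unmount needed. A minimal manual equivalent, assuming the same /dev/vda9 root device as in this log:

# Confirm the filesystem and its mount point (expect /dev/vda9, ext4).
findmnt -no SOURCE,FSTYPE /

# Grow the mounted ext4 filesystem to fill the already-resized partition;
# ext4 supports on-line growth, so this works on the live root.
resize2fs /dev/vda9

# Verify the new size.
df -h /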
Oct 31 00:34:11.346318 containerd[1461]: time="2025-10-31T00:34:11.346032597Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 31 00:34:11.384360 containerd[1461]: time="2025-10-31T00:34:11.384261660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:34:11.386911 containerd[1461]: time="2025-10-31T00:34:11.386858841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:34:11.386911 containerd[1461]: time="2025-10-31T00:34:11.386888887Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 31 00:34:11.386911 containerd[1461]: time="2025-10-31T00:34:11.386904927Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 31 00:34:11.387162 containerd[1461]: time="2025-10-31T00:34:11.387137754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 31 00:34:11.387162 containerd[1461]: time="2025-10-31T00:34:11.387161268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 31 00:34:11.387266 containerd[1461]: time="2025-10-31T00:34:11.387242741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:34:11.387266 containerd[1461]: time="2025-10-31T00:34:11.387260444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:34:11.387509 containerd[1461]: time="2025-10-31T00:34:11.387484484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:34:11.387509 containerd[1461]: time="2025-10-31T00:34:11.387504251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 31 00:34:11.387580 containerd[1461]: time="2025-10-31T00:34:11.387518217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:34:11.387580 containerd[1461]: time="2025-10-31T00:34:11.387528206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 31 00:34:11.387710 containerd[1461]: time="2025-10-31T00:34:11.387687875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:34:11.388013 containerd[1461]: time="2025-10-31T00:34:11.387979633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:34:11.388145 containerd[1461]: time="2025-10-31T00:34:11.388121208Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:34:11.388145 containerd[1461]: time="2025-10-31T00:34:11.388140885Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 31 00:34:11.388275 containerd[1461]: time="2025-10-31T00:34:11.388255440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 31 00:34:11.388346 containerd[1461]: time="2025-10-31T00:34:11.388327906Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.395246020Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.395329005Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.395346438Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.395375773Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.395397674Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.395620903Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.395933739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.396094521Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.396116753Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.396131560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.396149895Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.396166035Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.396181474Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 31 00:34:11.397623 containerd[1461]: time="2025-10-31T00:34:11.396199127Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396216970Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396239272Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396257446Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396276452Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396310917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396333178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396351954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396373254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396390656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396408209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396423969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396440981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396458123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398025 containerd[1461]: time="2025-10-31T00:34:11.396481166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396499049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396515681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396537522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396558952Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396586534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396644352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396664760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396752956Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396784695Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396802038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396819460Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396834198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396852112Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 31 00:34:11.398378 containerd[1461]: time="2025-10-31T00:34:11.396867100Z" level=info msg="NRI interface is disabled by configuration."
Oct 31 00:34:11.398881 containerd[1461]: time="2025-10-31T00:34:11.396882188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 31 00:34:11.398911 containerd[1461]: time="2025-10-31T00:34:11.397250208Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 31 00:34:11.398911 containerd[1461]: time="2025-10-31T00:34:11.397333595Z" level=info msg="Connect containerd service"
Oct 31 00:34:11.398911 containerd[1461]: time="2025-10-31T00:34:11.397397735Z" level=info msg="using legacy CRI server"
Oct 31 00:34:11.398911 containerd[1461]: time="2025-10-31T00:34:11.397410068Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 31 00:34:11.398911 containerd[1461]: time="2025-10-31T00:34:11.397520886Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 31 00:34:11.400401 containerd[1461]: time="2025-10-31T00:34:11.400327058Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 31 00:34:11.400911 containerd[1461]: time="2025-10-31T00:34:11.400779827Z" level=info msg="Start subscribing containerd event"
Oct 31 00:34:11.400970 containerd[1461]: time="2025-10-31T00:34:11.400941841Z" level=info msg="Start recovering state"
Oct 31 00:34:11.401033 containerd[1461]: time="2025-10-31T00:34:11.400870027Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 31 00:34:11.401171 containerd[1461]: time="2025-10-31T00:34:11.401151014Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 31 00:34:11.406788 containerd[1461]: time="2025-10-31T00:34:11.406752629Z" level=info msg="Start event monitor"
Oct 31 00:34:11.406844 containerd[1461]: time="2025-10-31T00:34:11.406808233Z" level=info msg="Start snapshots syncer"
Oct 31 00:34:11.406844 containerd[1461]: time="2025-10-31T00:34:11.406828711Z" level=info msg="Start cni network conf syncer for default"
Oct 31 00:34:11.406977 containerd[1461]: time="2025-10-31T00:34:11.406846915Z" level=info msg="Start streaming server"
Oct 31 00:34:11.434203 systemd[1]: Started containerd.service - containerd container runtime.
Oct 31 00:34:11.436175 containerd[1461]: time="2025-10-31T00:34:11.434418542Z" level=info msg="containerd successfully booted in 0.093931s"
Oct 31 00:34:11.590572 tar[1456]: linux-amd64/README.md
Oct 31 00:34:11.607901 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 31 00:34:12.675998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:34:12.679051 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 31 00:34:12.681372 systemd[1]: Startup finished in 1.215s (kernel) + 8.155s (initrd) + 5.091s (userspace) = 14.462s.
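containerd comes up healthy above, but the CRI plugin logs a CNI error because /etc/cni/net.d is empty; it keeps retrying until a network config appears, which is normally installed later by the cluster's CNI add-on. The checks below are a sketch against the socket path printed in the log; the bridge config is a hypothetical placeholder, not what this cluster will eventually install:

# Confirm containerd answers on its socket and list loaded plugins.
ctr --address /run/containerd/containerd.sock version
ctr --address /run/containerd/containerd.sock plugins ls

# A minimal CNI config of this shape would silence the warning
# (file name and subnet are placeholder values):
cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF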
Oct 31 00:34:12.681958 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 00:34:13.353912 kubelet[1550]: E1031 00:34:13.353735 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:34:13.358216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:34:13.358445 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:34:13.358806 systemd[1]: kubelet.service: Consumed 1.826s CPU time.
Oct 31 00:34:14.019118 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 31 00:34:14.020549 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:38214.service - OpenSSH per-connection server daemon (10.0.0.1:38214).
Oct 31 00:34:14.064878 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 38214 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:34:14.067089 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:34:14.077554 systemd-logind[1446]: New session 1 of user core.
Oct 31 00:34:14.079375 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 31 00:34:14.090036 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 31 00:34:14.105644 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 31 00:34:14.119982 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 31 00:34:14.123354 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:34:14.253397 systemd[1567]: Queued start job for default target default.target.
Oct 31 00:34:14.264710 systemd[1567]: Created slice app.slice - User Application Slice.
Oct 31 00:34:14.264754 systemd[1567]: Reached target paths.target - Paths.
Oct 31 00:34:14.264776 systemd[1567]: Reached target timers.target - Timers.
Oct 31 00:34:14.267146 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 31 00:34:14.281379 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 31 00:34:14.281646 systemd[1567]: Reached target sockets.target - Sockets.
Oct 31 00:34:14.281675 systemd[1567]: Reached target basic.target - Basic System.
Oct 31 00:34:14.281757 systemd[1567]: Reached target default.target - Main User Target.
Oct 31 00:34:14.281803 systemd[1567]: Startup finished in 150ms.
Oct 31 00:34:14.282429 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 31 00:34:14.284514 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 31 00:34:14.351311 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:38222.service - OpenSSH per-connection server daemon (10.0.0.1:38222).
Oct 31 00:34:14.392725 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 38222 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:34:14.394583 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:34:14.399468 systemd-logind[1446]: New session 2 of user core.
Oct 31 00:34:14.408783 systemd[1]: Started session-2.scope - Session 2 of User core.
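The kubelet crash above is expected at this stage of the boot: /var/lib/kubelet/config.yaml is normally written by kubeadm during kubeadm init or kubeadm join, which has not run yet, so the unit exits with status 1 and systemd keeps scheduling restarts. For reference only, a hand-written minimal KubeletConfiguration has roughly this shape (illustrative values; on this node the real file comes from kubeadm, and the unit restarts on its own):

mkdir -p /var/lib/kubelet
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Matches the SystemdCgroup:true runc option in the containerd config above.
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
EOF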
Oct 31 00:34:14.489515 sshd[1578]: pam_unix(sshd:session): session closed for user core
Oct 31 00:34:14.506648 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:38222.service: Deactivated successfully.
Oct 31 00:34:14.508436 systemd[1]: session-2.scope: Deactivated successfully.
Oct 31 00:34:14.510114 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit.
Oct 31 00:34:14.522310 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:38238.service - OpenSSH per-connection server daemon (10.0.0.1:38238).
Oct 31 00:34:14.523994 systemd-logind[1446]: Removed session 2.
Oct 31 00:34:14.556526 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 38238 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:34:14.558739 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:34:14.563597 systemd-logind[1446]: New session 3 of user core.
Oct 31 00:34:14.571749 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 31 00:34:14.623693 sshd[1585]: pam_unix(sshd:session): session closed for user core
Oct 31 00:34:14.635996 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:38238.service: Deactivated successfully.
Oct 31 00:34:14.638656 systemd[1]: session-3.scope: Deactivated successfully.
Oct 31 00:34:14.640713 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit.
Oct 31 00:34:14.648108 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:38246.service - OpenSSH per-connection server daemon (10.0.0.1:38246).
Oct 31 00:34:14.649437 systemd-logind[1446]: Removed session 3.
Oct 31 00:34:14.681217 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 38246 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:34:14.683166 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:34:14.687615 systemd-logind[1446]: New session 4 of user core.
Oct 31 00:34:14.694762 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 31 00:34:14.751142 sshd[1592]: pam_unix(sshd:session): session closed for user core
Oct 31 00:34:14.764331 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:38246.service: Deactivated successfully.
Oct 31 00:34:14.766071 systemd[1]: session-4.scope: Deactivated successfully.
Oct 31 00:34:14.767633 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit.
Oct 31 00:34:14.781872 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:38262.service - OpenSSH per-connection server daemon (10.0.0.1:38262).
Oct 31 00:34:14.782755 systemd-logind[1446]: Removed session 4.
Oct 31 00:34:14.809711 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 38262 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:34:14.811258 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:34:14.815704 systemd-logind[1446]: New session 5 of user core.
Oct 31 00:34:14.831716 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 31 00:34:14.890571 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 31 00:34:14.890955 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:34:14.908550 sudo[1602]: pam_unix(sudo:session): session closed for user root
Oct 31 00:34:14.910654 sshd[1599]: pam_unix(sshd:session): session closed for user core
Oct 31 00:34:14.923512 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:38262.service: Deactivated successfully.
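The sudo invocation in session 5 above flips SELinux into enforcing mode. setenforce changes the mode only until reboot; the boot-time default comes from /etc/selinux/config. A quick check sequence, as a sketch:

getenforce            # prints Permissive or Enforcing
sudo setenforce 1     # enforce now (runtime only, as in the session above)
sestatus              # full status, including the loaded policy name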
Oct 31 00:34:14.925362 systemd[1]: session-5.scope: Deactivated successfully.
Oct 31 00:34:14.926683 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit.
Oct 31 00:34:14.937868 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:38270.service - OpenSSH per-connection server daemon (10.0.0.1:38270).
Oct 31 00:34:14.938875 systemd-logind[1446]: Removed session 5.
Oct 31 00:34:14.967029 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 38270 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:34:14.969507 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:34:14.974029 systemd-logind[1446]: New session 6 of user core.
Oct 31 00:34:14.983749 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 31 00:34:15.040789 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 31 00:34:15.041140 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:34:15.045286 sudo[1611]: pam_unix(sudo:session): session closed for user root
Oct 31 00:34:15.056628 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 31 00:34:15.057286 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:34:15.079838 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 31 00:34:15.082155 auditctl[1614]: No rules
Oct 31 00:34:15.083498 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 31 00:34:15.083806 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 31 00:34:15.085640 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 00:34:15.123458 augenrules[1632]: No rules
Oct 31 00:34:15.125627 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 00:34:15.127396 sudo[1610]: pam_unix(sudo:session): session closed for user root
Oct 31 00:34:15.129736 sshd[1607]: pam_unix(sshd:session): session closed for user core
Oct 31 00:34:15.142443 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:38270.service: Deactivated successfully.
Oct 31 00:34:15.144407 systemd[1]: session-6.scope: Deactivated successfully.
Oct 31 00:34:15.146555 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit.
Oct 31 00:34:15.154181 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:38280.service - OpenSSH per-connection server daemon (10.0.0.1:38280).
Oct 31 00:34:15.155707 systemd-logind[1446]: Removed session 6.
Oct 31 00:34:15.185951 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 38280 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:34:15.188296 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:34:15.193400 systemd-logind[1446]: New session 7 of user core.
Oct 31 00:34:15.202932 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 31 00:34:15.260266 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 31 00:34:15.260722 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:34:15.739939 systemd[1]: Starting docker.service - Docker Application Container Engine...
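The audit-rules restart a few entries up (auditctl and augenrules both reporting "No rules") is how the rule set is rebuilt after the two rule files were removed: augenrules concatenates whatever remains under /etc/audit/rules.d/*.rules and loads the result, which here is empty. A manual sketch of the same cycle:

sudo auditctl -l          # list currently loaded rules ("No rules" here)
sudo augenrules --load    # recompile /etc/audit/rules.d/*.rules and load them
sudo auditctl -l          # confirm what was actually loaded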
Oct 31 00:34:15.740201 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 31 00:34:16.330993 dockerd[1662]: time="2025-10-31T00:34:16.330877621Z" level=info msg="Starting up"
Oct 31 00:34:17.359852 dockerd[1662]: time="2025-10-31T00:34:17.359769069Z" level=info msg="Loading containers: start."
Oct 31 00:34:17.517662 kernel: Initializing XFRM netlink socket
Oct 31 00:34:17.616817 systemd-networkd[1387]: docker0: Link UP
Oct 31 00:34:17.999342 dockerd[1662]: time="2025-10-31T00:34:17.999219708Z" level=info msg="Loading containers: done."
Oct 31 00:34:18.034843 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3161153436-merged.mount: Deactivated successfully.
Oct 31 00:34:18.208506 dockerd[1662]: time="2025-10-31T00:34:18.208403291Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 31 00:34:18.208708 dockerd[1662]: time="2025-10-31T00:34:18.208592937Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 31 00:34:18.208880 dockerd[1662]: time="2025-10-31T00:34:18.208827677Z" level=info msg="Daemon has completed initialization"
Oct 31 00:34:18.299706 dockerd[1662]: time="2025-10-31T00:34:18.299453794Z" level=info msg="API listen on /run/docker.sock"
Oct 31 00:34:18.299861 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 31 00:34:19.491287 containerd[1461]: time="2025-10-31T00:34:19.491225992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 31 00:34:20.468652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106469143.mount: Deactivated successfully.
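Once dockerd reports "API listen on /run/docker.sock", the engine can be probed either through the CLI or directly over the Unix socket; the Engine API's /_ping endpoint returns OK when the daemon is healthy:

docker version
curl --silent --unix-socket /run/docker.sock http://localhost/_ping ; echo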
Oct 31 00:34:22.501667 containerd[1461]: time="2025-10-31T00:34:22.501496393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:22.502995 containerd[1461]: time="2025-10-31T00:34:22.502930493Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Oct 31 00:34:22.504493 containerd[1461]: time="2025-10-31T00:34:22.504405249Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:22.508692 containerd[1461]: time="2025-10-31T00:34:22.508642144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:22.510927 containerd[1461]: time="2025-10-31T00:34:22.510871645Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.019585501s"
Oct 31 00:34:22.510989 containerd[1461]: time="2025-10-31T00:34:22.510931638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Oct 31 00:34:22.512147 containerd[1461]: time="2025-10-31T00:34:22.512099568Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Oct 31 00:34:23.608859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:34:23.621768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:34:23.866436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:34:23.872514 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 00:34:24.668624 kubelet[1878]: E1031 00:34:24.668515 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:34:24.675750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:34:24.675951 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:34:24.676289 systemd[1]: kubelet.service: Consumed 1.037s CPU time.
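The PullImage / ImageCreate entries above come from the CRI plugin inside containerd, so the images land in containerd's k8s.io namespace rather than in Docker's image store. The same pull can be reproduced, or its result inspected, with ctr, assuming the default socket path:

ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.9
ctr --namespace k8s.io images ls -q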
Oct 31 00:34:26.057534 containerd[1461]: time="2025-10-31T00:34:26.057447481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:26.058177 containerd[1461]: time="2025-10-31T00:34:26.058141513Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Oct 31 00:34:26.059501 containerd[1461]: time="2025-10-31T00:34:26.059452922Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:26.062519 containerd[1461]: time="2025-10-31T00:34:26.062472495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:26.063694 containerd[1461]: time="2025-10-31T00:34:26.063638722Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 3.551481496s"
Oct 31 00:34:26.063764 containerd[1461]: time="2025-10-31T00:34:26.063700688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Oct 31 00:34:26.064241 containerd[1461]: time="2025-10-31T00:34:26.064218639Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Oct 31 00:34:27.559623 containerd[1461]: time="2025-10-31T00:34:27.559541343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:27.560207 containerd[1461]: time="2025-10-31T00:34:27.560146749Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Oct 31 00:34:27.561280 containerd[1461]: time="2025-10-31T00:34:27.561234899Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:27.564248 containerd[1461]: time="2025-10-31T00:34:27.564216571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:27.565350 containerd[1461]: time="2025-10-31T00:34:27.565308719Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.501060164s"
Oct 31 00:34:27.565401 containerd[1461]: time="2025-10-31T00:34:27.565349025Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Oct 31 00:34:27.565926 containerd[1461]: time="2025-10-31T00:34:27.565906931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Oct 31 00:34:29.076045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840692632.mount: Deactivated successfully.
Oct 31 00:34:29.535979 containerd[1461]: time="2025-10-31T00:34:29.535825862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:29.537078 containerd[1461]: time="2025-10-31T00:34:29.537032245Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Oct 31 00:34:29.538381 containerd[1461]: time="2025-10-31T00:34:29.538338314Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:29.541514 containerd[1461]: time="2025-10-31T00:34:29.541471650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:29.542293 containerd[1461]: time="2025-10-31T00:34:29.542255460Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.976322731s"
Oct 31 00:34:29.542293 containerd[1461]: time="2025-10-31T00:34:29.542285286Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Oct 31 00:34:29.543104 containerd[1461]: time="2025-10-31T00:34:29.542922952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Oct 31 00:34:30.182354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747704484.mount: Deactivated successfully.
Oct 31 00:34:31.223732 containerd[1461]: time="2025-10-31T00:34:31.223637752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:31.224555 containerd[1461]: time="2025-10-31T00:34:31.224487165Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Oct 31 00:34:31.226488 containerd[1461]: time="2025-10-31T00:34:31.226412125Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:31.230971 containerd[1461]: time="2025-10-31T00:34:31.230906203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:31.232475 containerd[1461]: time="2025-10-31T00:34:31.232220277Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.689263442s"
Oct 31 00:34:31.232475 containerd[1461]: time="2025-10-31T00:34:31.232465908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Oct 31 00:34:31.233264 containerd[1461]: time="2025-10-31T00:34:31.233075962Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 31 00:34:32.062401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372596967.mount: Deactivated successfully.
Oct 31 00:34:32.069568 containerd[1461]: time="2025-10-31T00:34:32.069469158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:32.070303 containerd[1461]: time="2025-10-31T00:34:32.070244702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 31 00:34:32.071761 containerd[1461]: time="2025-10-31T00:34:32.071705912Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:32.074641 containerd[1461]: time="2025-10-31T00:34:32.074607183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:32.075517 containerd[1461]: time="2025-10-31T00:34:32.075477025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 842.358693ms"
Oct 31 00:34:32.075517 containerd[1461]: time="2025-10-31T00:34:32.075510157Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 31 00:34:32.076110 containerd[1461]: time="2025-10-31T00:34:32.076073082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Oct 31 00:34:32.944008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666790623.mount: Deactivated successfully.
Oct 31 00:34:34.926388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 31 00:34:34.937886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:34:35.900637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:34:35.906104 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 00:34:36.187338 kubelet[2018]: E1031 00:34:36.187097 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:34:36.191821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:34:36.192036 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:34:36.344437 containerd[1461]: time="2025-10-31T00:34:36.344374722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:36.345187 containerd[1461]: time="2025-10-31T00:34:36.345129267Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Oct 31 00:34:36.382781 containerd[1461]: time="2025-10-31T00:34:36.382679346Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:36.427480 containerd[1461]: time="2025-10-31T00:34:36.427391608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:36.428792 containerd[1461]: time="2025-10-31T00:34:36.428746769Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.352632359s"
Oct 31 00:34:36.428792 containerd[1461]: time="2025-10-31T00:34:36.428782196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Oct 31 00:34:39.077268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:34:39.086829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:34:39.118284 systemd[1]: Reloading requested from client PID 2057 ('systemctl') (unit session-7.scope)...
Oct 31 00:34:39.118310 systemd[1]: Reloading...
Oct 31 00:34:39.204660 zram_generator::config[2093]: No configuration found.
Oct 31 00:34:39.567388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:34:39.680782 systemd[1]: Reloading finished in 562 ms.
Oct 31 00:34:39.753962 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 31 00:34:39.754078 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 31 00:34:39.754421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:34:39.756985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:34:39.993679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:34:40.002297 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 31 00:34:40.088303 kubelet[2144]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 00:34:40.088303 kubelet[2144]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
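During the reload above, systemd flags docker.socket for pointing ListenStream= at the legacy /var/run/docker.sock path; it rewrites the path to /run/docker.sock on the fly but asks for the unit to be fixed. Since /usr is read-only on Flatcar, the usual fix is a drop-in override; note the empty ListenStream= line, which clears the previously accumulated socket list before setting the new one (the drop-in file name is arbitrary):

mkdir -p /etc/systemd/system/docker.socket.d
cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload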
Oct 31 00:34:40.088303 kubelet[2144]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 00:34:40.088905 kubelet[2144]: I1031 00:34:40.088409 2144 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 31 00:34:40.682622 kubelet[2144]: I1031 00:34:40.682531 2144 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Oct 31 00:34:40.682622 kubelet[2144]: I1031 00:34:40.682585 2144 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 31 00:34:40.683092 kubelet[2144]: I1031 00:34:40.683060 2144 server.go:954] "Client rotation is on, will bootstrap in background"
Oct 31 00:34:40.708636 kubelet[2144]: E1031 00:34:40.708568 2144 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:34:40.711409 kubelet[2144]: I1031 00:34:40.711363 2144 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 31 00:34:40.721965 kubelet[2144]: E1031 00:34:40.721892 2144 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 31 00:34:40.721965 kubelet[2144]: I1031 00:34:40.721952 2144 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 31 00:34:41.037085 kubelet[2144]: I1031 00:34:41.036908 2144 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 31 00:34:41.037445 kubelet[2144]: I1031 00:34:41.037395 2144 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 31 00:34:41.037800 kubelet[2144]: I1031 00:34:41.037440 2144 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 31 00:34:41.038633 kubelet[2144]: I1031 00:34:41.038586 2144 topology_manager.go:138] "Creating topology manager with none policy"
Oct 31 00:34:41.038633 kubelet[2144]: I1031 00:34:41.038623 2144 container_manager_linux.go:304] "Creating device plugin manager"
Oct 31 00:34:41.038825 kubelet[2144]: I1031 00:34:41.038800 2144 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 00:34:41.043019 kubelet[2144]: I1031 00:34:41.042963 2144 kubelet.go:446] "Attempting to sync node with API server"
Oct 31 00:34:41.043019 kubelet[2144]: I1031 00:34:41.043012 2144 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 31 00:34:41.043183 kubelet[2144]: I1031 00:34:41.043060 2144 kubelet.go:352] "Adding apiserver pod source"
Oct 31 00:34:41.043183 kubelet[2144]: I1031 00:34:41.043103 2144 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 31 00:34:41.047280 kubelet[2144]: I1031 00:34:41.046655 2144 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 31 00:34:41.047280 kubelet[2144]: I1031 00:34:41.047028 2144 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 31 00:34:41.047280 kubelet[2144]: W1031 00:34:41.047110 2144 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 31 00:34:41.051971 kubelet[2144]: W1031 00:34:41.051905 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused
Oct 31 00:34:41.052212 kubelet[2144]: E1031 00:34:41.052180 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:34:41.052832 kubelet[2144]: W1031 00:34:41.052755 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused
Oct 31 00:34:41.052908 kubelet[2144]: E1031 00:34:41.052842 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:34:41.053262 kubelet[2144]: I1031 00:34:41.053211 2144 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 31 00:34:41.053262 kubelet[2144]: I1031 00:34:41.053257 2144 server.go:1287] "Started kubelet"
Oct 31 00:34:41.053575 kubelet[2144]: I1031 00:34:41.053513 2144 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Oct 31 00:34:41.056911 kubelet[2144]: I1031 00:34:41.055741 2144 server.go:479] "Adding debug handlers to kubelet server"
Oct 31 00:34:41.056911 kubelet[2144]: I1031 00:34:41.055913 2144 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 00:34:41.059076 kubelet[2144]: I1031 00:34:41.058350 2144 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 31 00:34:41.062027 kubelet[2144]: I1031 00:34:41.059275 2144 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 31 00:34:41.062404 kubelet[2144]: I1031 00:34:41.062374 2144 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 31 00:34:41.065353 kubelet[2144]: E1031 00:34:41.063063 2144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 31 00:34:41.065353 kubelet[2144]: I1031 00:34:41.063120 2144 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 31 00:34:41.065353 kubelet[2144]: I1031 00:34:41.063413 2144 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 31 00:34:41.065353 kubelet[2144]: I1031 00:34:41.063468 2144 reconciler.go:26] "Reconciler: start to sync state"
Oct 31 00:34:41.065353 kubelet[2144]: W1031 00:34:41.063948 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused
Oct 31 00:34:41.065353 kubelet[2144]: E1031 00:34:41.064008 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError"
Oct 31 00:34:41.065353 kubelet[2144]: E1031 00:34:41.064252 2144 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 31 00:34:41.065353 kubelet[2144]: E1031 00:34:41.064566 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms"
Oct 31 00:34:41.065747 kubelet[2144]: E1031 00:34:41.060809 2144 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736c3b454841fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:34:41.053229565 +0000 UTC m=+1.023264226,LastTimestamp:2025-10-31 00:34:41.053229565 +0000 UTC m=+1.023264226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 31 00:34:41.066211 kubelet[2144]: I1031 00:34:41.066150 2144 factory.go:221] Registration of the systemd container factory successfully
Oct 31 00:34:41.066301 kubelet[2144]: I1031 00:34:41.066272 2144 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 31 00:34:41.067648 kubelet[2144]: I1031 00:34:41.067622 2144 factory.go:221] Registration of the containerd container factory successfully
Oct 31 00:34:41.085475 kubelet[2144]: I1031 00:34:41.083804 2144 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 31 00:34:41.086274 kubelet[2144]: I1031 00:34:41.086239 2144 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 31 00:34:41.086322 kubelet[2144]: I1031 00:34:41.086277 2144 status_manager.go:227] "Starting to sync pod status with apiserver"
Oct 31 00:34:41.086322 kubelet[2144]: I1031 00:34:41.086302 2144 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 31 00:34:41.086322 kubelet[2144]: I1031 00:34:41.086310 2144 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 00:34:41.086427 kubelet[2144]: E1031 00:34:41.086364 2144 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:34:41.087734 kubelet[2144]: W1031 00:34:41.087465 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Oct 31 00:34:41.087734 kubelet[2144]: E1031 00:34:41.087507 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:34:41.091262 kubelet[2144]: I1031 00:34:41.091226 2144 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:34:41.091262 kubelet[2144]: I1031 00:34:41.091241 2144 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:34:41.091262 kubelet[2144]: I1031 00:34:41.091264 2144 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:34:41.164114 kubelet[2144]: E1031 00:34:41.164034 2144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:34:41.187425 kubelet[2144]: E1031 00:34:41.187329 2144 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 00:34:41.251903 kubelet[2144]: I1031 00:34:41.251829 2144 policy_none.go:49] "None policy: Start" Oct 31 00:34:41.251903 kubelet[2144]: I1031 00:34:41.251872 2144 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 00:34:41.251903 kubelet[2144]: I1031 00:34:41.251893 2144 state_mem.go:35] "Initializing new in-memory state store" Oct 31 00:34:41.259717 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 00:34:41.264244 kubelet[2144]: E1031 00:34:41.264214 2144 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:34:41.265789 kubelet[2144]: E1031 00:34:41.265736 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Oct 31 00:34:41.278292 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 00:34:41.282014 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
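Every reflector failure above (Service, Node, CSIDriver, RuntimeClass) is the same underlying condition: nothing is listening on 10.0.0.31:6443 yet, because the apiserver is itself one of the static pods this kubelet still has to start. A small sketch that confirms the condition from the node; the address comes from the log, the rest is illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// Address taken from the failing reflector URLs in the log.
	conn, err := net.DialTimeout("tcp", "10.0.0.31:6443", 2*time.Second)
	if err != nil {
		if errors.Is(err, syscall.ECONNREFUSED) {
			// Same errno the reflectors report: the port is closed, so the
			// apiserver static pod has not come up yet.
			fmt.Println("connection refused: nothing listening on 6443 yet")
			return
		}
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port open: something is now listening on 6443")
}
```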
Oct 31 00:34:41.292176 kubelet[2144]: I1031 00:34:41.292020 2144 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 00:34:41.292459 kubelet[2144]: I1031 00:34:41.292422 2144 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:34:41.292513 kubelet[2144]: I1031 00:34:41.292459 2144 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:34:41.293300 kubelet[2144]: I1031 00:34:41.293013 2144 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:34:41.294684 kubelet[2144]: E1031 00:34:41.294657 2144 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 00:34:41.294827 kubelet[2144]: E1031 00:34:41.294802 2144 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 00:34:41.393284 kubelet[2144]: I1031 00:34:41.393223 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:34:41.393708 kubelet[2144]: E1031 00:34:41.393665 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Oct 31 00:34:41.396821 systemd[1]: Created slice kubepods-burstable-pod467bb869cbaedfadeae519d778fcd2d5.slice - libcontainer container kubepods-burstable-pod467bb869cbaedfadeae519d778fcd2d5.slice. Oct 31 00:34:41.402786 kubelet[2144]: E1031 00:34:41.402744 2144 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:34:41.406016 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 31 00:34:41.410627 kubelet[2144]: E1031 00:34:41.408573 2144 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:34:41.411799 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
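The slice names systemd reports here follow a fixed scheme under the systemd cgroup driver: kubepods.slice, a per-QoS child (burstable/besteffort), then one pod<uid>.slice per pod, with dashes in the pod UID escaped to underscores (compare the kubepods-besteffort-podb4b4d92d_cc32_4133_bbe8_638b7a5c287d.slice unit later in this log). A small hypothetical helper showing the naming, not the kubelet's own code:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice mirrors the kubepods-*.slice unit names in the log. Illustrative
// only; the kubelet builds these through its systemd cgroup manager.
func podSlice(qosClass, podUID string) string {
	// "-" separates hierarchy levels in systemd slice names, so dashes
	// inside the pod UID itself are escaped to underscores.
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" { // Guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// A static-pod config hash (no dashes) and a regular pod UID, both
	// taken from this log.
	fmt.Println(podSlice("burstable", "467bb869cbaedfadeae519d778fcd2d5"))
	fmt.Println(podSlice("besteffort", "b4b4d92d-cc32-4133-bbe8-638b7a5c287d"))
}
```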
Oct 31 00:34:41.413680 kubelet[2144]: E1031 00:34:41.413643 2144 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:34:41.466194 kubelet[2144]: I1031 00:34:41.466139 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:41.466271 kubelet[2144]: I1031 00:34:41.466213 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/467bb869cbaedfadeae519d778fcd2d5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"467bb869cbaedfadeae519d778fcd2d5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:41.466271 kubelet[2144]: I1031 00:34:41.466242 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/467bb869cbaedfadeae519d778fcd2d5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"467bb869cbaedfadeae519d778fcd2d5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:41.466271 kubelet[2144]: I1031 00:34:41.466266 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/467bb869cbaedfadeae519d778fcd2d5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"467bb869cbaedfadeae519d778fcd2d5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:41.466360 kubelet[2144]: I1031 00:34:41.466290 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:41.466360 kubelet[2144]: I1031 00:34:41.466313 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:41.466360 kubelet[2144]: I1031 00:34:41.466334 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:41.466360 kubelet[2144]: I1031 00:34:41.466355 2144 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:41.466450 kubelet[2144]: I1031 00:34:41.466371 2144 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:34:41.595500 kubelet[2144]: I1031 00:34:41.595361 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:34:41.595875 kubelet[2144]: E1031 00:34:41.595830 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Oct 31 00:34:41.667359 kubelet[2144]: E1031 00:34:41.667296 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" Oct 31 00:34:41.703843 kubelet[2144]: E1031 00:34:41.703765 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:41.704772 containerd[1461]: time="2025-10-31T00:34:41.704713049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:467bb869cbaedfadeae519d778fcd2d5,Namespace:kube-system,Attempt:0,}" Oct 31 00:34:41.710235 kubelet[2144]: E1031 00:34:41.710165 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:41.711108 containerd[1461]: time="2025-10-31T00:34:41.711039965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 31 00:34:41.714548 kubelet[2144]: E1031 00:34:41.714497 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:41.715362 containerd[1461]: time="2025-10-31T00:34:41.715285536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 31 00:34:41.963669 kubelet[2144]: W1031 00:34:41.963486 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Oct 31 00:34:41.963669 kubelet[2144]: E1031 00:34:41.963547 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:34:41.998384 kubelet[2144]: I1031 00:34:41.998340 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:34:41.998857 kubelet[2144]: E1031 00:34:41.998789 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Oct 31 00:34:42.040394 
kubelet[2144]: W1031 00:34:42.040322 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Oct 31 00:34:42.040472 kubelet[2144]: E1031 00:34:42.040396 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:34:42.254334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863764454.mount: Deactivated successfully. Oct 31 00:34:42.264209 containerd[1461]: time="2025-10-31T00:34:42.264141988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:34:42.266281 containerd[1461]: time="2025-10-31T00:34:42.266207933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:34:42.267390 containerd[1461]: time="2025-10-31T00:34:42.267352790Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:34:42.268431 containerd[1461]: time="2025-10-31T00:34:42.268379776Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:34:42.269405 containerd[1461]: time="2025-10-31T00:34:42.269373770Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:34:42.270219 containerd[1461]: time="2025-10-31T00:34:42.270177848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:34:42.273399 containerd[1461]: time="2025-10-31T00:34:42.273360065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 31 00:34:42.276083 containerd[1461]: time="2025-10-31T00:34:42.276015265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:34:42.277613 containerd[1461]: time="2025-10-31T00:34:42.277542319Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.139482ms" Oct 31 00:34:42.278514 containerd[1461]: time="2025-10-31T00:34:42.278474847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.62954ms" Oct 31 00:34:42.281556 containerd[1461]: time="2025-10-31T00:34:42.281524697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.367032ms" Oct 31 00:34:42.468570 kubelet[2144]: E1031 00:34:42.468490 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" Oct 31 00:34:42.494810 kubelet[2144]: W1031 00:34:42.494700 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Oct 31 00:34:42.494991 kubelet[2144]: E1031 00:34:42.494862 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:34:42.590714 kubelet[2144]: W1031 00:34:42.590429 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Oct 31 00:34:42.590714 kubelet[2144]: E1031 00:34:42.590538 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:34:42.801466 kubelet[2144]: I1031 00:34:42.801405 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:34:42.801912 kubelet[2144]: E1031 00:34:42.801855 2144 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Oct 31 00:34:43.040183 kubelet[2144]: E1031 00:34:43.039915 2144 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:34:43.323370 containerd[1461]: time="2025-10-31T00:34:43.322844863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:34:43.323370 containerd[1461]: time="2025-10-31T00:34:43.322957230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:34:43.323370 containerd[1461]: time="2025-10-31T00:34:43.322971507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:34:43.323370 containerd[1461]: time="2025-10-31T00:34:43.323089976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:34:43.483647 kubelet[2144]: E1031 00:34:43.461970 2144 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736c3b454841fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:34:41.053229565 +0000 UTC m=+1.023264226,LastTimestamp:2025-10-31 00:34:41.053229565 +0000 UTC m=+1.023264226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 00:34:43.492665 containerd[1461]: time="2025-10-31T00:34:43.492443194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:34:43.492665 containerd[1461]: time="2025-10-31T00:34:43.492545642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:34:43.492665 containerd[1461]: time="2025-10-31T00:34:43.492563116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:34:43.493032 containerd[1461]: time="2025-10-31T00:34:43.492698748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:34:43.502490 containerd[1461]: time="2025-10-31T00:34:43.502281556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:34:43.502490 containerd[1461]: time="2025-10-31T00:34:43.502357273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:34:43.502490 containerd[1461]: time="2025-10-31T00:34:43.502370157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:34:43.502767 containerd[1461]: time="2025-10-31T00:34:43.502462877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:34:43.524943 systemd[1]: Started cri-containerd-138435db2e70ef2f97dfe9e43c60eae5e0ac997e855c4c07a3b1d76eaeb48205.scope - libcontainer container 138435db2e70ef2f97dfe9e43c60eae5e0ac997e855c4c07a3b1d76eaeb48205. Oct 31 00:34:43.576804 systemd[1]: Started cri-containerd-61b001c355c9a5bbdd4c491d676a0c6a5adb637b053c4224b37001d26c06da45.scope - libcontainer container 61b001c355c9a5bbdd4c491d676a0c6a5adb637b053c4224b37001d26c06da45. 
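The controller.go retries above double their interval on each failure (interval="200ms", then "400ms", "800ms", "1.6s"), and event.go likewise notes it "may retry after sleeping": capped exponential backoff. A generic sketch of that pattern; the starting interval and cap are assumptions, not the kubelet's exact constants:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after each failure up to
// maxWait -- the shape of the lease retries above.
func retryWithBackoff(op func() error, start, maxWait time.Duration) {
	interval := start
	for {
		if err := op(); err == nil {
			return
		} else {
			fmt.Printf("failed: %v, will retry, interval=%s\n", err, interval)
		}
		time.Sleep(interval)
		if interval < maxWait {
			interval *= 2
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 7*time.Second)
	fmt.Println("succeeded after", attempts, "attempts")
}
```

The printed intervals (200ms, 400ms, 800ms, 1.6s) reproduce the progression visible in the lease-controller lines, which stops escalating once the apiserver comes up.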
Oct 31 00:34:43.579543 systemd[1]: Started cri-containerd-85bdc2a8f447bef115cf0bce70974f02d99134541ae02f04e8b5066893f1bf44.scope - libcontainer container 85bdc2a8f447bef115cf0bce70974f02d99134541ae02f04e8b5066893f1bf44. Oct 31 00:34:43.596201 containerd[1461]: time="2025-10-31T00:34:43.596138411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"138435db2e70ef2f97dfe9e43c60eae5e0ac997e855c4c07a3b1d76eaeb48205\"" Oct 31 00:34:43.597611 kubelet[2144]: E1031 00:34:43.597564 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:43.600687 containerd[1461]: time="2025-10-31T00:34:43.600650636Z" level=info msg="CreateContainer within sandbox \"138435db2e70ef2f97dfe9e43c60eae5e0ac997e855c4c07a3b1d76eaeb48205\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 00:34:43.620552 containerd[1461]: time="2025-10-31T00:34:43.620503930Z" level=info msg="CreateContainer within sandbox \"138435db2e70ef2f97dfe9e43c60eae5e0ac997e855c4c07a3b1d76eaeb48205\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e041a435e9d0af067295ae135c555bc23389ff9921b67a65cbd07b48ba24b819\"" Oct 31 00:34:43.622035 containerd[1461]: time="2025-10-31T00:34:43.621344584Z" level=info msg="StartContainer for \"e041a435e9d0af067295ae135c555bc23389ff9921b67a65cbd07b48ba24b819\"" Oct 31 00:34:43.690099 containerd[1461]: time="2025-10-31T00:34:43.690034862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:467bb869cbaedfadeae519d778fcd2d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"85bdc2a8f447bef115cf0bce70974f02d99134541ae02f04e8b5066893f1bf44\"" Oct 31 00:34:43.690800 containerd[1461]: time="2025-10-31T00:34:43.690772547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"61b001c355c9a5bbdd4c491d676a0c6a5adb637b053c4224b37001d26c06da45\"" Oct 31 00:34:43.691555 kubelet[2144]: E1031 00:34:43.691518 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:43.691555 kubelet[2144]: E1031 00:34:43.691518 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:43.693688 containerd[1461]: time="2025-10-31T00:34:43.693593505Z" level=info msg="CreateContainer within sandbox \"85bdc2a8f447bef115cf0bce70974f02d99134541ae02f04e8b5066893f1bf44\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 00:34:43.693910 containerd[1461]: time="2025-10-31T00:34:43.693752432Z" level=info msg="CreateContainer within sandbox \"61b001c355c9a5bbdd4c491d676a0c6a5adb637b053c4224b37001d26c06da45\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 00:34:43.710781 systemd[1]: Started cri-containerd-e041a435e9d0af067295ae135c555bc23389ff9921b67a65cbd07b48ba24b819.scope - libcontainer container e041a435e9d0af067295ae135c555bc23389ff9921b67a65cbd07b48ba24b819. 
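The RunPodSandbox → CreateContainer → StartContainer sequence in these lines is the CRI API spoken over containerd's gRPC socket. A stripped-down sketch of the same calls using the published CRI client stubs; the socket path and image tag are assumptions, and running this against a live node really would create a container:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Containerd's usual CRI endpoint, assumed here.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Metadata mirrors the PodSandboxMetadata printed in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Uid:       "a1d51be1ff02022474f2598f6e43038f",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", sb.PodSandboxId)

	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler", Attempt: 0},
			// Tag assumed to match the kubeletVersion logged later.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.4"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started container:", c.ContainerId)
}
```

The sandbox and container IDs returned by these calls are the long hex names in the cri-containerd-*.scope units systemd starts above.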
Oct 31 00:34:43.720796 containerd[1461]: time="2025-10-31T00:34:43.720734545Z" level=info msg="CreateContainer within sandbox \"85bdc2a8f447bef115cf0bce70974f02d99134541ae02f04e8b5066893f1bf44\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2b1aa19501a8596848cc493bf6e61332f10c0e41cc49b5074a6494c91487361b\"" Oct 31 00:34:43.721381 containerd[1461]: time="2025-10-31T00:34:43.721346918Z" level=info msg="StartContainer for \"2b1aa19501a8596848cc493bf6e61332f10c0e41cc49b5074a6494c91487361b\"" Oct 31 00:34:43.724866 containerd[1461]: time="2025-10-31T00:34:43.724811379Z" level=info msg="CreateContainer within sandbox \"61b001c355c9a5bbdd4c491d676a0c6a5adb637b053c4224b37001d26c06da45\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fdc9ef4b7c5e0e07ac6386657d843b4fba7901f7353e0bf962e97447450dbdc8\"" Oct 31 00:34:43.728868 containerd[1461]: time="2025-10-31T00:34:43.728810504Z" level=info msg="StartContainer for \"fdc9ef4b7c5e0e07ac6386657d843b4fba7901f7353e0bf962e97447450dbdc8\"" Oct 31 00:34:43.800986 systemd[1]: Started cri-containerd-fdc9ef4b7c5e0e07ac6386657d843b4fba7901f7353e0bf962e97447450dbdc8.scope - libcontainer container fdc9ef4b7c5e0e07ac6386657d843b4fba7901f7353e0bf962e97447450dbdc8. Oct 31 00:34:43.873908 systemd[1]: Started cri-containerd-2b1aa19501a8596848cc493bf6e61332f10c0e41cc49b5074a6494c91487361b.scope - libcontainer container 2b1aa19501a8596848cc493bf6e61332f10c0e41cc49b5074a6494c91487361b. Oct 31 00:34:43.880853 containerd[1461]: time="2025-10-31T00:34:43.880793102Z" level=info msg="StartContainer for \"e041a435e9d0af067295ae135c555bc23389ff9921b67a65cbd07b48ba24b819\" returns successfully" Oct 31 00:34:43.886283 kubelet[2144]: W1031 00:34:43.886226 2144 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused Oct 31 00:34:43.886380 kubelet[2144]: E1031 00:34:43.886297 2144 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" Oct 31 00:34:43.907301 containerd[1461]: time="2025-10-31T00:34:43.907248738Z" level=info msg="StartContainer for \"fdc9ef4b7c5e0e07ac6386657d843b4fba7901f7353e0bf962e97447450dbdc8\" returns successfully" Oct 31 00:34:43.923386 containerd[1461]: time="2025-10-31T00:34:43.923319367Z" level=info msg="StartContainer for \"2b1aa19501a8596848cc493bf6e61332f10c0e41cc49b5074a6494c91487361b\" returns successfully" Oct 31 00:34:44.096954 kubelet[2144]: E1031 00:34:44.096899 2144 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:34:44.097129 kubelet[2144]: E1031 00:34:44.097060 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:44.099417 kubelet[2144]: E1031 00:34:44.099384 2144 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:34:44.101619 kubelet[2144]: E1031 00:34:44.099489 2144 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:44.102530 kubelet[2144]: E1031 00:34:44.102477 2144 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:34:44.102738 kubelet[2144]: E1031 00:34:44.102708 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:44.408627 kubelet[2144]: I1031 00:34:44.406767 2144 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:34:45.105051 kubelet[2144]: E1031 00:34:45.104790 2144 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:34:45.105051 kubelet[2144]: E1031 00:34:45.104913 2144 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:34:45.105051 kubelet[2144]: E1031 00:34:45.104948 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:45.105051 kubelet[2144]: E1031 00:34:45.105045 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:45.891866 kubelet[2144]: E1031 00:34:45.891706 2144 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 00:34:46.019543 kubelet[2144]: I1031 00:34:46.019341 2144 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:34:46.022785 kubelet[2144]: E1031 00:34:46.019828 2144 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 00:34:46.048200 kubelet[2144]: I1031 00:34:46.048151 2144 apiserver.go:52] "Watching apiserver" Oct 31 00:34:46.063947 kubelet[2144]: I1031 00:34:46.063893 2144 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 00:34:46.065097 kubelet[2144]: I1031 00:34:46.065006 2144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:34:46.071715 kubelet[2144]: E1031 00:34:46.071665 2144 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 00:34:46.071715 kubelet[2144]: I1031 00:34:46.071707 2144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:46.073559 kubelet[2144]: E1031 00:34:46.073521 2144 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:46.073559 kubelet[2144]: I1031 00:34:46.073547 2144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:46.075181 kubelet[2144]: E1031 00:34:46.075150 2144 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:46.105250 kubelet[2144]: I1031 00:34:46.105203 2144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:34:46.107984 kubelet[2144]: E1031 00:34:46.107952 2144 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 00:34:46.108206 kubelet[2144]: E1031 00:34:46.108178 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:47.094718 kubelet[2144]: I1031 00:34:47.094634 2144 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:47.100307 kubelet[2144]: E1031 00:34:47.100272 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:47.107521 kubelet[2144]: E1031 00:34:47.107419 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:48.478836 systemd[1]: Reloading requested from client PID 2421 ('systemctl') (unit session-7.scope)... Oct 31 00:34:48.478857 systemd[1]: Reloading... Oct 31 00:34:48.572824 zram_generator::config[2463]: No configuration found. Oct 31 00:34:48.697946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:34:48.799733 systemd[1]: Reloading finished in 320 ms. Oct 31 00:34:48.856932 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:34:48.876319 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 00:34:48.877704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:34:48.877806 systemd[1]: kubelet.service: Consumed 1.652s CPU time, 136.6M memory peak, 0B memory swap peak. Oct 31 00:34:48.897021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:34:49.153995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:34:49.160526 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 00:34:49.216485 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:34:49.216485 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:34:49.216485 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
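The recurring dns.go:153 "Nameserver limits exceeded" errors mean the host's resolv.conf lists more nameservers than glibc's three-slot resolver limit, so the kubelet truncates the list to the applied line shown (1.1.1.1 1.0.0.1 8.8.8.8). A quick sketch that reproduces the check; the three-entry limit is glibc's MAXNS, the file path is the conventional default:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; entries beyond this are ignored

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: %d configured, applied line would be: %s\n",
			len(servers), strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("%d nameserver(s), within the limit\n", len(servers))
	}
}
```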
Oct 31 00:34:49.216977 kubelet[2505]: I1031 00:34:49.216535 2505 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:34:49.225362 kubelet[2505]: I1031 00:34:49.225313 2505 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 00:34:49.225362 kubelet[2505]: I1031 00:34:49.225340 2505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:34:49.226638 kubelet[2505]: I1031 00:34:49.226017 2505 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 00:34:49.227308 kubelet[2505]: I1031 00:34:49.227283 2505 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 00:34:49.230908 kubelet[2505]: I1031 00:34:49.230868 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:34:49.234633 kubelet[2505]: E1031 00:34:49.234561 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:34:49.234703 kubelet[2505]: I1031 00:34:49.234636 2505 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 00:34:49.240177 kubelet[2505]: I1031 00:34:49.240133 2505 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 31 00:34:49.240535 kubelet[2505]: I1031 00:34:49.240488 2505 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:34:49.240757 kubelet[2505]: I1031 00:34:49.240523 2505 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 00:34:49.240864 kubelet[2505]: I1031 00:34:49.240764 2505 topology_manager.go:138] "Creating 
topology manager with none policy" Oct 31 00:34:49.240864 kubelet[2505]: I1031 00:34:49.240775 2505 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 00:34:49.240864 kubelet[2505]: I1031 00:34:49.240842 2505 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:34:49.241051 kubelet[2505]: I1031 00:34:49.241021 2505 kubelet.go:446] "Attempting to sync node with API server" Oct 31 00:34:49.241083 kubelet[2505]: I1031 00:34:49.241061 2505 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:34:49.241109 kubelet[2505]: I1031 00:34:49.241087 2505 kubelet.go:352] "Adding apiserver pod source" Oct 31 00:34:49.241109 kubelet[2505]: I1031 00:34:49.241102 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:34:49.243935 kubelet[2505]: I1031 00:34:49.242084 2505 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 00:34:49.243935 kubelet[2505]: I1031 00:34:49.243683 2505 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 00:34:49.244403 kubelet[2505]: I1031 00:34:49.244365 2505 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 00:34:49.244442 kubelet[2505]: I1031 00:34:49.244419 2505 server.go:1287] "Started kubelet" Oct 31 00:34:49.245836 kubelet[2505]: I1031 00:34:49.245737 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:34:49.246401 kubelet[2505]: I1031 00:34:49.246364 2505 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:34:49.246521 kubelet[2505]: I1031 00:34:49.246462 2505 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:34:49.248558 kubelet[2505]: I1031 00:34:49.248526 2505 server.go:479] "Adding debug handlers to kubelet server" Oct 31 00:34:49.252893 kubelet[2505]: I1031 00:34:49.252859 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:34:49.254116 kubelet[2505]: I1031 00:34:49.253731 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:34:49.255025 kubelet[2505]: I1031 00:34:49.254854 2505 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 00:34:49.255894 kubelet[2505]: I1031 00:34:49.255859 2505 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 00:34:49.256735 kubelet[2505]: I1031 00:34:49.256331 2505 reconciler.go:26] "Reconciler: start to sync state" Oct 31 00:34:49.260243 kubelet[2505]: I1031 00:34:49.260195 2505 factory.go:221] Registration of the systemd container factory successfully Oct 31 00:34:49.260341 kubelet[2505]: I1031 00:34:49.260318 2505 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:34:49.261925 kubelet[2505]: E1031 00:34:49.261847 2505 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:34:49.262774 kubelet[2505]: I1031 00:34:49.262427 2505 factory.go:221] Registration of the containerd container factory successfully Oct 31 00:34:49.274826 kubelet[2505]: I1031 00:34:49.274751 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 00:34:49.276143 kubelet[2505]: I1031 00:34:49.276118 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 00:34:49.276267 kubelet[2505]: I1031 00:34:49.276248 2505 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 00:34:49.276692 kubelet[2505]: I1031 00:34:49.276648 2505 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 00:34:49.278180 kubelet[2505]: I1031 00:34:49.276956 2505 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 00:34:49.278180 kubelet[2505]: E1031 00:34:49.277043 2505 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:34:49.303795 kubelet[2505]: I1031 00:34:49.303757 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:34:49.303795 kubelet[2505]: I1031 00:34:49.303782 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:34:49.303795 kubelet[2505]: I1031 00:34:49.303807 2505 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:34:49.304277 kubelet[2505]: I1031 00:34:49.303965 2505 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 00:34:49.304277 kubelet[2505]: I1031 00:34:49.303984 2505 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 00:34:49.304277 kubelet[2505]: I1031 00:34:49.304015 2505 policy_none.go:49] "None policy: Start" Oct 31 00:34:49.304277 kubelet[2505]: I1031 00:34:49.304032 2505 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 00:34:49.304277 kubelet[2505]: I1031 00:34:49.304047 2505 state_mem.go:35] "Initializing new in-memory state store" Oct 31 00:34:49.304277 kubelet[2505]: I1031 00:34:49.304165 2505 state_mem.go:75] "Updated machine memory state" Oct 31 00:34:49.309051 kubelet[2505]: I1031 00:34:49.309003 2505 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 00:34:49.309338 kubelet[2505]: I1031 00:34:49.309309 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:34:49.309375 kubelet[2505]: I1031 00:34:49.309331 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:34:49.309761 kubelet[2505]: I1031 00:34:49.309739 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:34:49.311010 kubelet[2505]: E1031 00:34:49.310984 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 00:34:49.378332 kubelet[2505]: I1031 00:34:49.378279 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:34:49.378520 kubelet[2505]: I1031 00:34:49.378420 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:49.378545 kubelet[2505]: I1031 00:34:49.378513 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:49.385803 kubelet[2505]: E1031 00:34:49.385758 2505 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:49.419716 kubelet[2505]: I1031 00:34:49.419580 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:34:49.426838 kubelet[2505]: I1031 00:34:49.426797 2505 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 00:34:49.426972 kubelet[2505]: I1031 00:34:49.426898 2505 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:34:49.458703 kubelet[2505]: I1031 00:34:49.458652 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:49.458703 kubelet[2505]: I1031 00:34:49.458695 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:49.458703 kubelet[2505]: I1031 00:34:49.458716 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:34:49.458908 kubelet[2505]: I1031 00:34:49.458732 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/467bb869cbaedfadeae519d778fcd2d5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"467bb869cbaedfadeae519d778fcd2d5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:49.458908 kubelet[2505]: I1031 00:34:49.458747 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:49.458908 kubelet[2505]: I1031 00:34:49.458763 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:49.458908 kubelet[2505]: I1031 00:34:49.458778 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:34:49.458908 kubelet[2505]: I1031 00:34:49.458796 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/467bb869cbaedfadeae519d778fcd2d5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"467bb869cbaedfadeae519d778fcd2d5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:49.459021 kubelet[2505]: I1031 00:34:49.458811 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/467bb869cbaedfadeae519d778fcd2d5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"467bb869cbaedfadeae519d778fcd2d5\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:34:49.685840 kubelet[2505]: E1031 00:34:49.685624 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:49.685840 kubelet[2505]: E1031 00:34:49.685708 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:49.686088 kubelet[2505]: E1031 00:34:49.686071 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:50.242880 kubelet[2505]: I1031 00:34:50.242810 2505 apiserver.go:52] "Watching apiserver" Oct 31 00:34:50.256719 kubelet[2505]: I1031 00:34:50.256184 2505 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 00:34:50.291990 kubelet[2505]: E1031 00:34:50.291955 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:50.291990 kubelet[2505]: E1031 00:34:50.291984 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:50.292392 kubelet[2505]: E1031 00:34:50.292351 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:50.317718 kubelet[2505]: I1031 00:34:50.317648 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.31762487 podStartE2EDuration="1.31762487s" podCreationTimestamp="2025-10-31 00:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:34:50.317156615 +0000 UTC m=+1.151139393" watchObservedRunningTime="2025-10-31 00:34:50.31762487 +0000 UTC m=+1.151607648" Oct 31 00:34:50.332617 kubelet[2505]: I1031 00:34:50.332329 2505 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.332287972 podStartE2EDuration="3.332287972s" podCreationTimestamp="2025-10-31 00:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:34:50.324642642 +0000 UTC m=+1.158625420" watchObservedRunningTime="2025-10-31 00:34:50.332287972 +0000 UTC m=+1.166270750" Oct 31 00:34:50.332617 kubelet[2505]: I1031 00:34:50.332498 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.332460682 podStartE2EDuration="1.332460682s" podCreationTimestamp="2025-10-31 00:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:34:50.332458418 +0000 UTC m=+1.166441186" watchObservedRunningTime="2025-10-31 00:34:50.332460682 +0000 UTC m=+1.166443450" Oct 31 00:34:51.293377 kubelet[2505]: E1031 00:34:51.293318 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:51.293377 kubelet[2505]: E1031 00:34:51.293345 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:52.295437 kubelet[2505]: E1031 00:34:52.295368 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:52.479621 kubelet[2505]: E1031 00:34:52.479560 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:34:54.580665 kubelet[2505]: I1031 00:34:54.580545 2505 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 00:34:54.581446 kubelet[2505]: I1031 00:34:54.581330 2505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 00:34:54.581520 containerd[1461]: time="2025-10-31T00:34:54.581016883Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 00:34:55.380788 systemd[1]: Created slice kubepods-besteffort-podb4b4d92d_cc32_4133_bbe8_638b7a5c287d.slice - libcontainer container kubepods-besteffort-podb4b4d92d_cc32_4133_bbe8_638b7a5c287d.slice. 
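The "Updating runtime config through cri with podcidr" line is the kubelet pushing the node's pod CIDR down to the runtime via the CRI UpdateRuntimeConfig call; containerd acknowledges that its CNI plumbing will wait for other components to drop the config. A sketch of that single call, with the same socket-path assumption as the earlier CRI example:

```go
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// PodCidr matches the CIDR logged above.
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(ctx,
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		panic(err)
	}
}
```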
Oct 31 00:34:55.396184 kubelet[2505]: I1031 00:34:55.396083 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4b4d92d-cc32-4133-bbe8-638b7a5c287d-kube-proxy\") pod \"kube-proxy-2nj8t\" (UID: \"b4b4d92d-cc32-4133-bbe8-638b7a5c287d\") " pod="kube-system/kube-proxy-2nj8t"
Oct 31 00:34:55.396184 kubelet[2505]: I1031 00:34:55.396187 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b4d92d-cc32-4133-bbe8-638b7a5c287d-lib-modules\") pod \"kube-proxy-2nj8t\" (UID: \"b4b4d92d-cc32-4133-bbe8-638b7a5c287d\") " pod="kube-system/kube-proxy-2nj8t"
Oct 31 00:34:55.396367 kubelet[2505]: I1031 00:34:55.396215 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2cn8\" (UniqueName: \"kubernetes.io/projected/b4b4d92d-cc32-4133-bbe8-638b7a5c287d-kube-api-access-k2cn8\") pod \"kube-proxy-2nj8t\" (UID: \"b4b4d92d-cc32-4133-bbe8-638b7a5c287d\") " pod="kube-system/kube-proxy-2nj8t"
Oct 31 00:34:55.396367 kubelet[2505]: I1031 00:34:55.396243 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4b4d92d-cc32-4133-bbe8-638b7a5c287d-xtables-lock\") pod \"kube-proxy-2nj8t\" (UID: \"b4b4d92d-cc32-4133-bbe8-638b7a5c287d\") " pod="kube-system/kube-proxy-2nj8t"
Oct 31 00:34:55.653404 systemd[1]: Created slice kubepods-besteffort-podbc0a1ab2_e222_4a0d_8df0_bc50912bd4f1.slice - libcontainer container kubepods-besteffort-podbc0a1ab2_e222_4a0d_8df0_bc50912bd4f1.slice.
Oct 31 00:34:55.692573 kubelet[2505]: E1031 00:34:55.692470 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:34:55.693554 containerd[1461]: time="2025-10-31T00:34:55.693506151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nj8t,Uid:b4b4d92d-cc32-4133-bbe8-638b7a5c287d,Namespace:kube-system,Attempt:0,}"
Oct 31 00:34:55.697952 kubelet[2505]: I1031 00:34:55.697906 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc0a1ab2-e222-4a0d-8df0-bc50912bd4f1-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gm5w4\" (UID: \"bc0a1ab2-e222-4a0d-8df0-bc50912bd4f1\") " pod="tigera-operator/tigera-operator-7dcd859c48-gm5w4"
Oct 31 00:34:55.697952 kubelet[2505]: I1031 00:34:55.697957 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvv4\" (UniqueName: \"kubernetes.io/projected/bc0a1ab2-e222-4a0d-8df0-bc50912bd4f1-kube-api-access-9vvv4\") pod \"tigera-operator-7dcd859c48-gm5w4\" (UID: \"bc0a1ab2-e222-4a0d-8df0-bc50912bd4f1\") " pod="tigera-operator/tigera-operator-7dcd859c48-gm5w4"
Oct 31 00:34:55.724295 containerd[1461]: time="2025-10-31T00:34:55.724167934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:34:55.724295 containerd[1461]: time="2025-10-31T00:34:55.724250831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:34:55.724295 containerd[1461]: time="2025-10-31T00:34:55.724264036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:34:55.731372 containerd[1461]: time="2025-10-31T00:34:55.724373885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:34:55.753970 systemd[1]: Started cri-containerd-eff92126c023640eb82a10c988cd56dd8b7da820e4ee1dc9199bcfe64a9f9980.scope - libcontainer container eff92126c023640eb82a10c988cd56dd8b7da820e4ee1dc9199bcfe64a9f9980.
Oct 31 00:34:55.792465 containerd[1461]: time="2025-10-31T00:34:55.792390118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nj8t,Uid:b4b4d92d-cc32-4133-bbe8-638b7a5c287d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eff92126c023640eb82a10c988cd56dd8b7da820e4ee1dc9199bcfe64a9f9980\""
Oct 31 00:34:55.793424 kubelet[2505]: E1031 00:34:55.793392 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:34:55.796208 containerd[1461]: time="2025-10-31T00:34:55.796143393Z" level=info msg="CreateContainer within sandbox \"eff92126c023640eb82a10c988cd56dd8b7da820e4ee1dc9199bcfe64a9f9980\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 31 00:34:55.836317 containerd[1461]: time="2025-10-31T00:34:55.836230356Z" level=info msg="CreateContainer within sandbox \"eff92126c023640eb82a10c988cd56dd8b7da820e4ee1dc9199bcfe64a9f9980\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"88c2e3747620c5dc178b27efe83fc37cf15cc8bd0c640217c8e3e2f08ad09d42\""
Oct 31 00:34:55.837138 containerd[1461]: time="2025-10-31T00:34:55.836998396Z" level=info msg="StartContainer for \"88c2e3747620c5dc178b27efe83fc37cf15cc8bd0c640217c8e3e2f08ad09d42\""
Oct 31 00:34:55.868821 systemd[1]: Started cri-containerd-88c2e3747620c5dc178b27efe83fc37cf15cc8bd0c640217c8e3e2f08ad09d42.scope - libcontainer container 88c2e3747620c5dc178b27efe83fc37cf15cc8bd0c640217c8e3e2f08ad09d42.
Oct 31 00:34:55.910569 containerd[1461]: time="2025-10-31T00:34:55.910396831Z" level=info msg="StartContainer for \"88c2e3747620c5dc178b27efe83fc37cf15cc8bd0c640217c8e3e2f08ad09d42\" returns successfully"
Oct 31 00:34:55.958263 containerd[1461]: time="2025-10-31T00:34:55.958188622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gm5w4,Uid:bc0a1ab2-e222-4a0d-8df0-bc50912bd4f1,Namespace:tigera-operator,Attempt:0,}"
Oct 31 00:34:55.990698 containerd[1461]: time="2025-10-31T00:34:55.990348423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:34:55.990698 containerd[1461]: time="2025-10-31T00:34:55.990445568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:34:55.990698 containerd[1461]: time="2025-10-31T00:34:55.990499641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:34:55.990952 containerd[1461]: time="2025-10-31T00:34:55.990804401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:34:56.018902 systemd[1]: Started cri-containerd-a73b72b9d66d199a524d8132da8abdbb4ea58ce458064d610d53ef745b496d40.scope - libcontainer container a73b72b9d66d199a524d8132da8abdbb4ea58ce458064d610d53ef745b496d40.
Oct 31 00:34:56.065795 containerd[1461]: time="2025-10-31T00:34:56.065662235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gm5w4,Uid:bc0a1ab2-e222-4a0d-8df0-bc50912bd4f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a73b72b9d66d199a524d8132da8abdbb4ea58ce458064d610d53ef745b496d40\""
Oct 31 00:34:56.069214 containerd[1461]: time="2025-10-31T00:34:56.069012730Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Oct 31 00:34:56.219581 update_engine[1448]: I20251031 00:34:56.219389 1448 update_attempter.cc:509] Updating boot flags...
Oct 31 00:34:56.258641 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2752)
Oct 31 00:34:56.307398 kubelet[2505]: E1031 00:34:56.307325 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:34:56.327630 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2759)
Oct 31 00:34:56.332633 kubelet[2505]: I1031 00:34:56.330678 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2nj8t" podStartSLOduration=1.33064471 podStartE2EDuration="1.33064471s" podCreationTimestamp="2025-10-31 00:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:34:56.323697523 +0000 UTC m=+7.157680301" watchObservedRunningTime="2025-10-31 00:34:56.33064471 +0000 UTC m=+7.164627498"
Oct 31 00:34:56.855953 kubelet[2505]: E1031 00:34:56.855907 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:34:57.307821 kubelet[2505]: E1031 00:34:57.307765 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:34:57.605411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016149990.mount: Deactivated successfully.
Oct 31 00:34:58.094358 containerd[1461]: time="2025-10-31T00:34:58.094280822Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:58.095237 containerd[1461]: time="2025-10-31T00:34:58.095126196Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Oct 31 00:34:58.096644 containerd[1461]: time="2025-10-31T00:34:58.096567860Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:58.099063 containerd[1461]: time="2025-10-31T00:34:58.099032214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:34:58.099807 containerd[1461]: time="2025-10-31T00:34:58.099766386Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.030667432s"
Oct 31 00:34:58.099874 containerd[1461]: time="2025-10-31T00:34:58.099812304Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Oct 31 00:34:58.102106 containerd[1461]: time="2025-10-31T00:34:58.102076958Z" level=info msg="CreateContainer within sandbox \"a73b72b9d66d199a524d8132da8abdbb4ea58ce458064d610d53ef745b496d40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 31 00:34:58.115880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350166832.mount: Deactivated successfully.
Oct 31 00:34:58.117296 containerd[1461]: time="2025-10-31T00:34:58.117232182Z" level=info msg="CreateContainer within sandbox \"a73b72b9d66d199a524d8132da8abdbb4ea58ce458064d610d53ef745b496d40\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c2689cb1724f5003f1c2aa65c744478096fa8f69f8597fa617a2d42b932f725d\""
Oct 31 00:34:58.117948 containerd[1461]: time="2025-10-31T00:34:58.117908586Z" level=info msg="StartContainer for \"c2689cb1724f5003f1c2aa65c744478096fa8f69f8597fa617a2d42b932f725d\""
Oct 31 00:34:58.162812 systemd[1]: Started cri-containerd-c2689cb1724f5003f1c2aa65c744478096fa8f69f8597fa617a2d42b932f725d.scope - libcontainer container c2689cb1724f5003f1c2aa65c744478096fa8f69f8597fa617a2d42b932f725d.
Oct 31 00:34:58.194990 containerd[1461]: time="2025-10-31T00:34:58.194941069Z" level=info msg="StartContainer for \"c2689cb1724f5003f1c2aa65c744478096fa8f69f8597fa617a2d42b932f725d\" returns successfully"
Oct 31 00:35:01.558318 kubelet[2505]: E1031 00:35:01.558266 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:35:02.076495 kubelet[2505]: I1031 00:35:02.076330 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gm5w4" podStartSLOduration=5.043612236 podStartE2EDuration="7.076299953s" podCreationTimestamp="2025-10-31 00:34:55 +0000 UTC" firstStartedPulling="2025-10-31 00:34:56.068037688 +0000 UTC m=+6.902020466" lastFinishedPulling="2025-10-31 00:34:58.100725405 +0000 UTC m=+8.934708183" observedRunningTime="2025-10-31 00:34:58.321688573 +0000 UTC m=+9.155671351" watchObservedRunningTime="2025-10-31 00:35:02.076299953 +0000 UTC m=+12.910282721"
Oct 31 00:35:02.323692 kubelet[2505]: E1031 00:35:02.323527 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:35:02.505945 kubelet[2505]: E1031 00:35:02.505034 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:35:03.324801 kubelet[2505]: E1031 00:35:03.324749 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:35:05.817592 sudo[1643]: pam_unix(sudo:session): session closed for user root
Oct 31 00:35:05.825977 sshd[1640]: pam_unix(sshd:session): session closed for user core
Oct 31 00:35:05.836161 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit.
Oct 31 00:35:05.837377 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:38280.service: Deactivated successfully.
Oct 31 00:35:05.841618 systemd[1]: session-7.scope: Deactivated successfully.
Oct 31 00:35:05.842170 systemd[1]: session-7.scope: Consumed 5.729s CPU time, 157.5M memory peak, 0B memory swap peak.
Oct 31 00:35:05.848200 systemd-logind[1446]: Removed session 7.
Oct 31 00:35:10.360336 systemd[1]: Created slice kubepods-besteffort-pod91e90d06_3f50_48db_b50d_742f460cc861.slice - libcontainer container kubepods-besteffort-pod91e90d06_3f50_48db_b50d_742f460cc861.slice.
Oct 31 00:35:10.399684 kubelet[2505]: I1031 00:35:10.398154 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/91e90d06-3f50-48db-b50d-742f460cc861-typha-certs\") pod \"calico-typha-578969ccd6-cwmgq\" (UID: \"91e90d06-3f50-48db-b50d-742f460cc861\") " pod="calico-system/calico-typha-578969ccd6-cwmgq"
Oct 31 00:35:10.400285 kubelet[2505]: I1031 00:35:10.400217 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91e90d06-3f50-48db-b50d-742f460cc861-tigera-ca-bundle\") pod \"calico-typha-578969ccd6-cwmgq\" (UID: \"91e90d06-3f50-48db-b50d-742f460cc861\") " pod="calico-system/calico-typha-578969ccd6-cwmgq"
Oct 31 00:35:10.400285 kubelet[2505]: I1031 00:35:10.400274 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69lx7\" (UniqueName: \"kubernetes.io/projected/91e90d06-3f50-48db-b50d-742f460cc861-kube-api-access-69lx7\") pod \"calico-typha-578969ccd6-cwmgq\" (UID: \"91e90d06-3f50-48db-b50d-742f460cc861\") " pod="calico-system/calico-typha-578969ccd6-cwmgq"
Oct 31 00:35:10.404059 kubelet[2505]: I1031 00:35:10.403991 2505 status_manager.go:890] "Failed to get status for pod" podUID="4055ae0d-0688-46da-b743-5e568d6abda6" pod="calico-system/calico-node-6hjtc" err="pods \"calico-node-6hjtc\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object"
Oct 31 00:35:10.404685 kubelet[2505]: W1031 00:35:10.404542 2505 reflector.go:569] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Oct 31 00:35:10.404685 kubelet[2505]: E1031 00:35:10.404631 2505 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Oct 31 00:35:10.404829 kubelet[2505]: W1031 00:35:10.404726 2505 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Oct 31 00:35:10.404829 kubelet[2505]: E1031 00:35:10.404765 2505 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Oct 31 00:35:10.409922 systemd[1]: Created slice kubepods-besteffort-pod4055ae0d_0688_46da_b743_5e568d6abda6.slice - libcontainer container kubepods-besteffort-pod4055ae0d_0688_46da_b743_5e568d6abda6.slice.
Oct 31 00:35:10.500467 kubelet[2505]: I1031 00:35:10.500416 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-cni-net-dir\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500467 kubelet[2505]: I1031 00:35:10.500478 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-var-lib-calico\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500716 kubelet[2505]: I1031 00:35:10.500505 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-cni-log-dir\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500716 kubelet[2505]: I1031 00:35:10.500529 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-policysync\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500716 kubelet[2505]: I1031 00:35:10.500553 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-xtables-lock\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500716 kubelet[2505]: I1031 00:35:10.500594 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-flexvol-driver-host\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500716 kubelet[2505]: I1031 00:35:10.500649 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4055ae0d-0688-46da-b743-5e568d6abda6-node-certs\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500844 kubelet[2505]: I1031 00:35:10.500673 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-lib-modules\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500844 kubelet[2505]: I1031 00:35:10.500694 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cwk8\" (UniqueName: \"kubernetes.io/projected/4055ae0d-0688-46da-b743-5e568d6abda6-kube-api-access-7cwk8\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500844 kubelet[2505]: I1031 00:35:10.500721 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-cni-bin-dir\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500844 kubelet[2505]: I1031 00:35:10.500773 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4055ae0d-0688-46da-b743-5e568d6abda6-tigera-ca-bundle\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.500844 kubelet[2505]: I1031 00:35:10.500794 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4055ae0d-0688-46da-b743-5e568d6abda6-var-run-calico\") pod \"calico-node-6hjtc\" (UID: \"4055ae0d-0688-46da-b743-5e568d6abda6\") " pod="calico-system/calico-node-6hjtc"
Oct 31 00:35:10.524365 kubelet[2505]: E1031 00:35:10.524303 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302"
Oct 31 00:35:10.601817 kubelet[2505]: I1031 00:35:10.601754 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xvpq\" (UniqueName: \"kubernetes.io/projected/7b2b437c-e155-49e7-bd08-33863840f302-kube-api-access-2xvpq\") pod \"csi-node-driver-hwgh9\" (UID: \"7b2b437c-e155-49e7-bd08-33863840f302\") " pod="calico-system/csi-node-driver-hwgh9"
Oct 31 00:35:10.602004 kubelet[2505]: I1031 00:35:10.601887 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7b2b437c-e155-49e7-bd08-33863840f302-registration-dir\") pod \"csi-node-driver-hwgh9\" (UID: \"7b2b437c-e155-49e7-bd08-33863840f302\") " pod="calico-system/csi-node-driver-hwgh9"
Oct 31 00:35:10.602004 kubelet[2505]: I1031 00:35:10.601912 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7b2b437c-e155-49e7-bd08-33863840f302-socket-dir\") pod \"csi-node-driver-hwgh9\" (UID: \"7b2b437c-e155-49e7-bd08-33863840f302\") " pod="calico-system/csi-node-driver-hwgh9"
Oct 31 00:35:10.602004 kubelet[2505]: I1031 00:35:10.601937 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b2b437c-e155-49e7-bd08-33863840f302-kubelet-dir\") pod \"csi-node-driver-hwgh9\" (UID: \"7b2b437c-e155-49e7-bd08-33863840f302\") " pod="calico-system/csi-node-driver-hwgh9"
Oct 31 00:35:10.602328 kubelet[2505]: I1031 00:35:10.602100 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7b2b437c-e155-49e7-bd08-33863840f302-varrun\") pod \"csi-node-driver-hwgh9\" (UID: \"7b2b437c-e155-49e7-bd08-33863840f302\") " pod="calico-system/csi-node-driver-hwgh9"
Oct 31 00:35:10.604738 kubelet[2505]: E1031 00:35:10.604714 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.604738 kubelet[2505]: W1031 00:35:10.604734 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.604829 kubelet[2505]: E1031 00:35:10.604774 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.605103 kubelet[2505]: E1031 00:35:10.605067 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.605103 kubelet[2505]: W1031 00:35:10.605100 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.605198 kubelet[2505]: E1031 00:35:10.605129 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.605438 kubelet[2505]: E1031 00:35:10.605411 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.605438 kubelet[2505]: W1031 00:35:10.605432 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.605526 kubelet[2505]: E1031 00:35:10.605443 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.605839 kubelet[2505]: E1031 00:35:10.605822 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.605839 kubelet[2505]: W1031 00:35:10.605836 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.606023 kubelet[2505]: E1031 00:35:10.605848 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.612513 kubelet[2505]: E1031 00:35:10.612431 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.612513 kubelet[2505]: W1031 00:35:10.612451 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.612513 kubelet[2505]: E1031 00:35:10.612466 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.667280 kubelet[2505]: E1031 00:35:10.667227 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:35:10.667930 containerd[1461]: time="2025-10-31T00:35:10.667890637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578969ccd6-cwmgq,Uid:91e90d06-3f50-48db-b50d-742f460cc861,Namespace:calico-system,Attempt:0,}"
Oct 31 00:35:10.705640 kubelet[2505]: E1031 00:35:10.703278 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.705640 kubelet[2505]: W1031 00:35:10.703303 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.705640 kubelet[2505]: E1031 00:35:10.703326 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.705864 kubelet[2505]: E1031 00:35:10.705724 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.705864 kubelet[2505]: W1031 00:35:10.705739 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.705864 kubelet[2505]: E1031 00:35:10.705754 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.708615 kubelet[2505]: E1031 00:35:10.706412 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.708615 kubelet[2505]: W1031 00:35:10.706428 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.708615 kubelet[2505]: E1031 00:35:10.706439 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.708615 kubelet[2505]: E1031 00:35:10.706761 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.708615 kubelet[2505]: W1031 00:35:10.706779 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.708615 kubelet[2505]: E1031 00:35:10.706794 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.711793 kubelet[2505]: E1031 00:35:10.711764 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.711923 kubelet[2505]: W1031 00:35:10.711905 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.712006 kubelet[2505]: E1031 00:35:10.711986 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.712798 kubelet[2505]: E1031 00:35:10.712775 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.712798 kubelet[2505]: W1031 00:35:10.712794 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.712880 kubelet[2505]: E1031 00:35:10.712809 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.713025 kubelet[2505]: E1031 00:35:10.713013 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.713025 kubelet[2505]: W1031 00:35:10.713023 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.713144 kubelet[2505]: E1031 00:35:10.713127 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.713221 kubelet[2505]: E1031 00:35:10.713204 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.713221 kubelet[2505]: W1031 00:35:10.713216 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.713312 kubelet[2505]: E1031 00:35:10.713296 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.713706 kubelet[2505]: E1031 00:35:10.713691 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.713778 kubelet[2505]: W1031 00:35:10.713766 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.713911 kubelet[2505]: E1031 00:35:10.713897 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.714137 kubelet[2505]: E1031 00:35:10.714124 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.714194 kubelet[2505]: W1031 00:35:10.714184 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.714314 kubelet[2505]: E1031 00:35:10.714292 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.714507 kubelet[2505]: E1031 00:35:10.714482 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.714507 kubelet[2505]: W1031 00:35:10.714493 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.714639 kubelet[2505]: E1031 00:35:10.714626 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.714901 kubelet[2505]: E1031 00:35:10.714875 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.714901 kubelet[2505]: W1031 00:35:10.714887 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.715048 kubelet[2505]: E1031 00:35:10.715031 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.715288 kubelet[2505]: E1031 00:35:10.715275 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.715368 kubelet[2505]: W1031 00:35:10.715355 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.715443 kubelet[2505]: E1031 00:35:10.715427 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.715663 kubelet[2505]: E1031 00:35:10.715650 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.715758 kubelet[2505]: W1031 00:35:10.715730 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.715869 kubelet[2505]: E1031 00:35:10.715842 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.716126 kubelet[2505]: E1031 00:35:10.716110 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.716299 kubelet[2505]: W1031 00:35:10.716192 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.716299 kubelet[2505]: E1031 00:35:10.716236 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.717014 kubelet[2505]: E1031 00:35:10.717000 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.717155 kubelet[2505]: W1031 00:35:10.717075 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.717195 kubelet[2505]: E1031 00:35:10.717157 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.717789 kubelet[2505]: E1031 00:35:10.717776 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.717936 kubelet[2505]: W1031 00:35:10.717849 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.718147 kubelet[2505]: E1031 00:35:10.718134 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.718897 kubelet[2505]: E1031 00:35:10.718782 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.718897 kubelet[2505]: W1031 00:35:10.718796 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.719638 kubelet[2505]: E1031 00:35:10.719564 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.719834 kubelet[2505]: E1031 00:35:10.719738 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.719834 kubelet[2505]: W1031 00:35:10.719750 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.719976 kubelet[2505]: E1031 00:35:10.719931 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.720248 kubelet[2505]: E1031 00:35:10.720098 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.720248 kubelet[2505]: W1031 00:35:10.720111 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.720248 kubelet[2505]: E1031 00:35:10.720203 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.720672 kubelet[2505]: E1031 00:35:10.720529 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.720672 kubelet[2505]: W1031 00:35:10.720540 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.720786 kubelet[2505]: E1031 00:35:10.720744 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.720889 kubelet[2505]: E1031 00:35:10.720878 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.720958 kubelet[2505]: W1031 00:35:10.720941 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.721070 kubelet[2505]: E1031 00:35:10.721017 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.721528 kubelet[2505]: E1031 00:35:10.721390 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.721528 kubelet[2505]: W1031 00:35:10.721506 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.722022 kubelet[2505]: E1031 00:35:10.721983 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.722516 kubelet[2505]: E1031 00:35:10.722363 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.722516 kubelet[2505]: W1031 00:35:10.722377 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.722516 kubelet[2505]: E1031 00:35:10.722391 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.722755 kubelet[2505]: E1031 00:35:10.722740 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.722821 kubelet[2505]: W1031 00:35:10.722808 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.722925 kubelet[2505]: E1031 00:35:10.722896 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.725103 containerd[1461]: time="2025-10-31T00:35:10.724955835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:35:10.725103 containerd[1461]: time="2025-10-31T00:35:10.725068808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:35:10.725103 containerd[1461]: time="2025-10-31T00:35:10.725084638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:35:10.725261 containerd[1461]: time="2025-10-31T00:35:10.725224301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:35:10.730749 kubelet[2505]: E1031 00:35:10.730702 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:10.730749 kubelet[2505]: W1031 00:35:10.730721 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:10.730749 kubelet[2505]: E1031 00:35:10.730738 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:10.744761 systemd[1]: Started cri-containerd-f6a32fe68b8c83874c20824ab684ca3004b60590753abf11f977be0614783672.scope - libcontainer container f6a32fe68b8c83874c20824ab684ca3004b60590753abf11f977be0614783672.
Oct 31 00:35:10.784512 containerd[1461]: time="2025-10-31T00:35:10.784456544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578969ccd6-cwmgq,Uid:91e90d06-3f50-48db-b50d-742f460cc861,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6a32fe68b8c83874c20824ab684ca3004b60590753abf11f977be0614783672\""
Oct 31 00:35:10.785114 kubelet[2505]: E1031 00:35:10.785076 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:35:10.786098 containerd[1461]: time="2025-10-31T00:35:10.786077930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 31 00:35:11.237208 kubelet[2505]: E1031 00:35:11.237153 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 00:35:11.237208 kubelet[2505]: W1031 00:35:11.237189 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 00:35:11.237208 kubelet[2505]: E1031 00:35:11.237216 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 00:35:11.313574 kubelet[2505]: E1031 00:35:11.313518 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:35:11.314719 containerd[1461]: time="2025-10-31T00:35:11.314262185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6hjtc,Uid:4055ae0d-0688-46da-b743-5e568d6abda6,Namespace:calico-system,Attempt:0,}"
Oct 31 00:35:11.344141 containerd[1461]: time="2025-10-31T00:35:11.343839745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:35:11.344141 containerd[1461]: time="2025-10-31T00:35:11.343907383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:35:11.344141 containerd[1461]: time="2025-10-31T00:35:11.343924996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:35:11.344141 containerd[1461]: time="2025-10-31T00:35:11.344036827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:35:11.367885 systemd[1]: Started cri-containerd-971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1.scope - libcontainer container 971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1.
Oct 31 00:35:11.400080 containerd[1461]: time="2025-10-31T00:35:11.400031261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6hjtc,Uid:4055ae0d-0688-46da-b743-5e568d6abda6,Namespace:calico-system,Attempt:0,} returns sandbox id \"971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1\"" Oct 31 00:35:11.401242 kubelet[2505]: E1031 00:35:11.401197 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:12.277488 kubelet[2505]: E1031 00:35:12.277416 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:12.962069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993438128.mount: Deactivated successfully. Oct 31 00:35:13.891357 containerd[1461]: time="2025-10-31T00:35:13.891255526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:13.891959 containerd[1461]: time="2025-10-31T00:35:13.891861788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 00:35:13.893173 containerd[1461]: time="2025-10-31T00:35:13.893133092Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:13.895553 containerd[1461]: time="2025-10-31T00:35:13.895510469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:13.896488 containerd[1461]: time="2025-10-31T00:35:13.896453616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.110276749s" Oct 31 00:35:13.896488 containerd[1461]: time="2025-10-31T00:35:13.896481619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 00:35:13.904458 containerd[1461]: time="2025-10-31T00:35:13.903528401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 00:35:13.930994 containerd[1461]: time="2025-10-31T00:35:13.930908610Z" level=info msg="CreateContainer within sandbox \"f6a32fe68b8c83874c20824ab684ca3004b60590753abf11f977be0614783672\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 00:35:13.946885 containerd[1461]: time="2025-10-31T00:35:13.946830909Z" level=info msg="CreateContainer within sandbox \"f6a32fe68b8c83874c20824ab684ca3004b60590753abf11f977be0614783672\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"437a81665eaec6a395e41c098cfc10c77204aaf18688288a6d950cfbaf724f9e\"" Oct 31 00:35:13.950103 containerd[1461]: time="2025-10-31T00:35:13.950028281Z" level=info 
msg="StartContainer for \"437a81665eaec6a395e41c098cfc10c77204aaf18688288a6d950cfbaf724f9e\"" Oct 31 00:35:13.988767 systemd[1]: Started cri-containerd-437a81665eaec6a395e41c098cfc10c77204aaf18688288a6d950cfbaf724f9e.scope - libcontainer container 437a81665eaec6a395e41c098cfc10c77204aaf18688288a6d950cfbaf724f9e. Oct 31 00:35:14.037420 containerd[1461]: time="2025-10-31T00:35:14.037350589Z" level=info msg="StartContainer for \"437a81665eaec6a395e41c098cfc10c77204aaf18688288a6d950cfbaf724f9e\" returns successfully" Oct 31 00:35:14.289580 kubelet[2505]: E1031 00:35:14.289496 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:14.359042 kubelet[2505]: E1031 00:35:14.358056 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:14.422140 kubelet[2505]: E1031 00:35:14.422106 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.422140 kubelet[2505]: W1031 00:35:14.422127 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.424627 kubelet[2505]: E1031 00:35:14.423011 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.424627 kubelet[2505]: E1031 00:35:14.423325 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.424627 kubelet[2505]: W1031 00:35:14.423335 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.424627 kubelet[2505]: E1031 00:35:14.423346 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.424627 kubelet[2505]: E1031 00:35:14.423689 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.424627 kubelet[2505]: W1031 00:35:14.423711 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.424627 kubelet[2505]: E1031 00:35:14.423736 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:35:14.424627 kubelet[2505]: E1031 00:35:14.424039 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.424627 kubelet[2505]: W1031 00:35:14.424048 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.424627 kubelet[2505]: E1031 00:35:14.424056 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.424984 kubelet[2505]: E1031 00:35:14.424262 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.424984 kubelet[2505]: W1031 00:35:14.424269 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.424984 kubelet[2505]: E1031 00:35:14.424277 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.424984 kubelet[2505]: E1031 00:35:14.424451 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.424984 kubelet[2505]: W1031 00:35:14.424459 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.424984 kubelet[2505]: E1031 00:35:14.424466 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.424984 kubelet[2505]: E1031 00:35:14.424691 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.424984 kubelet[2505]: W1031 00:35:14.424712 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.424984 kubelet[2505]: E1031 00:35:14.424721 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.425268 kubelet[2505]: E1031 00:35:14.425000 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.425268 kubelet[2505]: W1031 00:35:14.425015 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.425268 kubelet[2505]: E1031 00:35:14.425028 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:35:14.425339 kubelet[2505]: E1031 00:35:14.425308 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.425339 kubelet[2505]: W1031 00:35:14.425319 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.425339 kubelet[2505]: E1031 00:35:14.425329 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.425552 kubelet[2505]: E1031 00:35:14.425536 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.425552 kubelet[2505]: W1031 00:35:14.425546 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.425642 kubelet[2505]: E1031 00:35:14.425556 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.425779 kubelet[2505]: E1031 00:35:14.425767 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.425779 kubelet[2505]: W1031 00:35:14.425777 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.425826 kubelet[2505]: E1031 00:35:14.425787 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.426090 kubelet[2505]: E1031 00:35:14.426053 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.426090 kubelet[2505]: W1031 00:35:14.426070 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.426090 kubelet[2505]: E1031 00:35:14.426082 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.426463 kubelet[2505]: E1031 00:35:14.426440 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.426463 kubelet[2505]: W1031 00:35:14.426455 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.426463 kubelet[2505]: E1031 00:35:14.426465 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:35:14.426783 kubelet[2505]: E1031 00:35:14.426767 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.426783 kubelet[2505]: W1031 00:35:14.426779 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.426901 kubelet[2505]: E1031 00:35:14.426789 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.427085 kubelet[2505]: E1031 00:35:14.427069 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.427085 kubelet[2505]: W1031 00:35:14.427081 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.427157 kubelet[2505]: E1031 00:35:14.427102 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.437623 kubelet[2505]: E1031 00:35:14.437574 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.437623 kubelet[2505]: W1031 00:35:14.437594 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.437623 kubelet[2505]: E1031 00:35:14.437622 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.437959 kubelet[2505]: E1031 00:35:14.437923 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.437959 kubelet[2505]: W1031 00:35:14.437949 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.438015 kubelet[2505]: E1031 00:35:14.437975 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.438292 kubelet[2505]: E1031 00:35:14.438271 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.438292 kubelet[2505]: W1031 00:35:14.438287 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.438369 kubelet[2505]: E1031 00:35:14.438302 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:35:14.438533 kubelet[2505]: E1031 00:35:14.438515 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.438533 kubelet[2505]: W1031 00:35:14.438530 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.438615 kubelet[2505]: E1031 00:35:14.438546 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.438796 kubelet[2505]: E1031 00:35:14.438780 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.438796 kubelet[2505]: W1031 00:35:14.438792 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.438856 kubelet[2505]: E1031 00:35:14.438807 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.439053 kubelet[2505]: E1031 00:35:14.439029 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.439053 kubelet[2505]: W1031 00:35:14.439043 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.439097 kubelet[2505]: E1031 00:35:14.439055 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.439350 kubelet[2505]: E1031 00:35:14.439323 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.439350 kubelet[2505]: W1031 00:35:14.439336 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.439409 kubelet[2505]: E1031 00:35:14.439378 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.439626 kubelet[2505]: E1031 00:35:14.439583 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.439626 kubelet[2505]: W1031 00:35:14.439620 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.439779 kubelet[2505]: E1031 00:35:14.439727 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:35:14.439945 kubelet[2505]: E1031 00:35:14.439924 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.439945 kubelet[2505]: W1031 00:35:14.439939 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.440020 kubelet[2505]: E1031 00:35:14.439959 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.440683 kubelet[2505]: E1031 00:35:14.440272 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.440683 kubelet[2505]: W1031 00:35:14.440673 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.440683 kubelet[2505]: E1031 00:35:14.440689 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.440985 kubelet[2505]: E1031 00:35:14.440966 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.440985 kubelet[2505]: W1031 00:35:14.440982 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.441050 kubelet[2505]: E1031 00:35:14.440995 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.441222 kubelet[2505]: E1031 00:35:14.441207 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.441222 kubelet[2505]: W1031 00:35:14.441218 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.441295 kubelet[2505]: E1031 00:35:14.441229 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.441526 kubelet[2505]: E1031 00:35:14.441504 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.441526 kubelet[2505]: W1031 00:35:14.441519 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.441634 kubelet[2505]: E1031 00:35:14.441534 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:35:14.441872 kubelet[2505]: E1031 00:35:14.441856 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.441872 kubelet[2505]: W1031 00:35:14.441870 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.441948 kubelet[2505]: E1031 00:35:14.441882 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.442104 kubelet[2505]: E1031 00:35:14.442090 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.442104 kubelet[2505]: W1031 00:35:14.442102 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.442144 kubelet[2505]: E1031 00:35:14.442110 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.442332 kubelet[2505]: E1031 00:35:14.442321 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.442332 kubelet[2505]: W1031 00:35:14.442330 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.442377 kubelet[2505]: E1031 00:35:14.442344 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.442641 kubelet[2505]: E1031 00:35:14.442625 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.442641 kubelet[2505]: W1031 00:35:14.442639 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.442695 kubelet[2505]: E1031 00:35:14.442656 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:35:14.442872 kubelet[2505]: E1031 00:35:14.442857 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:14.442872 kubelet[2505]: W1031 00:35:14.442869 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:14.442933 kubelet[2505]: E1031 00:35:14.442877 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
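
The run of kubelet records above is a single failure mode repeated once per plugin probe: the kubelet rescans its FlexVolume plugin directory, tries to exec the Calico "uds" driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, and, because that binary has not been installed on the node yet, gets empty output that fails JSON unmarshalling ("unexpected end of JSON input"). A minimal Python sketch of the driver-call contract being probed, assuming only the documented FlexVolume interface (illustrative, not Calico's actual uds binary):

    #!/usr/bin/env python3
    # Minimal FlexVolume driver sketch: the kubelet execs the driver with a
    # verb ("init" during probing) and parses stdout as JSON. An absent
    # binary yields empty stdout, which is exactly the unmarshal error in
    # the log above.
    import json
    import sys

    def main() -> None:
        verb = sys.argv[1] if len(sys.argv) > 1 else ""
        if verb == "init":
            # The minimal successful init reply defined by the FlexVolume spec.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            print(json.dumps({"status": "Not supported"}))

    if __name__ == "__main__":
        main()
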
Oct 31 00:35:14.504846 kubelet[2505]: I1031 00:35:14.504763 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-578969ccd6-cwmgq" podStartSLOduration=1.387203902 podStartE2EDuration="4.504729504s" podCreationTimestamp="2025-10-31 00:35:10 +0000 UTC" firstStartedPulling="2025-10-31 00:35:10.785820656 +0000 UTC m=+21.619803434" lastFinishedPulling="2025-10-31 00:35:13.903346258 +0000 UTC m=+24.737329036" observedRunningTime="2025-10-31 00:35:14.502483445 +0000 UTC m=+25.336466213" watchObservedRunningTime="2025-10-31 00:35:14.504729504 +0000 UTC m=+25.338712272" Oct 31 00:35:15.359010 kubelet[2505]: I1031 00:35:15.358957 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:35:15.359467 kubelet[2505]: E1031 00:35:15.359344 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:15.433634 kubelet[2505]: E1031 00:35:15.433575 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:35:15.433634 kubelet[2505]: W1031 00:35:15.433620 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:35:15.434049 kubelet[2505]: E1031 00:35:15.433649 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [… the same driver-call / FlexVolume / plugin-probe error triplet repeats for every probe attempt through Oct 31 00:35:15.454230 …] Oct 31 00:35:15.453475 containerd[1461]: time="2025-10-31T00:35:15.453435724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:15.455022 containerd[1461]: time="2025-10-31T00:35:15.454975533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 00:35:15.456075 containerd[1461]: time="2025-10-31T00:35:15.456045677Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:15.458312 containerd[1461]: time="2025-10-31T00:35:15.458272860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:15.458912 containerd[1461]: time="2025-10-31T00:35:15.458862920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.555275638s" Oct 31 00:35:15.458971 containerd[1461]: time="2025-10-31T00:35:15.458913546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 00:35:15.467588 containerd[1461]: time="2025-10-31T00:35:15.467552258Z" level=info msg="CreateContainer within sandbox \"971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 00:35:15.485167 containerd[1461]: time="2025-10-31T00:35:15.485112304Z" level=info msg="CreateContainer within sandbox \"971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524\"" Oct 31 00:35:15.485664 containerd[1461]: time="2025-10-31T00:35:15.485634718Z" level=info msg="StartContainer for \"424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524\"" Oct 31 00:35:15.528870 systemd[1]: Started cri-containerd-424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524.scope - libcontainer container 424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524. Oct 31 00:35:15.578779 systemd[1]: cri-containerd-424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524.scope: Deactivated successfully.
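
The flexvol-driver container started above is Calico's first init container: it copies the uds driver into the kubelet plugin directory named in the earlier errors and exits, which is why systemd reports its scope as Deactivated a fraction of a second after starting it, and why the "shim disconnected" / "cleaning up dead shim" records that follow are routine teardown rather than a crash. A hypothetical follow-up check (not part of this log) that the driver is now in place and answering the probe:

    #!/usr/bin/env python3
    # Hypothetical post-install check: once flexvol-driver has run, exec'ing
    # the installed binary with "init" should print a JSON status instead of
    # the empty output the kubelet kept logging while it was missing.
    import json
    import subprocess

    DRIVER = ("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
              "nodeagent~uds/uds")

    try:
        out = subprocess.run([DRIVER, "init"], capture_output=True,
                             text=True, timeout=5).stdout
        print(json.loads(out).get("status", "unknown"))  # expect "Success"
    except (FileNotFoundError, json.JSONDecodeError) as exc:
        # The same failure mode the kubelet logged while the driver was absent.
        print(f"driver not ready: {exc}")
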
Oct 31 00:35:15.881915 containerd[1461]: time="2025-10-31T00:35:15.881833728Z" level=info msg="StartContainer for \"424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524\" returns successfully" Oct 31 00:35:15.910998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524-rootfs.mount: Deactivated successfully. Oct 31 00:35:15.927192 containerd[1461]: time="2025-10-31T00:35:15.927070755Z" level=info msg="shim disconnected" id=424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524 namespace=k8s.io Oct 31 00:35:15.927192 containerd[1461]: time="2025-10-31T00:35:15.927176072Z" level=warning msg="cleaning up after shim disconnected" id=424e32d94bed2213c5434f67c995ad391424d48e32173565a98380d7b0521524 namespace=k8s.io Oct 31 00:35:15.927192 containerd[1461]: time="2025-10-31T00:35:15.927196642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:35:16.277853 kubelet[2505]: E1031 00:35:16.277801 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:16.363384 kubelet[2505]: E1031 00:35:16.363298 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:16.364963 containerd[1461]: time="2025-10-31T00:35:16.364912162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 00:35:18.277729 kubelet[2505]: E1031 00:35:18.277674 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:18.533908 kubelet[2505]: I1031 00:35:18.533709 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:35:18.534253 kubelet[2505]: E1031 00:35:18.534224 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:19.371253 kubelet[2505]: E1031 00:35:19.371199 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:19.790867 containerd[1461]: time="2025-10-31T00:35:19.790826536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:19.791741 containerd[1461]: time="2025-10-31T00:35:19.791704617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 00:35:19.793018 containerd[1461]: time="2025-10-31T00:35:19.792992790Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:19.795314 containerd[1461]: time="2025-10-31T00:35:19.795272589Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:19.796094 containerd[1461]: time="2025-10-31T00:35:19.796057154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.431095429s" Oct 31 00:35:19.796171 containerd[1461]: time="2025-10-31T00:35:19.796095316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 00:35:19.798622 containerd[1461]: time="2025-10-31T00:35:19.798492986Z" level=info msg="CreateContainer within sandbox \"971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 00:35:19.816028 containerd[1461]: time="2025-10-31T00:35:19.815966942Z" level=info msg="CreateContainer within sandbox \"971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b\"" Oct 31 00:35:19.816875 containerd[1461]: time="2025-10-31T00:35:19.816843439Z" level=info msg="StartContainer for \"1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b\"" Oct 31 00:35:19.855768 systemd[1]: Started cri-containerd-1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b.scope - libcontainer container 1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b. Oct 31 00:35:19.906909 containerd[1461]: time="2025-10-31T00:35:19.906863354Z" level=info msg="StartContainer for \"1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b\" returns successfully" Oct 31 00:35:20.278120 kubelet[2505]: E1031 00:35:20.277988 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:20.375886 kubelet[2505]: E1031 00:35:20.375837 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:20.991303 systemd[1]: cri-containerd-1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b.scope: Deactivated successfully. Oct 31 00:35:21.020341 kubelet[2505]: I1031 00:35:21.020300 2505 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 00:35:21.027382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b-rootfs.mount: Deactivated successfully. 
Oct 31 00:35:21.053148 containerd[1461]: time="2025-10-31T00:35:21.053038004Z" level=info msg="shim disconnected" id=1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b namespace=k8s.io Oct 31 00:35:21.053148 containerd[1461]: time="2025-10-31T00:35:21.053147599Z" level=warning msg="cleaning up after shim disconnected" id=1178d6e10cd83c286f6c86fc4e4094baf760beef8e15bc729213aa5d5384af5b namespace=k8s.io Oct 31 00:35:21.053748 containerd[1461]: time="2025-10-31T00:35:21.053168859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:35:21.072394 systemd[1]: Created slice kubepods-besteffort-pod34cdfd35_dce3_49cb_bd9c_4e5cde095d40.slice - libcontainer container kubepods-besteffort-pod34cdfd35_dce3_49cb_bd9c_4e5cde095d40.slice. Oct 31 00:35:21.089392 kubelet[2505]: I1031 00:35:21.083444 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4eb1f60-118e-45dc-a64e-c81dd9882514-goldmane-ca-bundle\") pod \"goldmane-666569f655-w9csd\" (UID: \"e4eb1f60-118e-45dc-a64e-c81dd9882514\") " pod="calico-system/goldmane-666569f655-w9csd" Oct 31 00:35:21.090892 kubelet[2505]: I1031 00:35:21.090581 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4668b7ce-66a9-46c0-aadc-2d2b9be34740-config-volume\") pod \"coredns-668d6bf9bc-xb6ph\" (UID: \"4668b7ce-66a9-46c0-aadc-2d2b9be34740\") " pod="kube-system/coredns-668d6bf9bc-xb6ph" Oct 31 00:35:21.090892 kubelet[2505]: I1031 00:35:21.090667 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbp7t\" (UniqueName: \"kubernetes.io/projected/4668b7ce-66a9-46c0-aadc-2d2b9be34740-kube-api-access-xbp7t\") pod \"coredns-668d6bf9bc-xb6ph\" (UID: \"4668b7ce-66a9-46c0-aadc-2d2b9be34740\") " pod="kube-system/coredns-668d6bf9bc-xb6ph" Oct 31 00:35:21.090892 kubelet[2505]: I1031 00:35:21.090700 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d0df6fc-c714-44ff-8fdd-63dc2197c8ef-tigera-ca-bundle\") pod \"calico-kube-controllers-86f5ddbf58-crgcv\" (UID: \"7d0df6fc-c714-44ff-8fdd-63dc2197c8ef\") " pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" Oct 31 00:35:21.090892 kubelet[2505]: I1031 00:35:21.090737 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdvl\" (UniqueName: \"kubernetes.io/projected/7d0df6fc-c714-44ff-8fdd-63dc2197c8ef-kube-api-access-jrdvl\") pod \"calico-kube-controllers-86f5ddbf58-crgcv\" (UID: \"7d0df6fc-c714-44ff-8fdd-63dc2197c8ef\") " pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" Oct 31 00:35:21.090892 kubelet[2505]: I1031 00:35:21.090774 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4eb1f60-118e-45dc-a64e-c81dd9882514-config\") pod \"goldmane-666569f655-w9csd\" (UID: \"e4eb1f60-118e-45dc-a64e-c81dd9882514\") " pod="calico-system/goldmane-666569f655-w9csd" Oct 31 00:35:21.091105 kubelet[2505]: I1031 00:35:21.090809 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34cdfd35-dce3-49cb-bd9c-4e5cde095d40-calico-apiserver-certs\") pod 
\"calico-apiserver-65865f79c6-scbt9\" (UID: \"34cdfd35-dce3-49cb-bd9c-4e5cde095d40\") " pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" Oct 31 00:35:21.091105 kubelet[2505]: I1031 00:35:21.090841 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfr6\" (UniqueName: \"kubernetes.io/projected/34cdfd35-dce3-49cb-bd9c-4e5cde095d40-kube-api-access-6dfr6\") pod \"calico-apiserver-65865f79c6-scbt9\" (UID: \"34cdfd35-dce3-49cb-bd9c-4e5cde095d40\") " pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" Oct 31 00:35:21.091105 kubelet[2505]: I1031 00:35:21.090872 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e4eb1f60-118e-45dc-a64e-c81dd9882514-goldmane-key-pair\") pod \"goldmane-666569f655-w9csd\" (UID: \"e4eb1f60-118e-45dc-a64e-c81dd9882514\") " pod="calico-system/goldmane-666569f655-w9csd" Oct 31 00:35:21.091504 systemd[1]: Created slice kubepods-burstable-pod4668b7ce_66a9_46c0_aadc_2d2b9be34740.slice - libcontainer container kubepods-burstable-pod4668b7ce_66a9_46c0_aadc_2d2b9be34740.slice. Oct 31 00:35:21.104425 systemd[1]: Created slice kubepods-burstable-pode37cc948_78f1_4541_9003_234551988575.slice - libcontainer container kubepods-burstable-pode37cc948_78f1_4541_9003_234551988575.slice. Oct 31 00:35:21.113976 systemd[1]: Created slice kubepods-besteffort-pod7d0df6fc_c714_44ff_8fdd_63dc2197c8ef.slice - libcontainer container kubepods-besteffort-pod7d0df6fc_c714_44ff_8fdd_63dc2197c8ef.slice. Oct 31 00:35:21.124743 systemd[1]: Created slice kubepods-besteffort-pode4eb1f60_118e_45dc_a64e_c81dd9882514.slice - libcontainer container kubepods-besteffort-pode4eb1f60_118e_45dc_a64e_c81dd9882514.slice. Oct 31 00:35:21.133671 systemd[1]: Created slice kubepods-besteffort-pod72446beb_c185_42cd_b53b_e5d495ef180d.slice - libcontainer container kubepods-besteffort-pod72446beb_c185_42cd_b53b_e5d495ef180d.slice. Oct 31 00:35:21.140510 systemd[1]: Created slice kubepods-besteffort-podcee64e5a_057c_4a2f_b352_eb76f50e925c.slice - libcontainer container kubepods-besteffort-podcee64e5a_057c_4a2f_b352_eb76f50e925c.slice. 
Oct 31 00:35:21.191408 kubelet[2505]: I1031 00:35:21.191355 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cee64e5a-057c-4a2f-b352-eb76f50e925c-calico-apiserver-certs\") pod \"calico-apiserver-65865f79c6-hslc5\" (UID: \"cee64e5a-057c-4a2f-b352-eb76f50e925c\") " pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" Oct 31 00:35:21.191815 kubelet[2505]: I1031 00:35:21.191423 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db27p\" (UniqueName: \"kubernetes.io/projected/cee64e5a-057c-4a2f-b352-eb76f50e925c-kube-api-access-db27p\") pod \"calico-apiserver-65865f79c6-hslc5\" (UID: \"cee64e5a-057c-4a2f-b352-eb76f50e925c\") " pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" Oct 31 00:35:21.191815 kubelet[2505]: I1031 00:35:21.191533 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e37cc948-78f1-4541-9003-234551988575-config-volume\") pod \"coredns-668d6bf9bc-9q6g7\" (UID: \"e37cc948-78f1-4541-9003-234551988575\") " pod="kube-system/coredns-668d6bf9bc-9q6g7" Oct 31 00:35:21.191815 kubelet[2505]: I1031 00:35:21.191579 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69sj5\" (UniqueName: \"kubernetes.io/projected/72446beb-c185-42cd-b53b-e5d495ef180d-kube-api-access-69sj5\") pod \"whisker-f49d5b744-rm846\" (UID: \"72446beb-c185-42cd-b53b-e5d495ef180d\") " pod="calico-system/whisker-f49d5b744-rm846" Oct 31 00:35:21.191815 kubelet[2505]: I1031 00:35:21.191652 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4w8m\" (UniqueName: \"kubernetes.io/projected/e4eb1f60-118e-45dc-a64e-c81dd9882514-kube-api-access-w4w8m\") pod \"goldmane-666569f655-w9csd\" (UID: \"e4eb1f60-118e-45dc-a64e-c81dd9882514\") " pod="calico-system/goldmane-666569f655-w9csd" Oct 31 00:35:21.191815 kubelet[2505]: I1031 00:35:21.191720 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkr67\" (UniqueName: \"kubernetes.io/projected/e37cc948-78f1-4541-9003-234551988575-kube-api-access-zkr67\") pod \"coredns-668d6bf9bc-9q6g7\" (UID: \"e37cc948-78f1-4541-9003-234551988575\") " pod="kube-system/coredns-668d6bf9bc-9q6g7" Oct 31 00:35:21.192011 kubelet[2505]: I1031 00:35:21.191841 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72446beb-c185-42cd-b53b-e5d495ef180d-whisker-backend-key-pair\") pod \"whisker-f49d5b744-rm846\" (UID: \"72446beb-c185-42cd-b53b-e5d495ef180d\") " pod="calico-system/whisker-f49d5b744-rm846" Oct 31 00:35:21.192011 kubelet[2505]: I1031 00:35:21.191883 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72446beb-c185-42cd-b53b-e5d495ef180d-whisker-ca-bundle\") pod \"whisker-f49d5b744-rm846\" (UID: \"72446beb-c185-42cd-b53b-e5d495ef180d\") " pod="calico-system/whisker-f49d5b744-rm846" Oct 31 00:35:21.382748 kubelet[2505]: E1031 00:35:21.382697 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 31 00:35:21.384311 containerd[1461]: time="2025-10-31T00:35:21.384276772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65865f79c6-scbt9,Uid:34cdfd35-dce3-49cb-bd9c-4e5cde095d40,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:35:21.385112 containerd[1461]: time="2025-10-31T00:35:21.385071457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 00:35:21.399555 kubelet[2505]: E1031 00:35:21.399504 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:21.400199 containerd[1461]: time="2025-10-31T00:35:21.400155377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xb6ph,Uid:4668b7ce-66a9-46c0-aadc-2d2b9be34740,Namespace:kube-system,Attempt:0,}" Oct 31 00:35:21.410714 kubelet[2505]: E1031 00:35:21.410678 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:21.411235 containerd[1461]: time="2025-10-31T00:35:21.411191226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9q6g7,Uid:e37cc948-78f1-4541-9003-234551988575,Namespace:kube-system,Attempt:0,}" Oct 31 00:35:21.419581 containerd[1461]: time="2025-10-31T00:35:21.419522250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f5ddbf58-crgcv,Uid:7d0df6fc-c714-44ff-8fdd-63dc2197c8ef,Namespace:calico-system,Attempt:0,}" Oct 31 00:35:21.428878 containerd[1461]: time="2025-10-31T00:35:21.428836563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-w9csd,Uid:e4eb1f60-118e-45dc-a64e-c81dd9882514,Namespace:calico-system,Attempt:0,}" Oct 31 00:35:21.438246 containerd[1461]: time="2025-10-31T00:35:21.438194116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f49d5b744-rm846,Uid:72446beb-c185-42cd-b53b-e5d495ef180d,Namespace:calico-system,Attempt:0,}" Oct 31 00:35:21.444192 containerd[1461]: time="2025-10-31T00:35:21.444153501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65865f79c6-hslc5,Uid:cee64e5a-057c-4a2f-b352-eb76f50e925c,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:35:21.554644 containerd[1461]: time="2025-10-31T00:35:21.554561458Z" level=error msg="Failed to destroy network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.561509 containerd[1461]: time="2025-10-31T00:35:21.561466250Z" level=error msg="encountered an error cleaning up failed sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.561735 containerd[1461]: time="2025-10-31T00:35:21.561675384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xb6ph,Uid:4668b7ce-66a9-46c0-aadc-2d2b9be34740,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.562096 kubelet[2505]: E1031 00:35:21.562022 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.562263 kubelet[2505]: E1031 00:35:21.562237 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xb6ph" Oct 31 00:35:21.562294 kubelet[2505]: E1031 00:35:21.562270 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xb6ph" Oct 31 00:35:21.562590 kubelet[2505]: E1031 00:35:21.562347 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xb6ph_kube-system(4668b7ce-66a9-46c0-aadc-2d2b9be34740)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xb6ph_kube-system(4668b7ce-66a9-46c0-aadc-2d2b9be34740)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xb6ph" podUID="4668b7ce-66a9-46c0-aadc-2d2b9be34740" Oct 31 00:35:21.590404 containerd[1461]: time="2025-10-31T00:35:21.590232747Z" level=error msg="Failed to destroy network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.590881 containerd[1461]: time="2025-10-31T00:35:21.590856690Z" level=error msg="encountered an error cleaning up failed sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.591008 containerd[1461]: time="2025-10-31T00:35:21.590987145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9q6g7,Uid:e37cc948-78f1-4541-9003-234551988575,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.591354 kubelet[2505]: E1031 00:35:21.591296 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.591416 kubelet[2505]: E1031 00:35:21.591384 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9q6g7" Oct 31 00:35:21.591416 kubelet[2505]: E1031 00:35:21.591407 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9q6g7" Oct 31 00:35:21.591476 kubelet[2505]: E1031 00:35:21.591454 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9q6g7_kube-system(e37cc948-78f1-4541-9003-234551988575)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9q6g7_kube-system(e37cc948-78f1-4541-9003-234551988575)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9q6g7" podUID="e37cc948-78f1-4541-9003-234551988575" Oct 31 00:35:21.603050 containerd[1461]: time="2025-10-31T00:35:21.603001765Z" level=error msg="Failed to destroy network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.604185 containerd[1461]: time="2025-10-31T00:35:21.603560185Z" level=error msg="encountered an error cleaning up failed sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.604185 containerd[1461]: time="2025-10-31T00:35:21.603620097Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-65865f79c6-scbt9,Uid:34cdfd35-dce3-49cb-bd9c-4e5cde095d40,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.604310 kubelet[2505]: E1031 00:35:21.603833 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.604310 kubelet[2505]: E1031 00:35:21.603887 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" Oct 31 00:35:21.604310 kubelet[2505]: E1031 00:35:21.603910 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" Oct 31 00:35:21.604410 kubelet[2505]: E1031 00:35:21.604022 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65865f79c6-scbt9_calico-apiserver(34cdfd35-dce3-49cb-bd9c-4e5cde095d40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65865f79c6-scbt9_calico-apiserver(34cdfd35-dce3-49cb-bd9c-4e5cde095d40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40" Oct 31 00:35:21.623211 containerd[1461]: time="2025-10-31T00:35:21.623054919Z" level=error msg="Failed to destroy network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.623799 containerd[1461]: time="2025-10-31T00:35:21.623689241Z" level=error msg="encountered an error cleaning up failed sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 31 00:35:21.623799 containerd[1461]: time="2025-10-31T00:35:21.623743353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f49d5b744-rm846,Uid:72446beb-c185-42cd-b53b-e5d495ef180d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.624446 kubelet[2505]: E1031 00:35:21.624087 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.624446 kubelet[2505]: E1031 00:35:21.624150 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f49d5b744-rm846" Oct 31 00:35:21.624446 kubelet[2505]: E1031 00:35:21.624179 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f49d5b744-rm846" Oct 31 00:35:21.624644 kubelet[2505]: E1031 00:35:21.624218 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f49d5b744-rm846_calico-system(72446beb-c185-42cd-b53b-e5d495ef180d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f49d5b744-rm846_calico-system(72446beb-c185-42cd-b53b-e5d495ef180d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f49d5b744-rm846" podUID="72446beb-c185-42cd-b53b-e5d495ef180d" Oct 31 00:35:21.634201 containerd[1461]: time="2025-10-31T00:35:21.634067113Z" level=error msg="Failed to destroy network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.635779 containerd[1461]: time="2025-10-31T00:35:21.635737884Z" level=error msg="encountered an error cleaning up failed sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.635837 containerd[1461]: time="2025-10-31T00:35:21.635806314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f5ddbf58-crgcv,Uid:7d0df6fc-c714-44ff-8fdd-63dc2197c8ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.636087 kubelet[2505]: E1031 00:35:21.636047 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.636152 kubelet[2505]: E1031 00:35:21.636111 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" Oct 31 00:35:21.636181 kubelet[2505]: E1031 00:35:21.636130 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" Oct 31 00:35:21.636220 kubelet[2505]: E1031 00:35:21.636194 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f5ddbf58-crgcv_calico-system(7d0df6fc-c714-44ff-8fdd-63dc2197c8ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f5ddbf58-crgcv_calico-system(7d0df6fc-c714-44ff-8fdd-63dc2197c8ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" podUID="7d0df6fc-c714-44ff-8fdd-63dc2197c8ef" Oct 31 00:35:21.648177 containerd[1461]: time="2025-10-31T00:35:21.648043342Z" level=error msg="Failed to destroy network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.648637 containerd[1461]: time="2025-10-31T00:35:21.648549803Z" level=error msg="encountered an error cleaning up failed sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.648637 containerd[1461]: time="2025-10-31T00:35:21.648626958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65865f79c6-hslc5,Uid:cee64e5a-057c-4a2f-b352-eb76f50e925c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.648911 kubelet[2505]: E1031 00:35:21.648875 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.648959 kubelet[2505]: E1031 00:35:21.648932 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" Oct 31 00:35:21.648959 kubelet[2505]: E1031 00:35:21.648953 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" Oct 31 00:35:21.649029 kubelet[2505]: E1031 00:35:21.648991 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65865f79c6-hslc5_calico-apiserver(cee64e5a-057c-4a2f-b352-eb76f50e925c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65865f79c6-hslc5_calico-apiserver(cee64e5a-057c-4a2f-b352-eb76f50e925c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c" Oct 31 00:35:21.657830 containerd[1461]: time="2025-10-31T00:35:21.657783775Z" level=error msg="Failed to destroy network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.658150 containerd[1461]: time="2025-10-31T00:35:21.658122872Z" 
level=error msg="encountered an error cleaning up failed sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.658196 containerd[1461]: time="2025-10-31T00:35:21.658167487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-w9csd,Uid:e4eb1f60-118e-45dc-a64e-c81dd9882514,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.658462 kubelet[2505]: E1031 00:35:21.658411 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:21.658525 kubelet[2505]: E1031 00:35:21.658475 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-w9csd" Oct 31 00:35:21.658525 kubelet[2505]: E1031 00:35:21.658498 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-w9csd" Oct 31 00:35:21.658794 kubelet[2505]: E1031 00:35:21.658541 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-w9csd_calico-system(e4eb1f60-118e-45dc-a64e-c81dd9882514)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-w9csd_calico-system(e4eb1f60-118e-45dc-a64e-c81dd9882514)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-w9csd" podUID="e4eb1f60-118e-45dc-a64e-c81dd9882514" Oct 31 00:35:22.284668 systemd[1]: Created slice kubepods-besteffort-pod7b2b437c_e155_49e7_bd08_33863840f302.slice - libcontainer container kubepods-besteffort-pod7b2b437c_e155_49e7_bd08_33863840f302.slice. 
Oct 31 00:35:22.287005 containerd[1461]: time="2025-10-31T00:35:22.286960424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hwgh9,Uid:7b2b437c-e155-49e7-bd08-33863840f302,Namespace:calico-system,Attempt:0,}" Oct 31 00:35:22.381950 containerd[1461]: time="2025-10-31T00:35:22.349950268Z" level=error msg="Failed to destroy network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.382331 containerd[1461]: time="2025-10-31T00:35:22.382290213Z" level=error msg="encountered an error cleaning up failed sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.382411 containerd[1461]: time="2025-10-31T00:35:22.382344326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hwgh9,Uid:7b2b437c-e155-49e7-bd08-33863840f302,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.382685 kubelet[2505]: E1031 00:35:22.382553 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.382685 kubelet[2505]: E1031 00:35:22.382641 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hwgh9" Oct 31 00:35:22.382685 kubelet[2505]: E1031 00:35:22.382663 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hwgh9" Oct 31 00:35:22.383912 kubelet[2505]: E1031 00:35:22.382700 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hwgh9_calico-system(7b2b437c-e155-49e7-bd08-33863840f302)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hwgh9_calico-system(7b2b437c-e155-49e7-bd08-33863840f302)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:22.384430 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794-shm.mount: Deactivated successfully. Oct 31 00:35:22.385918 kubelet[2505]: I1031 00:35:22.385756 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:22.386865 kubelet[2505]: I1031 00:35:22.386828 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:22.388398 kubelet[2505]: I1031 00:35:22.388377 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:22.390037 kubelet[2505]: I1031 00:35:22.390013 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:22.417629 containerd[1461]: time="2025-10-31T00:35:22.415268379Z" level=info msg="StopPodSandbox for \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\"" Oct 31 00:35:22.417629 containerd[1461]: time="2025-10-31T00:35:22.415844571Z" level=info msg="StopPodSandbox for \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\"" Oct 31 00:35:22.417629 containerd[1461]: time="2025-10-31T00:35:22.416487520Z" level=info msg="StopPodSandbox for \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\"" Oct 31 00:35:22.417629 containerd[1461]: time="2025-10-31T00:35:22.417525612Z" level=info msg="StopPodSandbox for \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\"" Oct 31 00:35:22.417891 kubelet[2505]: I1031 00:35:22.415877 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:22.418485 containerd[1461]: time="2025-10-31T00:35:22.418456291Z" level=info msg="Ensure that sandbox 38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140 in task-service has been cleanup successfully" Oct 31 00:35:22.418544 containerd[1461]: time="2025-10-31T00:35:22.418473764Z" level=info msg="Ensure that sandbox 71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6 in task-service has been cleanup successfully" Oct 31 00:35:22.418644 containerd[1461]: time="2025-10-31T00:35:22.418593229Z" level=info msg="Ensure that sandbox 2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794 in task-service has been cleanup successfully" Oct 31 00:35:22.419039 containerd[1461]: time="2025-10-31T00:35:22.418482350Z" level=info msg="Ensure that sandbox 163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409 in task-service has been cleanup successfully" Oct 31 00:35:22.427836 containerd[1461]: time="2025-10-31T00:35:22.418622874Z" level=info msg="StopPodSandbox for \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\"" Oct 31 00:35:22.428651 kubelet[2505]: I1031 00:35:22.428625 2505 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:22.429209 containerd[1461]: time="2025-10-31T00:35:22.428936082Z" level=info msg="Ensure that sandbox c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961 in task-service has been cleanup successfully" Oct 31 00:35:22.431016 containerd[1461]: time="2025-10-31T00:35:22.430843437Z" level=info msg="StopPodSandbox for \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\"" Oct 31 00:35:22.431405 containerd[1461]: time="2025-10-31T00:35:22.431375047Z" level=info msg="Ensure that sandbox caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f in task-service has been cleanup successfully" Oct 31 00:35:22.434176 kubelet[2505]: I1031 00:35:22.434141 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:22.436079 containerd[1461]: time="2025-10-31T00:35:22.434716839Z" level=info msg="StopPodSandbox for \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\"" Oct 31 00:35:22.436079 containerd[1461]: time="2025-10-31T00:35:22.434914992Z" level=info msg="Ensure that sandbox 9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603 in task-service has been cleanup successfully" Oct 31 00:35:22.437040 kubelet[2505]: I1031 00:35:22.437011 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:22.442406 containerd[1461]: time="2025-10-31T00:35:22.440820694Z" level=info msg="StopPodSandbox for \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\"" Oct 31 00:35:22.442406 containerd[1461]: time="2025-10-31T00:35:22.441048522Z" level=info msg="Ensure that sandbox 1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989 in task-service has been cleanup successfully" Oct 31 00:35:22.478947 containerd[1461]: time="2025-10-31T00:35:22.478883427Z" level=error msg="StopPodSandbox for \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\" failed" error="failed to destroy network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.480191 kubelet[2505]: E1031 00:35:22.480134 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:22.480272 kubelet[2505]: E1031 00:35:22.480219 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794"} Oct 31 00:35:22.480330 kubelet[2505]: E1031 00:35:22.480291 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b2b437c-e155-49e7-bd08-33863840f302\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:35:22.480330 kubelet[2505]: E1031 00:35:22.480321 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b2b437c-e155-49e7-bd08-33863840f302\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:22.493682 containerd[1461]: time="2025-10-31T00:35:22.493521565Z" level=error msg="StopPodSandbox for \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\" failed" error="failed to destroy network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.493954 kubelet[2505]: E1031 00:35:22.493918 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:22.494028 kubelet[2505]: E1031 00:35:22.493996 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961"} Oct 31 00:35:22.494101 kubelet[2505]: E1031 00:35:22.494029 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4eb1f60-118e-45dc-a64e-c81dd9882514\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:35:22.494101 kubelet[2505]: E1031 00:35:22.494049 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4eb1f60-118e-45dc-a64e-c81dd9882514\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-w9csd" podUID="e4eb1f60-118e-45dc-a64e-c81dd9882514" Oct 31 00:35:22.499624 containerd[1461]: time="2025-10-31T00:35:22.497866333Z" level=error msg="StopPodSandbox for \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\" failed" 
error="failed to destroy network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.499792 kubelet[2505]: E1031 00:35:22.498575 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:22.499792 kubelet[2505]: E1031 00:35:22.498639 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409"} Oct 31 00:35:22.499792 kubelet[2505]: E1031 00:35:22.498673 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34cdfd35-dce3-49cb-bd9c-4e5cde095d40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:35:22.499792 kubelet[2505]: E1031 00:35:22.498710 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34cdfd35-dce3-49cb-bd9c-4e5cde095d40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40" Oct 31 00:35:22.502448 containerd[1461]: time="2025-10-31T00:35:22.502386200Z" level=error msg="StopPodSandbox for \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\" failed" error="failed to destroy network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.502845 kubelet[2505]: E1031 00:35:22.502800 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:22.502971 kubelet[2505]: E1031 00:35:22.502863 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6"} Oct 31 00:35:22.502971 kubelet[2505]: E1031 
00:35:22.502908 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4668b7ce-66a9-46c0-aadc-2d2b9be34740\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:35:22.502971 kubelet[2505]: E1031 00:35:22.502939 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4668b7ce-66a9-46c0-aadc-2d2b9be34740\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xb6ph" podUID="4668b7ce-66a9-46c0-aadc-2d2b9be34740" Oct 31 00:35:22.505593 containerd[1461]: time="2025-10-31T00:35:22.505533236Z" level=error msg="StopPodSandbox for \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\" failed" error="failed to destroy network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.506059 kubelet[2505]: E1031 00:35:22.506020 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:22.506166 kubelet[2505]: E1031 00:35:22.506065 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140"} Oct 31 00:35:22.506166 kubelet[2505]: E1031 00:35:22.506112 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e37cc948-78f1-4541-9003-234551988575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:35:22.506166 kubelet[2505]: E1031 00:35:22.506141 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e37cc948-78f1-4541-9003-234551988575\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-9q6g7" podUID="e37cc948-78f1-4541-9003-234551988575" Oct 31 00:35:22.512708 containerd[1461]: time="2025-10-31T00:35:22.512645265Z" level=error msg="StopPodSandbox for \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\" failed" error="failed to destroy network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.513086 kubelet[2505]: E1031 00:35:22.513041 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:22.513189 kubelet[2505]: E1031 00:35:22.513106 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989"} Oct 31 00:35:22.513189 kubelet[2505]: E1031 00:35:22.513167 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72446beb-c185-42cd-b53b-e5d495ef180d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:35:22.513304 kubelet[2505]: E1031 00:35:22.513209 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72446beb-c185-42cd-b53b-e5d495ef180d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f49d5b744-rm846" podUID="72446beb-c185-42cd-b53b-e5d495ef180d" Oct 31 00:35:22.515187 containerd[1461]: time="2025-10-31T00:35:22.515138802Z" level=error msg="StopPodSandbox for \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\" failed" error="failed to destroy network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.515696 kubelet[2505]: E1031 00:35:22.515622 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 
31 00:35:22.515696 kubelet[2505]: E1031 00:35:22.515672 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603"} Oct 31 00:35:22.515819 kubelet[2505]: E1031 00:35:22.515711 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cee64e5a-057c-4a2f-b352-eb76f50e925c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:35:22.515819 kubelet[2505]: E1031 00:35:22.515751 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cee64e5a-057c-4a2f-b352-eb76f50e925c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c" Oct 31 00:35:22.516406 containerd[1461]: time="2025-10-31T00:35:22.516359167Z" level=error msg="StopPodSandbox for \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\" failed" error="failed to destroy network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:35:22.516699 kubelet[2505]: E1031 00:35:22.516634 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:22.516699 kubelet[2505]: E1031 00:35:22.516667 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f"} Oct 31 00:35:22.516699 kubelet[2505]: E1031 00:35:22.516696 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d0df6fc-c714-44ff-8fdd-63dc2197c8ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:35:22.516869 kubelet[2505]: E1031 00:35:22.516715 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d0df6fc-c714-44ff-8fdd-63dc2197c8ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" podUID="7d0df6fc-c714-44ff-8fdd-63dc2197c8ef" Oct 31 00:35:29.334163 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:45194.service - OpenSSH per-connection server daemon (10.0.0.1:45194). Oct 31 00:35:29.429408 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 45194 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:35:29.430589 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:35:29.438452 systemd-logind[1446]: New session 8 of user core. Oct 31 00:35:29.443953 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 00:35:29.673236 sshd[3735]: pam_unix(sshd:session): session closed for user core Oct 31 00:35:29.679062 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:45194.service: Deactivated successfully. Oct 31 00:35:29.683633 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 00:35:29.686723 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Oct 31 00:35:29.689412 systemd-logind[1446]: Removed session 8. Oct 31 00:35:30.017489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185925225.mount: Deactivated successfully. Oct 31 00:35:31.889397 containerd[1461]: time="2025-10-31T00:35:31.889281561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:31.890402 containerd[1461]: time="2025-10-31T00:35:31.890333958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 00:35:31.891913 containerd[1461]: time="2025-10-31T00:35:31.891854454Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:31.895178 containerd[1461]: time="2025-10-31T00:35:31.895119976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:35:31.896210 containerd[1461]: time="2025-10-31T00:35:31.896146033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.51103436s" Oct 31 00:35:31.896210 containerd[1461]: time="2025-10-31T00:35:31.896195816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 00:35:31.907422 containerd[1461]: time="2025-10-31T00:35:31.907372144Z" level=info msg="CreateContainer within sandbox \"971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 00:35:31.995519 containerd[1461]: time="2025-10-31T00:35:31.995449673Z" level=info msg="CreateContainer within sandbox 
\"971aa15c7e59305614c93f30e157a056ed61d7b46ba3e512bda6906f839ccab1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8b240408f8035f1b89365be8d55dd4b7da9634bd01b7892aac14e865982ec39a\"" Oct 31 00:35:31.996303 containerd[1461]: time="2025-10-31T00:35:31.996242863Z" level=info msg="StartContainer for \"8b240408f8035f1b89365be8d55dd4b7da9634bd01b7892aac14e865982ec39a\"" Oct 31 00:35:32.065055 systemd[1]: Started cri-containerd-8b240408f8035f1b89365be8d55dd4b7da9634bd01b7892aac14e865982ec39a.scope - libcontainer container 8b240408f8035f1b89365be8d55dd4b7da9634bd01b7892aac14e865982ec39a. Oct 31 00:35:32.286917 containerd[1461]: time="2025-10-31T00:35:32.286840336Z" level=info msg="StartContainer for \"8b240408f8035f1b89365be8d55dd4b7da9634bd01b7892aac14e865982ec39a\" returns successfully" Oct 31 00:35:32.315232 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 00:35:32.315428 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 00:35:32.404492 containerd[1461]: time="2025-10-31T00:35:32.404422314Z" level=info msg="StopPodSandbox for \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\"" Oct 31 00:35:32.521449 kubelet[2505]: E1031 00:35:32.520896 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:32.551296 kubelet[2505]: I1031 00:35:32.548925 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6hjtc" podStartSLOduration=2.05357706 podStartE2EDuration="22.548902057s" podCreationTimestamp="2025-10-31 00:35:10 +0000 UTC" firstStartedPulling="2025-10-31 00:35:11.401714123 +0000 UTC m=+22.235696901" lastFinishedPulling="2025-10-31 00:35:31.89703911 +0000 UTC m=+42.731021898" observedRunningTime="2025-10-31 00:35:32.548626982 +0000 UTC m=+43.382609790" watchObservedRunningTime="2025-10-31 00:35:32.548902057 +0000 UTC m=+43.382884835" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.504 [INFO][3816] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.504 [INFO][3816] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" iface="eth0" netns="/var/run/netns/cni-e63429e5-b364-b692-7af8-4c7e0be253ab" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.505 [INFO][3816] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" iface="eth0" netns="/var/run/netns/cni-e63429e5-b364-b692-7af8-4c7e0be253ab" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.508 [INFO][3816] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" iface="eth0" netns="/var/run/netns/cni-e63429e5-b364-b692-7af8-4c7e0be253ab" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.508 [INFO][3816] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.508 [INFO][3816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.604 [INFO][3826] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.606 [INFO][3826] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.606 [INFO][3826] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.615 [WARNING][3826] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.615 [INFO][3826] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.620 [INFO][3826] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:32.628991 containerd[1461]: 2025-10-31 00:35:32.625 [INFO][3816] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:32.630451 containerd[1461]: time="2025-10-31T00:35:32.629121188Z" level=info msg="TearDown network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\" successfully" Oct 31 00:35:32.630451 containerd[1461]: time="2025-10-31T00:35:32.629154450Z" level=info msg="StopPodSandbox for \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\" returns successfully" Oct 31 00:35:32.674166 kubelet[2505]: I1031 00:35:32.672977 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72446beb-c185-42cd-b53b-e5d495ef180d-whisker-backend-key-pair\") pod \"72446beb-c185-42cd-b53b-e5d495ef180d\" (UID: \"72446beb-c185-42cd-b53b-e5d495ef180d\") " Oct 31 00:35:32.674166 kubelet[2505]: I1031 00:35:32.673050 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69sj5\" (UniqueName: \"kubernetes.io/projected/72446beb-c185-42cd-b53b-e5d495ef180d-kube-api-access-69sj5\") pod \"72446beb-c185-42cd-b53b-e5d495ef180d\" (UID: \"72446beb-c185-42cd-b53b-e5d495ef180d\") " Oct 31 00:35:32.674166 kubelet[2505]: I1031 00:35:32.673097 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72446beb-c185-42cd-b53b-e5d495ef180d-whisker-ca-bundle\") pod \"72446beb-c185-42cd-b53b-e5d495ef180d\" (UID: \"72446beb-c185-42cd-b53b-e5d495ef180d\") " Oct 31 00:35:32.674166 kubelet[2505]: I1031 00:35:32.673759 2505 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72446beb-c185-42cd-b53b-e5d495ef180d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "72446beb-c185-42cd-b53b-e5d495ef180d" (UID: "72446beb-c185-42cd-b53b-e5d495ef180d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:35:32.678499 kubelet[2505]: I1031 00:35:32.678399 2505 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72446beb-c185-42cd-b53b-e5d495ef180d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "72446beb-c185-42cd-b53b-e5d495ef180d" (UID: "72446beb-c185-42cd-b53b-e5d495ef180d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:35:32.678808 kubelet[2505]: I1031 00:35:32.678779 2505 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72446beb-c185-42cd-b53b-e5d495ef180d-kube-api-access-69sj5" (OuterVolumeSpecName: "kube-api-access-69sj5") pod "72446beb-c185-42cd-b53b-e5d495ef180d" (UID: "72446beb-c185-42cd-b53b-e5d495ef180d"). InnerVolumeSpecName "kube-api-access-69sj5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:35:32.773753 kubelet[2505]: I1031 00:35:32.773673 2505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-69sj5\" (UniqueName: \"kubernetes.io/projected/72446beb-c185-42cd-b53b-e5d495ef180d-kube-api-access-69sj5\") on node \"localhost\" DevicePath \"\"" Oct 31 00:35:32.773753 kubelet[2505]: I1031 00:35:32.773727 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72446beb-c185-42cd-b53b-e5d495ef180d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 00:35:32.773753 kubelet[2505]: I1031 00:35:32.773743 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72446beb-c185-42cd-b53b-e5d495ef180d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 00:35:32.905924 systemd[1]: run-netns-cni\x2de63429e5\x2db364\x2db692\x2d7af8\x2d4c7e0be253ab.mount: Deactivated successfully. Oct 31 00:35:32.906075 systemd[1]: var-lib-kubelet-pods-72446beb\x2dc185\x2d42cd\x2db53b\x2de5d495ef180d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d69sj5.mount: Deactivated successfully. Oct 31 00:35:32.906178 systemd[1]: var-lib-kubelet-pods-72446beb\x2dc185\x2d42cd\x2db53b\x2de5d495ef180d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 00:35:33.281119 containerd[1461]: time="2025-10-31T00:35:33.281076391Z" level=info msg="StopPodSandbox for \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\"" Oct 31 00:35:33.289919 systemd[1]: Removed slice kubepods-besteffort-pod72446beb_c185_42cd_b53b_e5d495ef180d.slice - libcontainer container kubepods-besteffort-pod72446beb_c185_42cd_b53b_e5d495ef180d.slice. Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.330 [INFO][3882] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.330 [INFO][3882] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" iface="eth0" netns="/var/run/netns/cni-6b944a63-b7ef-b365-72de-a71be54de99c" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.331 [INFO][3882] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" iface="eth0" netns="/var/run/netns/cni-6b944a63-b7ef-b365-72de-a71be54de99c" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.331 [INFO][3882] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" iface="eth0" netns="/var/run/netns/cni-6b944a63-b7ef-b365-72de-a71be54de99c" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.331 [INFO][3882] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.331 [INFO][3882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.355 [INFO][3891] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.356 [INFO][3891] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.356 [INFO][3891] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.364 [WARNING][3891] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.364 [INFO][3891] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.365 [INFO][3891] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:33.372777 containerd[1461]: 2025-10-31 00:35:33.369 [INFO][3882] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:33.373246 containerd[1461]: time="2025-10-31T00:35:33.372987612Z" level=info msg="TearDown network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\" successfully" Oct 31 00:35:33.373246 containerd[1461]: time="2025-10-31T00:35:33.373019121Z" level=info msg="StopPodSandbox for \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\" returns successfully" Oct 31 00:35:33.375560 systemd[1]: run-netns-cni\x2d6b944a63\x2db7ef\x2db365\x2d72de\x2da71be54de99c.mount: Deactivated successfully. Oct 31 00:35:33.376117 containerd[1461]: time="2025-10-31T00:35:33.376081882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-w9csd,Uid:e4eb1f60-118e-45dc-a64e-c81dd9882514,Namespace:calico-system,Attempt:1,}" Oct 31 00:35:33.524026 kubelet[2505]: E1031 00:35:33.523974 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:33.549974 systemd[1]: run-containerd-runc-k8s.io-8b240408f8035f1b89365be8d55dd4b7da9634bd01b7892aac14e865982ec39a-runc.Rs5MI8.mount: Deactivated successfully. 
Oct 31 00:35:33.662772 systemd[1]: Created slice kubepods-besteffort-pod3ef5500a_a708_4695_baa1_1af98ae528f8.slice - libcontainer container kubepods-besteffort-pod3ef5500a_a708_4695_baa1_1af98ae528f8.slice. Oct 31 00:35:33.680995 kubelet[2505]: I1031 00:35:33.680896 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ef5500a-a708-4695-baa1-1af98ae528f8-whisker-ca-bundle\") pod \"whisker-896688878-jrzt2\" (UID: \"3ef5500a-a708-4695-baa1-1af98ae528f8\") " pod="calico-system/whisker-896688878-jrzt2" Oct 31 00:35:33.680995 kubelet[2505]: I1031 00:35:33.680994 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ef5500a-a708-4695-baa1-1af98ae528f8-whisker-backend-key-pair\") pod \"whisker-896688878-jrzt2\" (UID: \"3ef5500a-a708-4695-baa1-1af98ae528f8\") " pod="calico-system/whisker-896688878-jrzt2" Oct 31 00:35:33.680995 kubelet[2505]: I1031 00:35:33.681019 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7lxv\" (UniqueName: \"kubernetes.io/projected/3ef5500a-a708-4695-baa1-1af98ae528f8-kube-api-access-s7lxv\") pod \"whisker-896688878-jrzt2\" (UID: \"3ef5500a-a708-4695-baa1-1af98ae528f8\") " pod="calico-system/whisker-896688878-jrzt2" Oct 31 00:35:33.827640 systemd-networkd[1387]: cali306e55d6f29: Link UP Oct 31 00:35:33.828759 systemd-networkd[1387]: cali306e55d6f29: Gained carrier Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.689 [INFO][3922] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.704 [INFO][3922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--w9csd-eth0 goldmane-666569f655- calico-system e4eb1f60-118e-45dc-a64e-c81dd9882514 948 0 2025-10-31 00:35:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-w9csd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali306e55d6f29 [] [] }} ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Namespace="calico-system" Pod="goldmane-666569f655-w9csd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--w9csd-" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.705 [INFO][3922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Namespace="calico-system" Pod="goldmane-666569f655-w9csd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.750 [INFO][3936] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" HandleID="k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.751 [INFO][3936] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" 
HandleID="k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-w9csd", "timestamp":"2025-10-31 00:35:33.750363226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.751 [INFO][3936] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.751 [INFO][3936] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.751 [INFO][3936] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.761 [INFO][3936] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.771 [INFO][3936] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.779 [INFO][3936] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.785 [INFO][3936] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.792 [INFO][3936] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.792 [INFO][3936] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.795 [INFO][3936] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.801 [INFO][3936] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.807 [INFO][3936] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.807 [INFO][3936] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" host="localhost" Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.807 [INFO][3936] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:35:33.877559 containerd[1461]: 2025-10-31 00:35:33.807 [INFO][3936] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" HandleID="k8s-pod-network.af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.880908 containerd[1461]: 2025-10-31 00:35:33.811 [INFO][3922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Namespace="calico-system" Pod="goldmane-666569f655-w9csd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--w9csd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--w9csd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e4eb1f60-118e-45dc-a64e-c81dd9882514", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-w9csd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali306e55d6f29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:33.880908 containerd[1461]: 2025-10-31 00:35:33.812 [INFO][3922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Namespace="calico-system" Pod="goldmane-666569f655-w9csd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.880908 containerd[1461]: 2025-10-31 00:35:33.812 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali306e55d6f29 ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Namespace="calico-system" Pod="goldmane-666569f655-w9csd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.880908 containerd[1461]: 2025-10-31 00:35:33.837 [INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Namespace="calico-system" Pod="goldmane-666569f655-w9csd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.880908 containerd[1461]: 2025-10-31 00:35:33.838 [INFO][3922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Namespace="calico-system" Pod="goldmane-666569f655-w9csd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--w9csd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--w9csd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e4eb1f60-118e-45dc-a64e-c81dd9882514", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb", Pod:"goldmane-666569f655-w9csd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali306e55d6f29", MAC:"9a:ca:90:3f:34:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:33.880908 containerd[1461]: 2025-10-31 00:35:33.861 [INFO][3922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb" Namespace="calico-system" Pod="goldmane-666569f655-w9csd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:33.940903 containerd[1461]: time="2025-10-31T00:35:33.940660053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:35:33.940903 containerd[1461]: time="2025-10-31T00:35:33.940855109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:35:33.942822 containerd[1461]: time="2025-10-31T00:35:33.940878894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:33.942822 containerd[1461]: time="2025-10-31T00:35:33.941058000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:33.975384 containerd[1461]: time="2025-10-31T00:35:33.975321162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-896688878-jrzt2,Uid:3ef5500a-a708-4695-baa1-1af98ae528f8,Namespace:calico-system,Attempt:0,}" Oct 31 00:35:33.988939 systemd[1]: Started cri-containerd-af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb.scope - libcontainer container af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb. 
Oct 31 00:35:34.033520 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:35:34.114509 containerd[1461]: time="2025-10-31T00:35:34.114063616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-w9csd,Uid:e4eb1f60-118e-45dc-a64e-c81dd9882514,Namespace:calico-system,Attempt:1,} returns sandbox id \"af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb\"" Oct 31 00:35:34.117765 kernel: bpftool[4150]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 31 00:35:34.119943 containerd[1461]: time="2025-10-31T00:35:34.119826506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:35:34.167374 systemd-networkd[1387]: cali2f2a794f017: Link UP Oct 31 00:35:34.168499 systemd-networkd[1387]: cali2f2a794f017: Gained carrier Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.069 [INFO][4092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--896688878--jrzt2-eth0 whisker-896688878- calico-system 3ef5500a-a708-4695-baa1-1af98ae528f8 966 0 2025-10-31 00:35:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:896688878 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-896688878-jrzt2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2f2a794f017 [] [] }} ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Namespace="calico-system" Pod="whisker-896688878-jrzt2" WorkloadEndpoint="localhost-k8s-whisker--896688878--jrzt2-" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.069 [INFO][4092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Namespace="calico-system" Pod="whisker-896688878-jrzt2" WorkloadEndpoint="localhost-k8s-whisker--896688878--jrzt2-eth0" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.117 [INFO][4123] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" HandleID="k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Workload="localhost-k8s-whisker--896688878--jrzt2-eth0" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.118 [INFO][4123] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" HandleID="k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Workload="localhost-k8s-whisker--896688878--jrzt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-896688878-jrzt2", "timestamp":"2025-10-31 00:35:34.117850145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.118 [INFO][4123] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.118 [INFO][4123] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.118 [INFO][4123] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.130 [INFO][4123] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.135 [INFO][4123] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.140 [INFO][4123] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.141 [INFO][4123] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.144 [INFO][4123] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.144 [INFO][4123] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.146 [INFO][4123] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4 Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.151 [INFO][4123] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.157 [INFO][4123] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.157 [INFO][4123] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" host="localhost" Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.157 [INFO][4123] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
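
[Editor's note] The recurring kubelet dns.go:153 errors ("Nameserver limits exceeded", at 00:35:32.520, 00:35:33.523 and again shortly below) indicate the host resolv.conf lists more nameservers than the resolver supports; kubelet keeps only the first three (the classic resolv.conf limit), which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that truncation, with a hypothetical fourth server standing in for whatever was dropped:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolv.conf limit (glibc MAXNS = 3)
// that kubelet enforces when it builds a pod's resolver config.
const maxNameservers = 3

// applyLimit keeps the first maxNameservers entries, matching the
// "applied nameserver line" in the log. Purely illustrative.
func applyLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println("applied nameserver line is:", strings.Join(applyLimit(host), " "))
	// Output: applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8
}
```
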
Oct 31 00:35:34.189914 containerd[1461]: 2025-10-31 00:35:34.157 [INFO][4123] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" HandleID="k8s-pod-network.c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Workload="localhost-k8s-whisker--896688878--jrzt2-eth0" Oct 31 00:35:34.190885 containerd[1461]: 2025-10-31 00:35:34.161 [INFO][4092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Namespace="calico-system" Pod="whisker-896688878-jrzt2" WorkloadEndpoint="localhost-k8s-whisker--896688878--jrzt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--896688878--jrzt2-eth0", GenerateName:"whisker-896688878-", Namespace:"calico-system", SelfLink:"", UID:"3ef5500a-a708-4695-baa1-1af98ae528f8", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"896688878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-896688878-jrzt2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f2a794f017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:34.190885 containerd[1461]: 2025-10-31 00:35:34.161 [INFO][4092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Namespace="calico-system" Pod="whisker-896688878-jrzt2" WorkloadEndpoint="localhost-k8s-whisker--896688878--jrzt2-eth0" Oct 31 00:35:34.190885 containerd[1461]: 2025-10-31 00:35:34.161 [INFO][4092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f2a794f017 ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Namespace="calico-system" Pod="whisker-896688878-jrzt2" WorkloadEndpoint="localhost-k8s-whisker--896688878--jrzt2-eth0" Oct 31 00:35:34.190885 containerd[1461]: 2025-10-31 00:35:34.169 [INFO][4092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Namespace="calico-system" Pod="whisker-896688878-jrzt2" WorkloadEndpoint="localhost-k8s-whisker--896688878--jrzt2-eth0" Oct 31 00:35:34.190885 containerd[1461]: 2025-10-31 00:35:34.170 [INFO][4092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Namespace="calico-system" Pod="whisker-896688878-jrzt2" WorkloadEndpoint="localhost-k8s-whisker--896688878--jrzt2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--896688878--jrzt2-eth0", GenerateName:"whisker-896688878-", Namespace:"calico-system", SelfLink:"", UID:"3ef5500a-a708-4695-baa1-1af98ae528f8", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"896688878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4", Pod:"whisker-896688878-jrzt2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f2a794f017", MAC:"9a:e7:c6:50:a6:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:34.190885 containerd[1461]: 2025-10-31 00:35:34.184 [INFO][4092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4" Namespace="calico-system" Pod="whisker-896688878-jrzt2" WorkloadEndpoint="localhost-k8s-whisker--896688878--jrzt2-eth0" Oct 31 00:35:34.217731 containerd[1461]: time="2025-10-31T00:35:34.217451952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:35:34.217731 containerd[1461]: time="2025-10-31T00:35:34.217518938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:35:34.217731 containerd[1461]: time="2025-10-31T00:35:34.217531501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:34.217731 containerd[1461]: time="2025-10-31T00:35:34.217673958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:34.239840 systemd[1]: Started cri-containerd-c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4.scope - libcontainer container c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4. 
Oct 31 00:35:34.255570 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:35:34.283877 containerd[1461]: time="2025-10-31T00:35:34.283828000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-896688878-jrzt2,Uid:3ef5500a-a708-4695-baa1-1af98ae528f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c090c010bd073ef3488e37206b80ec5ae07bb89fa7e66033c784dcef5b6c49d4\"" Oct 31 00:35:34.432859 systemd-networkd[1387]: vxlan.calico: Link UP Oct 31 00:35:34.435960 systemd-networkd[1387]: vxlan.calico: Gained carrier Oct 31 00:35:34.459096 containerd[1461]: time="2025-10-31T00:35:34.459014682Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:34.528257 kubelet[2505]: E1031 00:35:34.527865 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:34.587224 containerd[1461]: time="2025-10-31T00:35:34.567447158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:35:34.587407 containerd[1461]: time="2025-10-31T00:35:34.567497051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:35:34.587706 kubelet[2505]: E1031 00:35:34.587659 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:35:34.587760 kubelet[2505]: E1031 00:35:34.587727 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:35:34.589255 kubelet[2505]: E1031 00:35:34.589152 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-w9csd_calico-system(e4eb1f60-118e-45dc-a64e-c81dd9882514): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:34.589781 containerd[1461]: time="2025-10-31T00:35:34.589750421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:35:34.590932 kubelet[2505]: E1031 00:35:34.590817 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-w9csd" podUID="e4eb1f60-118e-45dc-a64e-c81dd9882514" Oct 31 00:35:34.688880 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:37588.service - OpenSSH per-connection server daemon (10.0.0.1:37588). Oct 31 00:35:34.738968 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 37588 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:35:34.740434 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:35:34.751951 systemd-logind[1446]: New session 9 of user core. Oct 31 00:35:34.757822 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 00:35:34.932691 sshd[4266]: pam_unix(sshd:session): session closed for user core Oct 31 00:35:34.936947 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:37588.service: Deactivated successfully. Oct 31 00:35:34.939095 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 00:35:34.939681 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Oct 31 00:35:34.940669 systemd-logind[1446]: Removed session 9. Oct 31 00:35:34.982525 containerd[1461]: time="2025-10-31T00:35:34.982448328Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:35.074192 containerd[1461]: time="2025-10-31T00:35:35.074068931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:35:35.074373 containerd[1461]: time="2025-10-31T00:35:35.074240303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:35:35.074754 kubelet[2505]: E1031 00:35:35.074636 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:35:35.074754 kubelet[2505]: E1031 00:35:35.074707 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:35:35.075060 kubelet[2505]: E1031 00:35:35.074878 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2e67ff6c39ea42bba590dd40441d38de,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7lxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-896688878-jrzt2_calico-system(3ef5500a-a708-4695-baa1-1af98ae528f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:35.078425 containerd[1461]: time="2025-10-31T00:35:35.078359776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:35:35.279437 containerd[1461]: time="2025-10-31T00:35:35.279032993Z" level=info msg="StopPodSandbox for \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\"" Oct 31 00:35:35.282348 kubelet[2505]: I1031 00:35:35.282238 2505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72446beb-c185-42cd-b53b-e5d495ef180d" path="/var/lib/kubelet/pods/72446beb-c185-42cd-b53b-e5d495ef180d/volumes" Oct 31 00:35:35.319004 systemd-networkd[1387]: cali2f2a794f017: Gained IPv6LL Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.332 [INFO][4324] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.332 [INFO][4324] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" iface="eth0" netns="/var/run/netns/cni-1420abf4-851f-47a4-4b3d-d51669d56d11" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.332 [INFO][4324] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" iface="eth0" netns="/var/run/netns/cni-1420abf4-851f-47a4-4b3d-d51669d56d11" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.334 [INFO][4324] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" iface="eth0" netns="/var/run/netns/cni-1420abf4-851f-47a4-4b3d-d51669d56d11" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.334 [INFO][4324] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.334 [INFO][4324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.358 [INFO][4333] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.358 [INFO][4333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.358 [INFO][4333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.364 [WARNING][4333] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.364 [INFO][4333] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.366 [INFO][4333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:35.371710 containerd[1461]: 2025-10-31 00:35:35.368 [INFO][4324] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:35.374762 containerd[1461]: time="2025-10-31T00:35:35.374715829Z" level=info msg="TearDown network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\" successfully" Oct 31 00:35:35.374873 containerd[1461]: time="2025-10-31T00:35:35.374847607Z" level=info msg="StopPodSandbox for \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\" returns successfully" Oct 31 00:35:35.375311 kubelet[2505]: E1031 00:35:35.375266 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:35.375378 systemd[1]: run-netns-cni\x2d1420abf4\x2d851f\x2d47a4\x2d4b3d\x2dd51669d56d11.mount: Deactivated successfully. 
Oct 31 00:35:35.375753 containerd[1461]: time="2025-10-31T00:35:35.375719323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9q6g7,Uid:e37cc948-78f1-4541-9003-234551988575,Namespace:kube-system,Attempt:1,}" Oct 31 00:35:35.435647 containerd[1461]: time="2025-10-31T00:35:35.435406845Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:35.436767 containerd[1461]: time="2025-10-31T00:35:35.436706304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:35:35.436862 containerd[1461]: time="2025-10-31T00:35:35.436757381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:35:35.437109 kubelet[2505]: E1031 00:35:35.437044 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:35:35.437288 kubelet[2505]: E1031 00:35:35.437116 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:35:35.437367 kubelet[2505]: E1031 00:35:35.437304 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-896688878-jrzt2_calico-system(3ef5500a-a708-4695-baa1-1af98ae528f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:35.438862 kubelet[2505]: E1031 00:35:35.438791 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-896688878-jrzt2" podUID="3ef5500a-a708-4695-baa1-1af98ae528f8" Oct 31 00:35:35.518345 systemd-networkd[1387]: cali5d2752c8ecd: Link UP Oct 31 00:35:35.519541 systemd-networkd[1387]: cali5d2752c8ecd: Gained carrier Oct 31 00:35:35.532062 kubelet[2505]: E1031 00:35:35.531293 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-w9csd" podUID="e4eb1f60-118e-45dc-a64e-c81dd9882514" Oct 31 00:35:35.536518 kubelet[2505]: E1031 00:35:35.536454 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-896688878-jrzt2" podUID="3ef5500a-a708-4695-baa1-1af98ae528f8" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.429 [INFO][4342] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0 coredns-668d6bf9bc- kube-system e37cc948-78f1-4541-9003-234551988575 988 0 2025-10-31 00:34:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9q6g7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5d2752c8ecd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q6g7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q6g7-" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.429 [INFO][4342] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q6g7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.464 [INFO][4355] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" HandleID="k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.465 [INFO][4355] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" HandleID="k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139480), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9q6g7", "timestamp":"2025-10-31 00:35:35.464907147 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.465 [INFO][4355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.465 [INFO][4355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.465 [INFO][4355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.473 [INFO][4355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.485 [INFO][4355] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.491 [INFO][4355] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.493 [INFO][4355] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.496 [INFO][4355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.496 [INFO][4355] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.498 [INFO][4355] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3 Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.504 [INFO][4355] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.511 [INFO][4355] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.511 [INFO][4355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" host="localhost" Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.511 [INFO][4355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
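The IPAM entries above are one complete assignment: take the host-wide lock, confirm this host's affinity to the 192.168.88.128/26 block, claim the next free address, write the block back ("Writing block in order to claim IPs"), and release the lock — yielding 192.168.88.131 for the coredns pod. A simplified sketch of the claim step, assuming an in-memory block rather than Calico's transactional datastore write, and skipping the reserved addresses Calico also excludes:

package main

import (
	"fmt"
	"net/netip"
)

// assignFromBlock claims the next free address in an affine block for the
// given handle. Simplified: Calico persists the updated block so concurrent
// hosts can detect conflicting claims; this sketch just mutates a map.
func assignFromBlock(block netip.Prefix, allocated map[netip.Addr]string, handle string) (netip.Addr, error) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if _, taken := allocated[a]; !taken {
			allocated[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", block)
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]string{ // .128-.130 already in use on this host
		netip.MustParseAddr("192.168.88.128"): "reserved",
		netip.MustParseAddr("192.168.88.129"): "handle-a",
		netip.MustParseAddr("192.168.88.130"): "handle-b",
	}
	ip, err := assignFromBlock(block, allocated, "k8s-pod-network.7ab168fe")
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned", ip) // 192.168.88.131, matching the log above
}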
Oct 31 00:35:35.540449 containerd[1461]: 2025-10-31 00:35:35.511 [INFO][4355] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" HandleID="k8s-pod-network.7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.541201 containerd[1461]: 2025-10-31 00:35:35.515 [INFO][4342] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q6g7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e37cc948-78f1-4541-9003-234551988575", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9q6g7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d2752c8ecd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:35.541201 containerd[1461]: 2025-10-31 00:35:35.515 [INFO][4342] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q6g7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.541201 containerd[1461]: 2025-10-31 00:35:35.515 [INFO][4342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d2752c8ecd ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q6g7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.541201 containerd[1461]: 2025-10-31 00:35:35.520 [INFO][4342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q6g7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.541201 
containerd[1461]: 2025-10-31 00:35:35.520 [INFO][4342] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q6g7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e37cc948-78f1-4541-9003-234551988575", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3", Pod:"coredns-668d6bf9bc-9q6g7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d2752c8ecd", MAC:"0a:c6:ca:57:50:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:35.541201 containerd[1461]: 2025-10-31 00:35:35.535 [INFO][4342] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-9q6g7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:35.571777 containerd[1461]: time="2025-10-31T00:35:35.571576599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:35:35.571777 containerd[1461]: time="2025-10-31T00:35:35.571757278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:35:35.571998 containerd[1461]: time="2025-10-31T00:35:35.571808333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:35.571998 containerd[1461]: time="2025-10-31T00:35:35.571967251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:35.601037 systemd[1]: Started cri-containerd-7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3.scope - libcontainer container 7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3. Oct 31 00:35:35.617773 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:35:35.649572 containerd[1461]: time="2025-10-31T00:35:35.649296673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9q6g7,Uid:e37cc948-78f1-4541-9003-234551988575,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3\"" Oct 31 00:35:35.650637 kubelet[2505]: E1031 00:35:35.650566 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:35.652416 containerd[1461]: time="2025-10-31T00:35:35.652347331Z" level=info msg="CreateContainer within sandbox \"7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:35:35.681718 containerd[1461]: time="2025-10-31T00:35:35.681659739Z" level=info msg="CreateContainer within sandbox \"7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d6afffc77bfe65d320ef47d5b98ccd33b88aae7cc1a1717e1e2c95fcdf4fecd\"" Oct 31 00:35:35.682463 containerd[1461]: time="2025-10-31T00:35:35.682363990Z" level=info msg="StartContainer for \"3d6afffc77bfe65d320ef47d5b98ccd33b88aae7cc1a1717e1e2c95fcdf4fecd\"" Oct 31 00:35:35.720876 systemd[1]: Started cri-containerd-3d6afffc77bfe65d320ef47d5b98ccd33b88aae7cc1a1717e1e2c95fcdf4fecd.scope - libcontainer container 3d6afffc77bfe65d320ef47d5b98ccd33b88aae7cc1a1717e1e2c95fcdf4fecd. Oct 31 00:35:35.819489 containerd[1461]: time="2025-10-31T00:35:35.819326843Z" level=info msg="StartContainer for \"3d6afffc77bfe65d320ef47d5b98ccd33b88aae7cc1a1717e1e2c95fcdf4fecd\" returns successfully" Oct 31 00:35:35.830104 systemd-networkd[1387]: cali306e55d6f29: Gained IPv6LL Oct 31 00:35:36.213960 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Oct 31 00:35:36.278166 containerd[1461]: time="2025-10-31T00:35:36.278111370Z" level=info msg="StopPodSandbox for \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\"" Oct 31 00:35:36.278431 containerd[1461]: time="2025-10-31T00:35:36.278193344Z" level=info msg="StopPodSandbox for \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\"" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.347 [INFO][4480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.349 [INFO][4480] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" iface="eth0" netns="/var/run/netns/cni-5927c6d2-2177-834a-943a-1aabfe24106d" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.349 [INFO][4480] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" iface="eth0" netns="/var/run/netns/cni-5927c6d2-2177-834a-943a-1aabfe24106d" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.350 [INFO][4480] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" iface="eth0" netns="/var/run/netns/cni-5927c6d2-2177-834a-943a-1aabfe24106d" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.350 [INFO][4480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.350 [INFO][4480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.381 [INFO][4497] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.381 [INFO][4497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.381 [INFO][4497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.390 [WARNING][4497] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.390 [INFO][4497] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.392 [INFO][4497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:36.397769 containerd[1461]: 2025-10-31 00:35:36.394 [INFO][4480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:36.400789 containerd[1461]: time="2025-10-31T00:35:36.400728476Z" level=info msg="TearDown network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\" successfully" Oct 31 00:35:36.400789 containerd[1461]: time="2025-10-31T00:35:36.400776185Z" level=info msg="StopPodSandbox for \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\" returns successfully" Oct 31 00:35:36.402513 containerd[1461]: time="2025-10-31T00:35:36.401798104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65865f79c6-scbt9,Uid:34cdfd35-dce3-49cb-bd9c-4e5cde095d40,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:35:36.404803 systemd[1]: run-netns-cni\x2d5927c6d2\x2d2177\x2d834a\x2d943a\x2d1aabfe24106d.mount: Deactivated successfully. 
Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.352 [INFO][4481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.352 [INFO][4481] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" iface="eth0" netns="/var/run/netns/cni-7173c922-a620-9a0f-a96f-693e03a3b42d" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.353 [INFO][4481] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" iface="eth0" netns="/var/run/netns/cni-7173c922-a620-9a0f-a96f-693e03a3b42d" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.353 [INFO][4481] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" iface="eth0" netns="/var/run/netns/cni-7173c922-a620-9a0f-a96f-693e03a3b42d" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.353 [INFO][4481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.353 [INFO][4481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.401 [INFO][4501] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.402 [INFO][4501] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.402 [INFO][4501] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.410 [WARNING][4501] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.410 [INFO][4501] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.411 [INFO][4501] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:36.419515 containerd[1461]: 2025-10-31 00:35:36.415 [INFO][4481] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:36.419981 containerd[1461]: time="2025-10-31T00:35:36.419729835Z" level=info msg="TearDown network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\" successfully" Oct 31 00:35:36.419981 containerd[1461]: time="2025-10-31T00:35:36.419760042Z" level=info msg="StopPodSandbox for \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\" returns successfully" Oct 31 00:35:36.420516 containerd[1461]: time="2025-10-31T00:35:36.420492516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65865f79c6-hslc5,Uid:cee64e5a-057c-4a2f-b352-eb76f50e925c,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:35:36.424904 systemd[1]: run-netns-cni\x2d7173c922\x2da620\x2d9a0f\x2da96f\x2d693e03a3b42d.mount: Deactivated successfully. Oct 31 00:35:36.536114 kubelet[2505]: E1031 00:35:36.535895 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:36.549340 kubelet[2505]: I1031 00:35:36.549124 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9q6g7" podStartSLOduration=41.549104104 podStartE2EDuration="41.549104104s" podCreationTimestamp="2025-10-31 00:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:35:36.548521691 +0000 UTC m=+47.382504479" watchObservedRunningTime="2025-10-31 00:35:36.549104104 +0000 UTC m=+47.383086893" Oct 31 00:35:36.588516 systemd-networkd[1387]: cali43fb05a998f: Link UP Oct 31 00:35:36.590081 systemd-networkd[1387]: cali43fb05a998f: Gained carrier Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.487 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0 calico-apiserver-65865f79c6- calico-apiserver 34cdfd35-dce3-49cb-bd9c-4e5cde095d40 1022 0 2025-10-31 00:35:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65865f79c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65865f79c6-scbt9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali43fb05a998f [] [] }} ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-scbt9" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--scbt9-" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.487 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-scbt9" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.522 [INFO][4543] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" HandleID="k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" 
Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.522 [INFO][4543] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" HandleID="k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65865f79c6-scbt9", "timestamp":"2025-10-31 00:35:36.522081388 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.522 [INFO][4543] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.522 [INFO][4543] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.522 [INFO][4543] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.529 [INFO][4543] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.539 [INFO][4543] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.547 [INFO][4543] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.553 [INFO][4543] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.556 [INFO][4543] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.557 [INFO][4543] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.558 [INFO][4543] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.566 [INFO][4543] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.577 [INFO][4543] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.577 [INFO][4543] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" host="localhost" Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.577 
[INFO][4543] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:36.614848 containerd[1461]: 2025-10-31 00:35:36.577 [INFO][4543] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" HandleID="k8s-pod-network.bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.615790 containerd[1461]: 2025-10-31 00:35:36.582 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-scbt9" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0", GenerateName:"calico-apiserver-65865f79c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"34cdfd35-dce3-49cb-bd9c-4e5cde095d40", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65865f79c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65865f79c6-scbt9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43fb05a998f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:36.615790 containerd[1461]: 2025-10-31 00:35:36.583 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-scbt9" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.615790 containerd[1461]: 2025-10-31 00:35:36.583 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43fb05a998f ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-scbt9" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.615790 containerd[1461]: 2025-10-31 00:35:36.590 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-scbt9" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.615790 containerd[1461]: 2025-10-31 00:35:36.595 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-scbt9" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0", GenerateName:"calico-apiserver-65865f79c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"34cdfd35-dce3-49cb-bd9c-4e5cde095d40", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65865f79c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a", Pod:"calico-apiserver-65865f79c6-scbt9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43fb05a998f", MAC:"b2:38:f9:7f:52:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:36.615790 containerd[1461]: 2025-10-31 00:35:36.608 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-scbt9" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:36.654873 containerd[1461]: time="2025-10-31T00:35:36.654256746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:35:36.654873 containerd[1461]: time="2025-10-31T00:35:36.654333480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:35:36.654873 containerd[1461]: time="2025-10-31T00:35:36.654347947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:36.654873 containerd[1461]: time="2025-10-31T00:35:36.654455530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:36.685888 systemd[1]: Started cri-containerd-bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a.scope - libcontainer container bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a. 
Oct 31 00:35:36.691866 systemd-networkd[1387]: cali0a22e4140b6: Link UP Oct 31 00:35:36.693333 systemd-networkd[1387]: cali0a22e4140b6: Gained carrier Oct 31 00:35:36.710477 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.488 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0 calico-apiserver-65865f79c6- calico-apiserver cee64e5a-057c-4a2f-b352-eb76f50e925c 1023 0 2025-10-31 00:35:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65865f79c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65865f79c6-hslc5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0a22e4140b6 [] [] }} ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-hslc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--hslc5-" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.488 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-hslc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.522 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" HandleID="k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.522 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" HandleID="k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65865f79c6-hslc5", "timestamp":"2025-10-31 00:35:36.522732831 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.523 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.578 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.579 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.630 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.640 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.650 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.653 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.660 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.661 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.663 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.668 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.676 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.677 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" host="localhost" Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.677 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:35:36.714978 containerd[1461]: 2025-10-31 00:35:36.677 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" HandleID="k8s-pod-network.23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.715823 containerd[1461]: 2025-10-31 00:35:36.685 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-hslc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0", GenerateName:"calico-apiserver-65865f79c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"cee64e5a-057c-4a2f-b352-eb76f50e925c", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65865f79c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65865f79c6-hslc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a22e4140b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:36.715823 containerd[1461]: 2025-10-31 00:35:36.685 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-hslc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.715823 containerd[1461]: 2025-10-31 00:35:36.685 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a22e4140b6 ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-hslc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.715823 containerd[1461]: 2025-10-31 00:35:36.694 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-hslc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.715823 containerd[1461]: 2025-10-31 00:35:36.695 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-hslc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0", GenerateName:"calico-apiserver-65865f79c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"cee64e5a-057c-4a2f-b352-eb76f50e925c", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65865f79c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca", Pod:"calico-apiserver-65865f79c6-hslc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a22e4140b6", MAC:"52:32:a4:a2:56:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:36.715823 containerd[1461]: 2025-10-31 00:35:36.708 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca" Namespace="calico-apiserver" Pod="calico-apiserver-65865f79c6-hslc5" WorkloadEndpoint="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:36.725814 systemd-networkd[1387]: cali5d2752c8ecd: Gained IPv6LL Oct 31 00:35:36.755494 containerd[1461]: time="2025-10-31T00:35:36.755329880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:35:36.755832 containerd[1461]: time="2025-10-31T00:35:36.755715584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:35:36.760731 containerd[1461]: time="2025-10-31T00:35:36.758337375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:36.760731 containerd[1461]: time="2025-10-31T00:35:36.758567317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:36.776089 containerd[1461]: time="2025-10-31T00:35:36.775997968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65865f79c6-scbt9,Uid:34cdfd35-dce3-49cb-bd9c-4e5cde095d40,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a\"" Oct 31 00:35:36.779664 containerd[1461]: time="2025-10-31T00:35:36.778996626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:35:36.792832 systemd[1]: Started cri-containerd-23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca.scope - libcontainer container 23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca. Oct 31 00:35:36.810629 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:35:36.844038 containerd[1461]: time="2025-10-31T00:35:36.843982788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65865f79c6-hslc5,Uid:cee64e5a-057c-4a2f-b352-eb76f50e925c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca\"" Oct 31 00:35:37.119411 containerd[1461]: time="2025-10-31T00:35:37.119226309Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:37.120377 containerd[1461]: time="2025-10-31T00:35:37.120332025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:35:37.120463 containerd[1461]: time="2025-10-31T00:35:37.120426251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:35:37.120649 kubelet[2505]: E1031 00:35:37.120576 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:35:37.120741 kubelet[2505]: E1031 00:35:37.120650 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:35:37.121004 kubelet[2505]: E1031 00:35:37.120934 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dfr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65865f79c6-scbt9_calico-apiserver(34cdfd35-dce3-49cb-bd9c-4e5cde095d40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:37.121116 containerd[1461]: time="2025-10-31T00:35:37.121073226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:35:37.122667 kubelet[2505]: E1031 00:35:37.122633 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40" Oct 31 00:35:37.278892 containerd[1461]: time="2025-10-31T00:35:37.278783319Z" level=info msg="StopPodSandbox for \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\"" Oct 31 00:35:37.279546 containerd[1461]: time="2025-10-31T00:35:37.279256317Z" level=info msg="StopPodSandbox for \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\"" Oct 31 00:35:37.279671 containerd[1461]: time="2025-10-31T00:35:37.279633515Z" level=info msg="StopPodSandbox for \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\"" Oct 31 00:35:37.498169 containerd[1461]: 
time="2025-10-31T00:35:37.497964169Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:37.520976 containerd[1461]: time="2025-10-31T00:35:37.520880754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:35:37.521474 containerd[1461]: time="2025-10-31T00:35:37.521029623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:35:37.521556 kubelet[2505]: E1031 00:35:37.521492 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:35:37.521632 kubelet[2505]: E1031 00:35:37.521557 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:35:37.522678 kubelet[2505]: E1031 00:35:37.521797 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db27p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65865f79c6-hslc5_calico-apiserver(cee64e5a-057c-4a2f-b352-eb76f50e925c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:37.523018 kubelet[2505]: E1031 00:35:37.522977 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c" Oct 31 00:35:37.545641 kubelet[2505]: E1031 00:35:37.545263 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:37.545641 kubelet[2505]: E1031 00:35:37.545408 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c" Oct 31 00:35:37.548223 kubelet[2505]: E1031 00:35:37.547884 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.525 [INFO][4696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 
00:35:37.526 [INFO][4696] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" iface="eth0" netns="/var/run/netns/cni-a29f7ee8-fcdd-ad29-bf7f-1a1bd48bc982" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.528 [INFO][4696] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" iface="eth0" netns="/var/run/netns/cni-a29f7ee8-fcdd-ad29-bf7f-1a1bd48bc982" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.528 [INFO][4696] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" iface="eth0" netns="/var/run/netns/cni-a29f7ee8-fcdd-ad29-bf7f-1a1bd48bc982" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.528 [INFO][4696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.528 [INFO][4696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.561 [INFO][4720] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.561 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.564 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.585 [WARNING][4720] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.585 [INFO][4720] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.591 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:37.607066 containerd[1461]: 2025-10-31 00:35:37.601 [INFO][4696] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:37.607893 containerd[1461]: time="2025-10-31T00:35:37.607250187Z" level=info msg="TearDown network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\" successfully" Oct 31 00:35:37.607893 containerd[1461]: time="2025-10-31T00:35:37.607278640Z" level=info msg="StopPodSandbox for \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\" returns successfully" Oct 31 00:35:37.611369 containerd[1461]: time="2025-10-31T00:35:37.610874089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f5ddbf58-crgcv,Uid:7d0df6fc-c714-44ff-8fdd-63dc2197c8ef,Namespace:calico-system,Attempt:1,}" Oct 31 00:35:37.613790 systemd[1]: run-netns-cni\x2da29f7ee8\x2dfcdd\x2dad29\x2dbf7f\x2d1a1bd48bc982.mount: Deactivated successfully. Oct 31 00:35:37.622957 systemd-networkd[1387]: cali43fb05a998f: Gained IPv6LL Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.528 [INFO][4697] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.528 [INFO][4697] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" iface="eth0" netns="/var/run/netns/cni-6262f1a5-d00a-db89-98cc-a1e9685e7573" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.529 [INFO][4697] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" iface="eth0" netns="/var/run/netns/cni-6262f1a5-d00a-db89-98cc-a1e9685e7573" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.532 [INFO][4697] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" iface="eth0" netns="/var/run/netns/cni-6262f1a5-d00a-db89-98cc-a1e9685e7573" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.532 [INFO][4697] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.532 [INFO][4697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.604 [INFO][4726] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.604 [INFO][4726] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.604 [INFO][4726] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.616 [WARNING][4726] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.617 [INFO][4726] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.620 [INFO][4726] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:37.638219 containerd[1461]: 2025-10-31 00:35:37.634 [INFO][4697] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:37.640238 containerd[1461]: time="2025-10-31T00:35:37.638888484Z" level=info msg="TearDown network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\" successfully" Oct 31 00:35:37.640238 containerd[1461]: time="2025-10-31T00:35:37.638924972Z" level=info msg="StopPodSandbox for \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\" returns successfully" Oct 31 00:35:37.642250 kubelet[2505]: E1031 00:35:37.642065 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:37.642944 systemd[1]: run-netns-cni\x2d6262f1a5\x2dd00a\x2ddb89\x2d98cc\x2da1e9685e7573.mount: Deactivated successfully. Oct 31 00:35:37.645760 containerd[1461]: time="2025-10-31T00:35:37.644302846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xb6ph,Uid:4668b7ce-66a9-46c0-aadc-2d2b9be34740,Namespace:kube-system,Attempt:1,}" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.532 [INFO][4698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.532 [INFO][4698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" iface="eth0" netns="/var/run/netns/cni-ddb67087-bcd2-2ac9-8d41-5e3789ec0ece" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.537 [INFO][4698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" iface="eth0" netns="/var/run/netns/cni-ddb67087-bcd2-2ac9-8d41-5e3789ec0ece" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.538 [INFO][4698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" iface="eth0" netns="/var/run/netns/cni-ddb67087-bcd2-2ac9-8d41-5e3789ec0ece" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.538 [INFO][4698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.538 [INFO][4698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.603 [INFO][4729] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.606 [INFO][4729] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.621 [INFO][4729] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.635 [WARNING][4729] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.635 [INFO][4729] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.640 [INFO][4729] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:37.650996 containerd[1461]: 2025-10-31 00:35:37.647 [INFO][4698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:37.651431 containerd[1461]: time="2025-10-31T00:35:37.651250778Z" level=info msg="TearDown network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\" successfully" Oct 31 00:35:37.651431 containerd[1461]: time="2025-10-31T00:35:37.651329036Z" level=info msg="StopPodSandbox for \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\" returns successfully" Oct 31 00:35:37.652649 containerd[1461]: time="2025-10-31T00:35:37.652571718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hwgh9,Uid:7b2b437c-e155-49e7-bd08-33863840f302,Namespace:calico-system,Attempt:1,}" Oct 31 00:35:37.654065 systemd[1]: run-netns-cni\x2dddb67087\x2dbcd2\x2d2ac9\x2d8d41\x2d5e3789ec0ece.mount: Deactivated successfully. 
Oct 31 00:35:38.026705 systemd-networkd[1387]: cali33b8591650b: Link UP Oct 31 00:35:38.028131 systemd-networkd[1387]: cali33b8591650b: Gained carrier Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.788 [INFO][4749] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0 calico-kube-controllers-86f5ddbf58- calico-system 7d0df6fc-c714-44ff-8fdd-63dc2197c8ef 1048 0 2025-10-31 00:35:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86f5ddbf58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-86f5ddbf58-crgcv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali33b8591650b [] [] }} ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Namespace="calico-system" Pod="calico-kube-controllers-86f5ddbf58-crgcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.788 [INFO][4749] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Namespace="calico-system" Pod="calico-kube-controllers-86f5ddbf58-crgcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.821 [INFO][4792] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" HandleID="k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.821 [INFO][4792] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" HandleID="k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86f5ddbf58-crgcv", "timestamp":"2025-10-31 00:35:37.821235242 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.821 [INFO][4792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.821 [INFO][4792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.821 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.866 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.873 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.891 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.897 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.900 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.900 [INFO][4792] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.902 [INFO][4792] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:37.952 [INFO][4792] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:38.019 [INFO][4792] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:38.019 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" host="localhost" Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:38.019 [INFO][4792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:35:38.060059 containerd[1461]: 2025-10-31 00:35:38.019 [INFO][4792] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" HandleID="k8s-pod-network.04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:38.060923 containerd[1461]: 2025-10-31 00:35:38.023 [INFO][4749] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Namespace="calico-system" Pod="calico-kube-controllers-86f5ddbf58-crgcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0", GenerateName:"calico-kube-controllers-86f5ddbf58-", Namespace:"calico-system", SelfLink:"", UID:"7d0df6fc-c714-44ff-8fdd-63dc2197c8ef", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f5ddbf58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86f5ddbf58-crgcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33b8591650b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:38.060923 containerd[1461]: 2025-10-31 00:35:38.024 [INFO][4749] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Namespace="calico-system" Pod="calico-kube-controllers-86f5ddbf58-crgcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:38.060923 containerd[1461]: 2025-10-31 00:35:38.024 [INFO][4749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33b8591650b ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Namespace="calico-system" Pod="calico-kube-controllers-86f5ddbf58-crgcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:38.060923 containerd[1461]: 2025-10-31 00:35:38.027 [INFO][4749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Namespace="calico-system" Pod="calico-kube-controllers-86f5ddbf58-crgcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:38.060923 containerd[1461]: 2025-10-31 00:35:38.028 [INFO][4749] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Namespace="calico-system" Pod="calico-kube-controllers-86f5ddbf58-crgcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0", GenerateName:"calico-kube-controllers-86f5ddbf58-", Namespace:"calico-system", SelfLink:"", UID:"7d0df6fc-c714-44ff-8fdd-63dc2197c8ef", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f5ddbf58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e", Pod:"calico-kube-controllers-86f5ddbf58-crgcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33b8591650b", MAC:"76:f0:42:0e:bf:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:38.060923 containerd[1461]: 2025-10-31 00:35:38.054 [INFO][4749] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e" Namespace="calico-system" Pod="calico-kube-controllers-86f5ddbf58-crgcv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:38.095681 systemd-networkd[1387]: cali6526a5c070a: Link UP Oct 31 00:35:38.097746 systemd-networkd[1387]: cali6526a5c070a: Gained carrier Oct 31 00:35:38.115666 containerd[1461]: time="2025-10-31T00:35:38.115517520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:35:38.115666 containerd[1461]: time="2025-10-31T00:35:38.115577582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:35:38.115924 containerd[1461]: time="2025-10-31T00:35:38.115593712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:38.117798 containerd[1461]: time="2025-10-31T00:35:38.116785610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:37.808 [INFO][4770] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hwgh9-eth0 csi-node-driver- calico-system 7b2b437c-e155-49e7-bd08-33863840f302 1051 0 2025-10-31 00:35:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hwgh9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6526a5c070a [] [] }} ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Namespace="calico-system" Pod="csi-node-driver-hwgh9" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwgh9-" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:37.809 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Namespace="calico-system" Pod="csi-node-driver-hwgh9" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:37.897 [INFO][4805] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" HandleID="k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:37.897 [INFO][4805] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" HandleID="k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a7250), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hwgh9", "timestamp":"2025-10-31 00:35:37.897404638 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:37.897 [INFO][4805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.019 [INFO][4805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.019 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.027 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.038 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.057 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.060 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.063 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.063 [INFO][4805] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.066 [INFO][4805] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.074 [INFO][4805] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.086 [INFO][4805] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.086 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" host="localhost" Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.086 [INFO][4805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:35:38.127359 containerd[1461]: 2025-10-31 00:35:38.086 [INFO][4805] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" HandleID="k8s-pod-network.b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:38.128003 containerd[1461]: 2025-10-31 00:35:38.091 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Namespace="calico-system" Pod="csi-node-driver-hwgh9" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwgh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hwgh9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b2b437c-e155-49e7-bd08-33863840f302", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hwgh9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6526a5c070a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:38.128003 containerd[1461]: 2025-10-31 00:35:38.091 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Namespace="calico-system" Pod="csi-node-driver-hwgh9" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:38.128003 containerd[1461]: 2025-10-31 00:35:38.091 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6526a5c070a ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Namespace="calico-system" Pod="csi-node-driver-hwgh9" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:38.128003 containerd[1461]: 2025-10-31 00:35:38.098 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Namespace="calico-system" Pod="csi-node-driver-hwgh9" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:38.128003 containerd[1461]: 2025-10-31 00:35:38.098 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Namespace="calico-system" Pod="csi-node-driver-hwgh9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hwgh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hwgh9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b2b437c-e155-49e7-bd08-33863840f302", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c", Pod:"csi-node-driver-hwgh9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6526a5c070a", MAC:"ae:0c:49:1b:fa:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:38.128003 containerd[1461]: 2025-10-31 00:35:38.111 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c" Namespace="calico-system" Pod="csi-node-driver-hwgh9" WorkloadEndpoint="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:38.133885 systemd-networkd[1387]: cali0a22e4140b6: Gained IPv6LL Oct 31 00:35:38.147040 systemd[1]: Started cri-containerd-04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e.scope - libcontainer container 04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e. Oct 31 00:35:38.163801 containerd[1461]: time="2025-10-31T00:35:38.163302601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:35:38.163801 containerd[1461]: time="2025-10-31T00:35:38.163467931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:35:38.163801 containerd[1461]: time="2025-10-31T00:35:38.163490493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:38.163801 containerd[1461]: time="2025-10-31T00:35:38.163650314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:38.207879 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:35:38.209873 systemd[1]: Started cri-containerd-b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c.scope - libcontainer container b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c. 
Oct 31 00:35:38.225883 systemd-networkd[1387]: cali5b025e83842: Link UP Oct 31 00:35:38.230552 systemd-networkd[1387]: cali5b025e83842: Gained carrier Oct 31 00:35:38.234835 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:37.809 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0 coredns-668d6bf9bc- kube-system 4668b7ce-66a9-46c0-aadc-2d2b9be34740 1050 0 2025-10-31 00:34:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-xb6ph eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5b025e83842 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb6ph" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xb6ph-" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:37.810 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb6ph" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:37.901 [INFO][4804] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" HandleID="k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:37.902 [INFO][4804] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" HandleID="k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-xb6ph", "timestamp":"2025-10-31 00:35:37.901869529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:37.902 [INFO][4804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.087 [INFO][4804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.087 [INFO][4804] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.127 [INFO][4804] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.138 [INFO][4804] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.158 [INFO][4804] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.162 [INFO][4804] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.166 [INFO][4804] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.166 [INFO][4804] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.168 [INFO][4804] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631 Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.174 [INFO][4804] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.199 [INFO][4804] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.205 [INFO][4804] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" host="localhost" Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.205 [INFO][4804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:35:38.247812 containerd[1461]: 2025-10-31 00:35:38.205 [INFO][4804] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" HandleID="k8s-pod-network.df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:38.249041 containerd[1461]: 2025-10-31 00:35:38.219 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb6ph" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4668b7ce-66a9-46c0-aadc-2d2b9be34740", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-xb6ph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b025e83842", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:38.249041 containerd[1461]: 2025-10-31 00:35:38.220 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb6ph" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:38.249041 containerd[1461]: 2025-10-31 00:35:38.220 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b025e83842 ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb6ph" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:38.249041 containerd[1461]: 2025-10-31 00:35:38.231 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb6ph" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:38.249041 
containerd[1461]: 2025-10-31 00:35:38.233 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb6ph" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4668b7ce-66a9-46c0-aadc-2d2b9be34740", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631", Pod:"coredns-668d6bf9bc-xb6ph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b025e83842", MAC:"e2:5e:8d:24:de:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:38.249041 containerd[1461]: 2025-10-31 00:35:38.242 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb6ph" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:38.260942 containerd[1461]: time="2025-10-31T00:35:38.260880952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hwgh9,Uid:7b2b437c-e155-49e7-bd08-33863840f302,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c\"" Oct 31 00:35:38.262934 containerd[1461]: time="2025-10-31T00:35:38.262738499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:35:38.283778 containerd[1461]: time="2025-10-31T00:35:38.283251381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:35:38.283778 containerd[1461]: time="2025-10-31T00:35:38.283334347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:35:38.283778 containerd[1461]: time="2025-10-31T00:35:38.283348804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:38.283778 containerd[1461]: time="2025-10-31T00:35:38.283553858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:35:38.289635 containerd[1461]: time="2025-10-31T00:35:38.289571483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f5ddbf58-crgcv,Uid:7d0df6fc-c714-44ff-8fdd-63dc2197c8ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e\"" Oct 31 00:35:38.310824 systemd[1]: Started cri-containerd-df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631.scope - libcontainer container df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631. Oct 31 00:35:38.330389 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:35:38.360129 containerd[1461]: time="2025-10-31T00:35:38.360078122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xb6ph,Uid:4668b7ce-66a9-46c0-aadc-2d2b9be34740,Namespace:kube-system,Attempt:1,} returns sandbox id \"df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631\"" Oct 31 00:35:38.361028 kubelet[2505]: E1031 00:35:38.360835 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:38.362997 containerd[1461]: time="2025-10-31T00:35:38.362958348Z" level=info msg="CreateContainer within sandbox \"df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 00:35:38.380490 containerd[1461]: time="2025-10-31T00:35:38.380420544Z" level=info msg="CreateContainer within sandbox \"df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdf29661fd094659b18e737922ec3ed3cb2de50fd418193b08a1bd6a94809bd7\"" Oct 31 00:35:38.380960 containerd[1461]: time="2025-10-31T00:35:38.380928177Z" level=info msg="StartContainer for \"fdf29661fd094659b18e737922ec3ed3cb2de50fd418193b08a1bd6a94809bd7\"" Oct 31 00:35:38.416808 systemd[1]: Started cri-containerd-fdf29661fd094659b18e737922ec3ed3cb2de50fd418193b08a1bd6a94809bd7.scope - libcontainer container fdf29661fd094659b18e737922ec3ed3cb2de50fd418193b08a1bd6a94809bd7. 
Oct 31 00:35:38.465511 containerd[1461]: time="2025-10-31T00:35:38.465437809Z" level=info msg="StartContainer for \"fdf29661fd094659b18e737922ec3ed3cb2de50fd418193b08a1bd6a94809bd7\" returns successfully" Oct 31 00:35:38.554792 kubelet[2505]: E1031 00:35:38.554364 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:38.555298 kubelet[2505]: E1031 00:35:38.554975 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40" Oct 31 00:35:38.556189 kubelet[2505]: E1031 00:35:38.556068 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c" Oct 31 00:35:38.556189 kubelet[2505]: E1031 00:35:38.556072 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:38.593737 kubelet[2505]: I1031 00:35:38.593260 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xb6ph" podStartSLOduration=43.593239669 podStartE2EDuration="43.593239669s" podCreationTimestamp="2025-10-31 00:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:35:38.574625981 +0000 UTC m=+49.408608759" watchObservedRunningTime="2025-10-31 00:35:38.593239669 +0000 UTC m=+49.427222447" Oct 31 00:35:38.663876 containerd[1461]: time="2025-10-31T00:35:38.663733113Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:38.665222 containerd[1461]: time="2025-10-31T00:35:38.665184667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:35:38.665345 containerd[1461]: time="2025-10-31T00:35:38.665258926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:35:38.666026 kubelet[2505]: E1031 00:35:38.665527 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:35:38.666026 kubelet[2505]: E1031 00:35:38.665620 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:35:38.666026 kubelet[2505]: E1031 00:35:38.665946 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xvpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hwgh9_calico-system(7b2b437c-e155-49e7-bd08-33863840f302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:38.666562 containerd[1461]: time="2025-10-31T00:35:38.666537696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:35:39.068921 containerd[1461]: time="2025-10-31T00:35:39.068770779Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:39.155991 containerd[1461]: time="2025-10-31T00:35:39.155899727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:35:39.156197 
containerd[1461]: time="2025-10-31T00:35:39.156013662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:35:39.156238 kubelet[2505]: E1031 00:35:39.156153 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:35:39.156238 kubelet[2505]: E1031 00:35:39.156210 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:35:39.156950 kubelet[2505]: E1031 00:35:39.156509 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrdvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f5ddbf58-crgcv_calico-system(7d0df6fc-c714-44ff-8fdd-63dc2197c8ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:39.157081 containerd[1461]: time="2025-10-31T00:35:39.156681245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:35:39.158384 kubelet[2505]: E1031 00:35:39.158342 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" podUID="7d0df6fc-c714-44ff-8fdd-63dc2197c8ef" Oct 31 00:35:39.414989 systemd-networkd[1387]: cali5b025e83842: Gained IPv6LL Oct 31 00:35:39.556409 kubelet[2505]: E1031 00:35:39.556368 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:39.557101 kubelet[2505]: E1031 00:35:39.557074 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" podUID="7d0df6fc-c714-44ff-8fdd-63dc2197c8ef" Oct 31 00:35:39.560072 containerd[1461]: time="2025-10-31T00:35:39.560005581Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:39.571855 containerd[1461]: time="2025-10-31T00:35:39.571753950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:35:39.572085 containerd[1461]: time="2025-10-31T00:35:39.571887662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:35:39.572171 kubelet[2505]: E1031 00:35:39.572070 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:35:39.572171 kubelet[2505]: E1031 00:35:39.572141 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:35:39.572437 kubelet[2505]: E1031 00:35:39.572354 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xvpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hwgh9_calico-system(7b2b437c-e155-49e7-bd08-33863840f302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Oct 31 00:35:39.573989 kubelet[2505]: E1031 00:35:39.573854 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:39.861917 systemd-networkd[1387]: cali6526a5c070a: Gained IPv6LL Oct 31 00:35:39.954133 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:37592.service - OpenSSH per-connection server daemon (10.0.0.1:37592). Oct 31 00:35:39.989857 systemd-networkd[1387]: cali33b8591650b: Gained IPv6LL Oct 31 00:35:40.012485 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 37592 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:35:40.014342 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:35:40.020144 systemd-logind[1446]: New session 10 of user core. Oct 31 00:35:40.029858 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 00:35:40.182415 sshd[5023]: pam_unix(sshd:session): session closed for user core Oct 31 00:35:40.187189 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:37592.service: Deactivated successfully. Oct 31 00:35:40.189569 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 00:35:40.190457 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Oct 31 00:35:40.191676 systemd-logind[1446]: Removed session 10. Oct 31 00:35:40.558860 kubelet[2505]: E1031 00:35:40.558717 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:35:40.561456 kubelet[2505]: E1031 00:35:40.561388 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:45.194082 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:59150.service - OpenSSH per-connection server daemon (10.0.0.1:59150). 
Oct 31 00:35:45.228177 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 59150 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:35:45.230387 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:35:45.239146 systemd-logind[1446]: New session 11 of user core. Oct 31 00:35:45.243792 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 00:35:45.358325 sshd[5046]: pam_unix(sshd:session): session closed for user core Oct 31 00:35:45.367435 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:59150.service: Deactivated successfully. Oct 31 00:35:45.369445 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 00:35:45.371403 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Oct 31 00:35:45.381964 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:59166.service - OpenSSH per-connection server daemon (10.0.0.1:59166). Oct 31 00:35:45.382971 systemd-logind[1446]: Removed session 11. Oct 31 00:35:45.411145 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 59166 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:35:45.412894 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:35:45.417762 systemd-logind[1446]: New session 12 of user core. Oct 31 00:35:45.423832 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 00:35:45.578557 sshd[5061]: pam_unix(sshd:session): session closed for user core Oct 31 00:35:45.588858 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:59166.service: Deactivated successfully. Oct 31 00:35:45.592471 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 00:35:45.596436 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Oct 31 00:35:45.610143 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:59172.service - OpenSSH per-connection server daemon (10.0.0.1:59172). Oct 31 00:35:45.610906 systemd-logind[1446]: Removed session 12. Oct 31 00:35:45.646274 sshd[5073]: Accepted publickey for core from 10.0.0.1 port 59172 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:35:45.648311 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:35:45.652592 systemd-logind[1446]: New session 13 of user core. Oct 31 00:35:45.663895 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 00:35:45.793656 sshd[5073]: pam_unix(sshd:session): session closed for user core Oct 31 00:35:45.798217 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:59172.service: Deactivated successfully. Oct 31 00:35:45.800359 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 00:35:45.801085 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Oct 31 00:35:45.802167 systemd-logind[1446]: Removed session 13. 
Oct 31 00:35:47.279337 containerd[1461]: time="2025-10-31T00:35:47.278909702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:35:47.839275 containerd[1461]: time="2025-10-31T00:35:47.839200805Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:47.870624 containerd[1461]: time="2025-10-31T00:35:47.870525825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:35:47.870983 containerd[1461]: time="2025-10-31T00:35:47.870559019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:35:47.871173 kubelet[2505]: E1031 00:35:47.871064 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:35:47.871627 kubelet[2505]: E1031 00:35:47.871185 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:35:47.871627 kubelet[2505]: E1031 00:35:47.871342 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2e67ff6c39ea42bba590dd40441d38de,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7lxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-896688878-jrzt2_calico-system(3ef5500a-a708-4695-baa1-1af98ae528f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:47.874012 containerd[1461]: time="2025-10-31T00:35:47.873953024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:35:48.213418 containerd[1461]: time="2025-10-31T00:35:48.213238115Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:48.248540 containerd[1461]: time="2025-10-31T00:35:48.248354850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:35:48.248719 containerd[1461]: time="2025-10-31T00:35:48.248508407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:35:48.248847 kubelet[2505]: E1031 00:35:48.248780 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:35:48.248924 kubelet[2505]: E1031 00:35:48.248883 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:35:48.249167 kubelet[2505]: E1031 00:35:48.249099 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-896688878-jrzt2_calico-system(3ef5500a-a708-4695-baa1-1af98ae528f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:48.250474 kubelet[2505]: E1031 00:35:48.250354 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-896688878-jrzt2" podUID="3ef5500a-a708-4695-baa1-1af98ae528f8" Oct 31 00:35:49.264106 containerd[1461]: time="2025-10-31T00:35:49.264047289Z" level=info msg="StopPodSandbox for \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\"" Oct 31 00:35:49.310031 containerd[1461]: time="2025-10-31T00:35:49.280417975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.397 [WARNING][5097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0", GenerateName:"calico-kube-controllers-86f5ddbf58-", Namespace:"calico-system", SelfLink:"", UID:"7d0df6fc-c714-44ff-8fdd-63dc2197c8ef", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f5ddbf58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e", Pod:"calico-kube-controllers-86f5ddbf58-crgcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33b8591650b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.397 [INFO][5097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.397 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" iface="eth0" netns="" Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.397 [INFO][5097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.397 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.421 [INFO][5108] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.421 [INFO][5108] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.421 [INFO][5108] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.538 [WARNING][5108] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.538 [INFO][5108] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.582 [INFO][5108] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:49.589616 containerd[1461]: 2025-10-31 00:35:49.586 [INFO][5097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:49.590220 containerd[1461]: time="2025-10-31T00:35:49.589691119Z" level=info msg="TearDown network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\" successfully" Oct 31 00:35:49.590220 containerd[1461]: time="2025-10-31T00:35:49.589730855Z" level=info msg="StopPodSandbox for \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\" returns successfully" Oct 31 00:35:49.590669 containerd[1461]: time="2025-10-31T00:35:49.590586783Z" level=info msg="RemovePodSandbox for \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\"" Oct 31 00:35:49.593313 containerd[1461]: time="2025-10-31T00:35:49.593276362Z" level=info msg="Forcibly stopping sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\"" Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.629 [WARNING][5127] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0", GenerateName:"calico-kube-controllers-86f5ddbf58-", Namespace:"calico-system", SelfLink:"", UID:"7d0df6fc-c714-44ff-8fdd-63dc2197c8ef", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f5ddbf58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04d78af538368fcd395e3306dcd39bf6c84a766c49efccaa6cc2a196be20f76e", Pod:"calico-kube-controllers-86f5ddbf58-crgcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali33b8591650b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.629 [INFO][5127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.629 [INFO][5127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" iface="eth0" netns="" Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.629 [INFO][5127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.629 [INFO][5127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.652 [INFO][5135] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.653 [INFO][5135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.653 [INFO][5135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.834 [WARNING][5135] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.834 [INFO][5135] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" HandleID="k8s-pod-network.caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Workload="localhost-k8s-calico--kube--controllers--86f5ddbf58--crgcv-eth0" Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.837 [INFO][5135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:49.843062 containerd[1461]: 2025-10-31 00:35:49.840 [INFO][5127] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f" Oct 31 00:35:49.843062 containerd[1461]: time="2025-10-31T00:35:49.843026676Z" level=info msg="TearDown network for sandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\" successfully" Oct 31 00:35:49.852375 containerd[1461]: time="2025-10-31T00:35:49.852293654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:35:49.852375 containerd[1461]: time="2025-10-31T00:35:49.852379780Z" level=info msg="RemovePodSandbox \"caf904c7c5dc3c8a43b2e4616d435b8de69047738baf9c60ef1bf8bf82e90f7f\" returns successfully" Oct 31 00:35:49.853268 containerd[1461]: time="2025-10-31T00:35:49.853194849Z" level=info msg="StopPodSandbox for \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\"" Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.894 [WARNING][5153] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0", GenerateName:"calico-apiserver-65865f79c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"34cdfd35-dce3-49cb-bd9c-4e5cde095d40", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65865f79c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a", Pod:"calico-apiserver-65865f79c6-scbt9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43fb05a998f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.895 [INFO][5153] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.895 [INFO][5153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" iface="eth0" netns="" Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.895 [INFO][5153] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.895 [INFO][5153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.924 [INFO][5161] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.924 [INFO][5161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.924 [INFO][5161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.930 [WARNING][5161] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.930 [INFO][5161] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.933 [INFO][5161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:49.939097 containerd[1461]: 2025-10-31 00:35:49.936 [INFO][5153] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:49.939717 containerd[1461]: time="2025-10-31T00:35:49.939152287Z" level=info msg="TearDown network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\" successfully" Oct 31 00:35:49.939717 containerd[1461]: time="2025-10-31T00:35:49.939203907Z" level=info msg="StopPodSandbox for \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\" returns successfully" Oct 31 00:35:49.939912 containerd[1461]: time="2025-10-31T00:35:49.939881960Z" level=info msg="RemovePodSandbox for \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\"" Oct 31 00:35:49.939959 containerd[1461]: time="2025-10-31T00:35:49.939918401Z" level=info msg="Forcibly stopping sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\"" Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:49.978 [WARNING][5179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0", GenerateName:"calico-apiserver-65865f79c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"34cdfd35-dce3-49cb-bd9c-4e5cde095d40", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65865f79c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf91a039a3b919b95437d6ba0f636d610d26bc83dbdaaf54c68deabdbbbcf30a", Pod:"calico-apiserver-65865f79c6-scbt9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43fb05a998f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:49.979 [INFO][5179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:49.979 [INFO][5179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" iface="eth0" netns="" Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:49.979 [INFO][5179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:49.979 [INFO][5179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:49.997 [INFO][5188] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:49.998 [INFO][5188] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:49.998 [INFO][5188] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:50.004 [WARNING][5188] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:50.004 [INFO][5188] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" HandleID="k8s-pod-network.163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Workload="localhost-k8s-calico--apiserver--65865f79c6--scbt9-eth0" Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:50.006 [INFO][5188] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:50.011923 containerd[1461]: 2025-10-31 00:35:50.009 [INFO][5179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409" Oct 31 00:35:50.012525 containerd[1461]: time="2025-10-31T00:35:50.011985216Z" level=info msg="TearDown network for sandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\" successfully" Oct 31 00:35:50.046963 containerd[1461]: time="2025-10-31T00:35:50.046906411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:50.088991 containerd[1461]: time="2025-10-31T00:35:50.088947232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:35:50.089110 containerd[1461]: time="2025-10-31T00:35:50.089031977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:35:50.089240 kubelet[2505]: E1031 00:35:50.089185 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:35:50.089782 kubelet[2505]: E1031 00:35:50.089258 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:35:50.089782 kubelet[2505]: E1031 00:35:50.089395 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-w9csd_calico-system(e4eb1f60-118e-45dc-a64e-c81dd9882514): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:50.090743 kubelet[2505]: E1031 00:35:50.090575 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-w9csd" podUID="e4eb1f60-118e-45dc-a64e-c81dd9882514" Oct 31 00:35:50.091648 containerd[1461]: 
time="2025-10-31T00:35:50.091618080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:35:50.091719 containerd[1461]: time="2025-10-31T00:35:50.091679790Z" level=info msg="RemovePodSandbox \"163d325cd6da2ee0fbfbca8d7d20cb5e22a7027e36fe2fa667b751a210083409\" returns successfully" Oct 31 00:35:50.092144 containerd[1461]: time="2025-10-31T00:35:50.092115353Z" level=info msg="StopPodSandbox for \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\"" Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.124 [WARNING][5206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4668b7ce-66a9-46c0-aadc-2d2b9be34740", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631", Pod:"coredns-668d6bf9bc-xb6ph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b025e83842", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.125 [INFO][5206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.125 [INFO][5206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" iface="eth0" netns="" Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.125 [INFO][5206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.125 [INFO][5206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.147 [INFO][5215] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.147 [INFO][5215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.147 [INFO][5215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.154 [WARNING][5215] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.154 [INFO][5215] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.155 [INFO][5215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:50.162117 containerd[1461]: 2025-10-31 00:35:50.159 [INFO][5206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:50.162117 containerd[1461]: time="2025-10-31T00:35:50.162058857Z" level=info msg="TearDown network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\" successfully" Oct 31 00:35:50.162117 containerd[1461]: time="2025-10-31T00:35:50.162086551Z" level=info msg="StopPodSandbox for \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\" returns successfully" Oct 31 00:35:50.162791 containerd[1461]: time="2025-10-31T00:35:50.162752580Z" level=info msg="RemovePodSandbox for \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\"" Oct 31 00:35:50.162837 containerd[1461]: time="2025-10-31T00:35:50.162791285Z" level=info msg="Forcibly stopping sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\"" Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.197 [WARNING][5234] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4668b7ce-66a9-46c0-aadc-2d2b9be34740", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df7881b56425a5b55121bb60c40f8c5ba61b7cbb5fd10f50aa14577378539631", Pod:"coredns-668d6bf9bc-xb6ph", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b025e83842", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.198 [INFO][5234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.198 [INFO][5234] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" iface="eth0" netns="" Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.198 [INFO][5234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.198 [INFO][5234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.220 [INFO][5242] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.220 [INFO][5242] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.220 [INFO][5242] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.227 [WARNING][5242] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.227 [INFO][5242] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" HandleID="k8s-pod-network.71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Workload="localhost-k8s-coredns--668d6bf9bc--xb6ph-eth0" Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.228 [INFO][5242] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:50.234816 containerd[1461]: 2025-10-31 00:35:50.231 [INFO][5234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6" Oct 31 00:35:50.235491 containerd[1461]: time="2025-10-31T00:35:50.234884830Z" level=info msg="TearDown network for sandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\" successfully" Oct 31 00:35:50.274804 containerd[1461]: time="2025-10-31T00:35:50.274723149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:35:50.275216 containerd[1461]: time="2025-10-31T00:35:50.274813223Z" level=info msg="RemovePodSandbox \"71eb9fd6812c522db80576f7548d54514587be7b1342892e0333c0eecf02b9a6\" returns successfully" Oct 31 00:35:50.275350 containerd[1461]: time="2025-10-31T00:35:50.275322239Z" level=info msg="StopPodSandbox for \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\"" Oct 31 00:35:50.278694 containerd[1461]: time="2025-10-31T00:35:50.278661139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.313 [WARNING][5260] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e37cc948-78f1-4541-9003-234551988575", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3", Pod:"coredns-668d6bf9bc-9q6g7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d2752c8ecd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.313 [INFO][5260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.313 [INFO][5260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" iface="eth0" netns="" Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.313 [INFO][5260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.313 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.337 [INFO][5270] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.337 [INFO][5270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.337 [INFO][5270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.345 [WARNING][5270] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.345 [INFO][5270] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.347 [INFO][5270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:50.353731 containerd[1461]: 2025-10-31 00:35:50.350 [INFO][5260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:50.354227 containerd[1461]: time="2025-10-31T00:35:50.353765700Z" level=info msg="TearDown network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\" successfully" Oct 31 00:35:50.354227 containerd[1461]: time="2025-10-31T00:35:50.353794456Z" level=info msg="StopPodSandbox for \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\" returns successfully" Oct 31 00:35:50.356874 containerd[1461]: time="2025-10-31T00:35:50.356810523Z" level=info msg="RemovePodSandbox for \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\"" Oct 31 00:35:50.356874 containerd[1461]: time="2025-10-31T00:35:50.356869437Z" level=info msg="Forcibly stopping sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\"" Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.401 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e37cc948-78f1-4541-9003-234551988575", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ab168fea667c469613a43717aa28d96fd18488c1000573698dd0719c01742c3", Pod:"coredns-668d6bf9bc-9q6g7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d2752c8ecd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.401 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.401 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" iface="eth0" netns="" Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.401 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.401 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.427 [INFO][5295] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.427 [INFO][5295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.427 [INFO][5295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.435 [WARNING][5295] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.435 [INFO][5295] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" HandleID="k8s-pod-network.38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Workload="localhost-k8s-coredns--668d6bf9bc--9q6g7-eth0" Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.437 [INFO][5295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:50.445108 containerd[1461]: 2025-10-31 00:35:50.440 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140" Oct 31 00:35:50.445108 containerd[1461]: time="2025-10-31T00:35:50.444852198Z" level=info msg="TearDown network for sandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\" successfully" Oct 31 00:35:50.450035 containerd[1461]: time="2025-10-31T00:35:50.449995681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:35:50.450138 containerd[1461]: time="2025-10-31T00:35:50.450064324Z" level=info msg="RemovePodSandbox \"38ff89359c216d3d11d93ce190f1b3ef0cc399d03651a3ac1dc2ef0c214f7140\" returns successfully" Oct 31 00:35:50.450723 containerd[1461]: time="2025-10-31T00:35:50.450668984Z" level=info msg="StopPodSandbox for \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\"" Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.491 [WARNING][5312] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hwgh9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b2b437c-e155-49e7-bd08-33863840f302", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c", Pod:"csi-node-driver-hwgh9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6526a5c070a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.492 [INFO][5312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.492 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" iface="eth0" netns="" Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.492 [INFO][5312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.492 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.521 [INFO][5321] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.521 [INFO][5321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.521 [INFO][5321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.528 [WARNING][5321] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.528 [INFO][5321] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.529 [INFO][5321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:50.535747 containerd[1461]: 2025-10-31 00:35:50.532 [INFO][5312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:50.536392 containerd[1461]: time="2025-10-31T00:35:50.535800228Z" level=info msg="TearDown network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\" successfully" Oct 31 00:35:50.536392 containerd[1461]: time="2025-10-31T00:35:50.535834764Z" level=info msg="StopPodSandbox for \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\" returns successfully" Oct 31 00:35:50.536477 containerd[1461]: time="2025-10-31T00:35:50.536444585Z" level=info msg="RemovePodSandbox for \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\"" Oct 31 00:35:50.536509 containerd[1461]: time="2025-10-31T00:35:50.536484241Z" level=info msg="Forcibly stopping sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\"" Oct 31 00:35:50.603128 containerd[1461]: time="2025-10-31T00:35:50.603077463Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:50.697573 containerd[1461]: time="2025-10-31T00:35:50.697387929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:35:50.697573 containerd[1461]: time="2025-10-31T00:35:50.697471090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:35:50.698224 kubelet[2505]: E1031 00:35:50.697866 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:35:50.698224 kubelet[2505]: E1031 00:35:50.697938 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:35:50.698567 kubelet[2505]: E1031 00:35:50.698466 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db27p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65865f79c6-hslc5_calico-apiserver(cee64e5a-057c-4a2f-b352-eb76f50e925c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:50.698744 containerd[1461]: time="2025-10-31T00:35:50.698576188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:35:50.700015 kubelet[2505]: E1031 00:35:50.699954 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c" Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.576 [WARNING][5339] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hwgh9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7b2b437c-e155-49e7-bd08-33863840f302", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4ceb491a349f5b955cfc597a7fb55013f2347d7955657442bfadfed4be5ad8c", Pod:"csi-node-driver-hwgh9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6526a5c070a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.576 [INFO][5339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.576 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" iface="eth0" netns="" Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.576 [INFO][5339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.576 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.602 [INFO][5347] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.602 [INFO][5347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.602 [INFO][5347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.714 [WARNING][5347] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.714 [INFO][5347] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" HandleID="k8s-pod-network.2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Workload="localhost-k8s-csi--node--driver--hwgh9-eth0" Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.748 [INFO][5347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:50.756954 containerd[1461]: 2025-10-31 00:35:50.751 [INFO][5339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794" Oct 31 00:35:50.756954 containerd[1461]: time="2025-10-31T00:35:50.755126967Z" level=info msg="TearDown network for sandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\" successfully" Oct 31 00:35:50.811730 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:40826.service - OpenSSH per-connection server daemon (10.0.0.1:40826). Oct 31 00:35:50.833669 containerd[1461]: time="2025-10-31T00:35:50.833557895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:35:50.833834 containerd[1461]: time="2025-10-31T00:35:50.833701653Z" level=info msg="RemovePodSandbox \"2370f602707bc6892f4bd0894e1836ca8d0cc4e199c9ff723bc135a8af820794\" returns successfully" Oct 31 00:35:50.836640 containerd[1461]: time="2025-10-31T00:35:50.836545296Z" level=info msg="StopPodSandbox for \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\"" Oct 31 00:35:50.857529 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 40826 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:35:50.860004 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:35:50.866550 systemd-logind[1446]: New session 14 of user core. Oct 31 00:35:50.870917 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.883 [WARNING][5367] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" WorkloadEndpoint="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.884 [INFO][5367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.884 [INFO][5367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" iface="eth0" netns="" Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.884 [INFO][5367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.884 [INFO][5367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.914 [INFO][5376] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.914 [INFO][5376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.914 [INFO][5376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.923 [WARNING][5376] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.923 [INFO][5376] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.925 [INFO][5376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:50.932889 containerd[1461]: 2025-10-31 00:35:50.929 [INFO][5367] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:50.933347 containerd[1461]: time="2025-10-31T00:35:50.932956467Z" level=info msg="TearDown network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\" successfully" Oct 31 00:35:50.933347 containerd[1461]: time="2025-10-31T00:35:50.932991975Z" level=info msg="StopPodSandbox for \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\" returns successfully" Oct 31 00:35:50.933834 containerd[1461]: time="2025-10-31T00:35:50.933809417Z" level=info msg="RemovePodSandbox for \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\"" Oct 31 00:35:50.933885 containerd[1461]: time="2025-10-31T00:35:50.933843914Z" level=info msg="Forcibly stopping sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\"" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.019 [WARNING][5396] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" WorkloadEndpoint="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.019 [INFO][5396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.019 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" iface="eth0" netns="" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.019 [INFO][5396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.019 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.046 [INFO][5411] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.046 [INFO][5411] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.046 [INFO][5411] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.053 [WARNING][5411] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.053 [INFO][5411] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" HandleID="k8s-pod-network.1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Workload="localhost-k8s-whisker--f49d5b744--rm846-eth0" Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.056 [INFO][5411] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:51.062045 containerd[1461]: 2025-10-31 00:35:51.059 [INFO][5396] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989" Oct 31 00:35:51.062448 containerd[1461]: time="2025-10-31T00:35:51.062073845Z" level=info msg="TearDown network for sandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\" successfully" Oct 31 00:35:51.100589 containerd[1461]: time="2025-10-31T00:35:51.100515489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:35:51.100589 containerd[1461]: time="2025-10-31T00:35:51.100627676Z" level=info msg="RemovePodSandbox \"1d137d4d8d018f0dde2f2c06ea5498497109c3e3cecce032d588fcc6412dc989\" returns successfully" Oct 31 00:35:51.101685 containerd[1461]: time="2025-10-31T00:35:51.101261942Z" level=info msg="StopPodSandbox for \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\"" Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.195 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--w9csd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e4eb1f60-118e-45dc-a64e-c81dd9882514", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb", Pod:"goldmane-666569f655-w9csd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali306e55d6f29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.195 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.195 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" iface="eth0" netns="" Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.195 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.195 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.219 [INFO][5438] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.220 [INFO][5438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.220 [INFO][5438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.228 [WARNING][5438] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.228 [INFO][5438] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.230 [INFO][5438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:51.237348 containerd[1461]: 2025-10-31 00:35:51.233 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:51.237988 containerd[1461]: time="2025-10-31T00:35:51.237401697Z" level=info msg="TearDown network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\" successfully" Oct 31 00:35:51.237988 containerd[1461]: time="2025-10-31T00:35:51.237443908Z" level=info msg="StopPodSandbox for \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\" returns successfully" Oct 31 00:35:51.238139 containerd[1461]: time="2025-10-31T00:35:51.238107862Z" level=info msg="RemovePodSandbox for \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\"" Oct 31 00:35:51.238194 containerd[1461]: time="2025-10-31T00:35:51.238148651Z" level=info msg="Forcibly stopping sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\"" Oct 31 00:35:51.238316 sshd[5355]: pam_unix(sshd:session): session closed for user core Oct 31 00:35:51.243780 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:40826.service: Deactivated successfully. Oct 31 00:35:51.246215 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 00:35:51.247247 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Oct 31 00:35:51.248301 systemd-logind[1446]: Removed session 14. 
Oct 31 00:35:51.280408 containerd[1461]: time="2025-10-31T00:35:51.280340074Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:51.282514 containerd[1461]: time="2025-10-31T00:35:51.282214298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:35:51.282514 containerd[1461]: time="2025-10-31T00:35:51.282471916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:35:51.283260 kubelet[2505]: E1031 00:35:51.282589 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:35:51.283260 kubelet[2505]: E1031 00:35:51.282652 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:35:51.283621 kubelet[2505]: E1031 00:35:51.283273 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dfr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65865f79c6-scbt9_calico-apiserver(34cdfd35-dce3-49cb-bd9c-4e5cde095d40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:51.283717 containerd[1461]: time="2025-10-31T00:35:51.283655925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:35:51.286573 kubelet[2505]: E1031 00:35:51.284423 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40" Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.280 [WARNING][5457] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--w9csd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e4eb1f60-118e-45dc-a64e-c81dd9882514", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af3e9f623356297aded687257540d04c595e41965e626ec1807f2c617c8355bb", Pod:"goldmane-666569f655-w9csd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali306e55d6f29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.280 [INFO][5457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.280 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" iface="eth0" netns="" Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.280 [INFO][5457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.280 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.318 [INFO][5467] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.318 [INFO][5467] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.318 [INFO][5467] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.325 [WARNING][5467] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.325 [INFO][5467] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" HandleID="k8s-pod-network.c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Workload="localhost-k8s-goldmane--666569f655--w9csd-eth0" Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.326 [INFO][5467] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:51.334051 containerd[1461]: 2025-10-31 00:35:51.331 [INFO][5457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961" Oct 31 00:35:51.334051 containerd[1461]: time="2025-10-31T00:35:51.334000743Z" level=info msg="TearDown network for sandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\" successfully" Oct 31 00:35:51.338994 containerd[1461]: time="2025-10-31T00:35:51.338922308Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:35:51.338994 containerd[1461]: time="2025-10-31T00:35:51.338977825Z" level=info msg="RemovePodSandbox \"c9df8eaaf859a572c310b4df63b2e60f21b5f028a69be67861e73e32c4d6a961\" returns successfully" Oct 31 00:35:51.339530 containerd[1461]: time="2025-10-31T00:35:51.339503562Z" level=info msg="StopPodSandbox for \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\"" Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.386 [WARNING][5485] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0", GenerateName:"calico-apiserver-65865f79c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"cee64e5a-057c-4a2f-b352-eb76f50e925c", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65865f79c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca", Pod:"calico-apiserver-65865f79c6-hslc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a22e4140b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.386 [INFO][5485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.386 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" iface="eth0" netns="" Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.386 [INFO][5485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.386 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.419 [INFO][5494] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.419 [INFO][5494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.419 [INFO][5494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.429 [WARNING][5494] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.429 [INFO][5494] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.432 [INFO][5494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:51.438152 containerd[1461]: 2025-10-31 00:35:51.435 [INFO][5485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:51.438926 containerd[1461]: time="2025-10-31T00:35:51.438847905Z" level=info msg="TearDown network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\" successfully" Oct 31 00:35:51.438926 containerd[1461]: time="2025-10-31T00:35:51.438898203Z" level=info msg="StopPodSandbox for \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\" returns successfully" Oct 31 00:35:51.439722 containerd[1461]: time="2025-10-31T00:35:51.439576665Z" level=info msg="RemovePodSandbox for \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\"" Oct 31 00:35:51.439722 containerd[1461]: time="2025-10-31T00:35:51.439642382Z" level=info msg="Forcibly stopping sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\"" Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.477 [WARNING][5511] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0", GenerateName:"calico-apiserver-65865f79c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"cee64e5a-057c-4a2f-b352-eb76f50e925c", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 35, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65865f79c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"23716e478549106a0915ca0b7d224944f5101ae8d4e420adb44b3bc6a4a002ca", Pod:"calico-apiserver-65865f79c6-hslc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0a22e4140b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.477 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.477 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" iface="eth0" netns="" Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.477 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.477 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.508 [INFO][5519] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.509 [INFO][5519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.509 [INFO][5519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.520 [WARNING][5519] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.520 [INFO][5519] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" HandleID="k8s-pod-network.9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Workload="localhost-k8s-calico--apiserver--65865f79c6--hslc5-eth0" Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.522 [INFO][5519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:35:51.534509 containerd[1461]: 2025-10-31 00:35:51.528 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603" Oct 31 00:35:51.535167 containerd[1461]: time="2025-10-31T00:35:51.534582020Z" level=info msg="TearDown network for sandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\" successfully" Oct 31 00:35:51.541244 containerd[1461]: time="2025-10-31T00:35:51.541119639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 00:35:51.541244 containerd[1461]: time="2025-10-31T00:35:51.541208913Z" level=info msg="RemovePodSandbox \"9b962dc2210f8667447b009ed76d334a4eb8fa864730cb6da50c485a5410e603\" returns successfully" Oct 31 00:35:51.646084 containerd[1461]: time="2025-10-31T00:35:51.645888642Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:51.648189 containerd[1461]: time="2025-10-31T00:35:51.648131027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:35:51.648476 containerd[1461]: time="2025-10-31T00:35:51.648252983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:35:51.648516 kubelet[2505]: E1031 00:35:51.648449 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:35:51.648576 kubelet[2505]: E1031 00:35:51.648517 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:35:51.649164 kubelet[2505]: E1031 00:35:51.648849 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrdvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f5ddbf58-crgcv_calico-system(7d0df6fc-c714-44ff-8fdd-63dc2197c8ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:51.649333 containerd[1461]: time="2025-10-31T00:35:51.649006049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:35:51.650173 kubelet[2505]: E1031 00:35:51.650137 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" 
podUID="7d0df6fc-c714-44ff-8fdd-63dc2197c8ef" Oct 31 00:35:51.968185 containerd[1461]: time="2025-10-31T00:35:51.967952875Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:52.013273 containerd[1461]: time="2025-10-31T00:35:52.013139460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:35:52.013437 containerd[1461]: time="2025-10-31T00:35:52.013225918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:35:52.013646 kubelet[2505]: E1031 00:35:52.013568 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:35:52.013741 kubelet[2505]: E1031 00:35:52.013651 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:35:52.013862 kubelet[2505]: E1031 00:35:52.013808 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xvpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-hwgh9_calico-system(7b2b437c-e155-49e7-bd08-33863840f302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:52.016077 containerd[1461]: time="2025-10-31T00:35:52.016006158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:35:52.360866 containerd[1461]: time="2025-10-31T00:35:52.360783108Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:35:52.362486 containerd[1461]: time="2025-10-31T00:35:52.362404129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:35:52.362545 containerd[1461]: time="2025-10-31T00:35:52.362479164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:35:52.362853 kubelet[2505]: E1031 00:35:52.362785 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:35:52.362853 kubelet[2505]: E1031 00:35:52.362850 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:35:52.363400 kubelet[2505]: E1031 00:35:52.362995 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xvpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hwgh9_calico-system(7b2b437c-e155-49e7-bd08-33863840f302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:35:52.364289 kubelet[2505]: E1031 00:35:52.364210 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:35:56.047024 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:40838.service - OpenSSH per-connection server daemon (10.0.0.1:40838). Oct 31 00:35:56.084937 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 40838 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:35:56.087328 sshd[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:35:56.092895 systemd-logind[1446]: New session 15 of user core. 
Oct 31 00:35:56.098783 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 00:35:56.223183 sshd[5538]: pam_unix(sshd:session): session closed for user core Oct 31 00:35:56.228558 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:40838.service: Deactivated successfully. Oct 31 00:35:56.230840 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 00:35:56.231631 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Oct 31 00:35:56.232779 systemd-logind[1446]: Removed session 15. Oct 31 00:35:59.278368 kubelet[2505]: E1031 00:35:59.278301 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:36:00.279151 kubelet[2505]: E1031 00:36:00.279093 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-896688878-jrzt2" podUID="3ef5500a-a708-4695-baa1-1af98ae528f8" Oct 31 00:36:01.250983 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:57174.service - OpenSSH per-connection server daemon (10.0.0.1:57174). Oct 31 00:36:01.280040 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 57174 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:36:01.282064 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:36:01.286245 systemd-logind[1446]: New session 16 of user core. Oct 31 00:36:01.293758 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 00:36:01.401103 sshd[5557]: pam_unix(sshd:session): session closed for user core Oct 31 00:36:01.405169 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:57174.service: Deactivated successfully. Oct 31 00:36:01.407149 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 00:36:01.410157 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Oct 31 00:36:01.411297 systemd-logind[1446]: Removed session 16. 
Oct 31 00:36:03.280094 kubelet[2505]: E1031 00:36:03.280024 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302" Oct 31 00:36:04.278523 kubelet[2505]: E1031 00:36:04.278462 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" podUID="7d0df6fc-c714-44ff-8fdd-63dc2197c8ef" Oct 31 00:36:04.616722 kubelet[2505]: E1031 00:36:04.616276 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:36:05.278578 kubelet[2505]: E1031 00:36:05.278475 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-w9csd" podUID="e4eb1f60-118e-45dc-a64e-c81dd9882514" Oct 31 00:36:06.278338 kubelet[2505]: E1031 00:36:06.278280 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c" Oct 31 00:36:06.279726 kubelet[2505]: E1031 00:36:06.279666 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40" Oct 31 00:36:06.420127 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:57186.service - OpenSSH per-connection server daemon (10.0.0.1:57186). Oct 31 00:36:06.466943 sshd[5596]: Accepted publickey for core from 10.0.0.1 port 57186 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:36:06.469745 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:36:06.475676 systemd-logind[1446]: New session 17 of user core. Oct 31 00:36:06.485889 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 00:36:06.638241 sshd[5596]: pam_unix(sshd:session): session closed for user core Oct 31 00:36:06.643150 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:57186.service: Deactivated successfully. Oct 31 00:36:06.645353 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 00:36:06.646139 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Oct 31 00:36:06.647320 systemd-logind[1446]: Removed session 17. Oct 31 00:36:11.280288 containerd[1461]: time="2025-10-31T00:36:11.280044069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:36:11.628021 containerd[1461]: time="2025-10-31T00:36:11.627849630Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:36:11.629218 containerd[1461]: time="2025-10-31T00:36:11.629153390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:36:11.629421 containerd[1461]: time="2025-10-31T00:36:11.629200479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:36:11.629484 kubelet[2505]: E1031 00:36:11.629441 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:36:11.630036 kubelet[2505]: E1031 00:36:11.629497 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:36:11.630036 kubelet[2505]: E1031 00:36:11.629659 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2e67ff6c39ea42bba590dd40441d38de,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7lxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-896688878-jrzt2_calico-system(3ef5500a-a708-4695-baa1-1af98ae528f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:36:11.631669 containerd[1461]: time="2025-10-31T00:36:11.631641020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:36:11.655701 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:54176.service - OpenSSH per-connection server daemon (10.0.0.1:54176). Oct 31 00:36:11.697427 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 54176 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:36:11.699369 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:36:11.704208 systemd-logind[1446]: New session 18 of user core. Oct 31 00:36:11.718888 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 00:36:11.850640 sshd[5612]: pam_unix(sshd:session): session closed for user core Oct 31 00:36:11.859076 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:54176.service: Deactivated successfully. Oct 31 00:36:11.861311 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 00:36:11.863369 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Oct 31 00:36:11.870352 systemd[1]: Started sshd@18-10.0.0.31:22-10.0.0.1:54192.service - OpenSSH per-connection server daemon (10.0.0.1:54192). Oct 31 00:36:11.871575 systemd-logind[1446]: Removed session 18. Oct 31 00:36:11.907835 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 54192 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:36:11.909767 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:36:11.914610 systemd-logind[1446]: New session 19 of user core. Oct 31 00:36:11.918762 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 31 00:36:11.988634 containerd[1461]: time="2025-10-31T00:36:11.988530677Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:36:11.993625 containerd[1461]: time="2025-10-31T00:36:11.993497161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:36:11.994626 containerd[1461]: time="2025-10-31T00:36:11.993594476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:36:11.994709 kubelet[2505]: E1031 00:36:11.994060 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:36:11.994709 kubelet[2505]: E1031 00:36:11.994121 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:36:11.994709 kubelet[2505]: E1031 00:36:11.994238 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{}
,RestartPolicy:nil,} start failed in pod whisker-896688878-jrzt2_calico-system(3ef5500a-a708-4695-baa1-1af98ae528f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:36:11.995785 kubelet[2505]: E1031 00:36:11.995753 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-896688878-jrzt2" podUID="3ef5500a-a708-4695-baa1-1af98ae528f8"
Oct 31 00:36:12.374240 sshd[5626]: pam_unix(sshd:session): session closed for user core
Oct 31 00:36:12.382208 systemd[1]: sshd@18-10.0.0.31:22-10.0.0.1:54192.service: Deactivated successfully.
Oct 31 00:36:12.384397 systemd[1]: session-19.scope: Deactivated successfully.
Oct 31 00:36:12.385224 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Oct 31 00:36:12.391899 systemd[1]: Started sshd@19-10.0.0.31:22-10.0.0.1:54196.service - OpenSSH per-connection server daemon (10.0.0.1:54196).
Oct 31 00:36:12.393625 systemd-logind[1446]: Removed session 19.
Oct 31 00:36:12.431241 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 54196 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:36:12.433357 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:36:12.438280 systemd-logind[1446]: New session 20 of user core.
Oct 31 00:36:12.443881 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 31 00:36:13.077283 sshd[5639]: pam_unix(sshd:session): session closed for user core
Oct 31 00:36:13.092219 systemd[1]: sshd@19-10.0.0.31:22-10.0.0.1:54196.service: Deactivated successfully.
Oct 31 00:36:13.095525 systemd[1]: session-20.scope: Deactivated successfully.
Oct 31 00:36:13.100394 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Oct 31 00:36:13.102590 systemd-logind[1446]: Removed session 20.
Oct 31 00:36:13.112015 systemd[1]: Started sshd@20-10.0.0.31:22-10.0.0.1:54210.service - OpenSSH per-connection server daemon (10.0.0.1:54210).
Oct 31 00:36:13.145809 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 54210 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:36:13.147625 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:36:13.152067 systemd-logind[1446]: New session 21 of user core.
Oct 31 00:36:13.166833 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 31 00:36:13.405913 sshd[5658]: pam_unix(sshd:session): session closed for user core
Oct 31 00:36:13.418430 systemd[1]: sshd@20-10.0.0.31:22-10.0.0.1:54210.service: Deactivated successfully.
Oct 31 00:36:13.420867 systemd[1]: session-21.scope: Deactivated successfully.
Oct 31 00:36:13.422895 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Oct 31 00:36:13.438974 systemd[1]: Started sshd@21-10.0.0.31:22-10.0.0.1:54212.service - OpenSSH per-connection server daemon (10.0.0.1:54212).
Oct 31 00:36:13.440366 systemd-logind[1446]: Removed session 21.
Oct 31 00:36:13.474591 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 54212 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:36:13.475365 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:36:13.480470 systemd-logind[1446]: New session 22 of user core.
Oct 31 00:36:13.491082 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 31 00:36:13.611930 sshd[5671]: pam_unix(sshd:session): session closed for user core
Oct 31 00:36:13.616415 systemd[1]: sshd@21-10.0.0.31:22-10.0.0.1:54212.service: Deactivated successfully.
Oct 31 00:36:13.618883 systemd[1]: session-22.scope: Deactivated successfully.
Oct 31 00:36:13.619585 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Oct 31 00:36:13.620805 systemd-logind[1446]: Removed session 22.
Oct 31 00:36:16.279362 containerd[1461]: time="2025-10-31T00:36:16.279305764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 31 00:36:16.744356 containerd[1461]: time="2025-10-31T00:36:16.743987323Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:36:16.866849 containerd[1461]: time="2025-10-31T00:36:16.866754675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 31 00:36:16.867010 containerd[1461]: time="2025-10-31T00:36:16.866873581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 31 00:36:16.867129 kubelet[2505]: E1031 00:36:16.867067 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 00:36:16.867583 kubelet[2505]: E1031 00:36:16.867147 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 00:36:16.867583 kubelet[2505]: E1031 00:36:16.867328 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrdvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86f5ddbf58-crgcv_calico-system(7d0df6fc-c714-44ff-8fdd-63dc2197c8ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:36:16.868880 kubelet[2505]: E1031 00:36:16.868839 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" podUID="7d0df6fc-c714-44ff-8fdd-63dc2197c8ef"
Oct 31 00:36:17.281439 containerd[1461]: time="2025-10-31T00:36:17.281131845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 31 00:36:17.657381 containerd[1461]: time="2025-10-31T00:36:17.657209564Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:36:17.701058 containerd[1461]: time="2025-10-31T00:36:17.700964671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 31 00:36:17.701058 containerd[1461]: time="2025-10-31T00:36:17.701021660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 31 00:36:17.701451 kubelet[2505]: E1031 00:36:17.701353 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:36:17.701451 kubelet[2505]: E1031 00:36:17.701420 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:36:17.701878 containerd[1461]: time="2025-10-31T00:36:17.701840239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 00:36:17.702112 kubelet[2505]: E1031 00:36:17.701865 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xvpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hwgh9_calico-system(7b2b437c-e155-49e7-bd08-33863840f302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:36:17.998917 containerd[1461]: time="2025-10-31T00:36:17.998439053Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:36:18.001728 containerd[1461]: time="2025-10-31T00:36:18.001657474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 00:36:18.001897 containerd[1461]: time="2025-10-31T00:36:18.001787230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:36:18.002104 kubelet[2505]: E1031 00:36:18.002018 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:36:18.002104 kubelet[2505]: E1031 00:36:18.002087 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:36:18.003017 kubelet[2505]: E1031 00:36:18.002307 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dfr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65865f79c6-scbt9_calico-apiserver(34cdfd35-dce3-49cb-bd9c-4e5cde095d40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:36:18.003147 containerd[1461]: time="2025-10-31T00:36:18.002537068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 31 00:36:18.003639 kubelet[2505]: E1031 00:36:18.003589 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40"
Oct 31 00:36:18.278460 kubelet[2505]: E1031 00:36:18.278208 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:36:18.341573 containerd[1461]: time="2025-10-31T00:36:18.341509823Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:36:18.429197 containerd[1461]: time="2025-10-31T00:36:18.429132239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 31 00:36:18.429391 containerd[1461]: time="2025-10-31T00:36:18.429155523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:36:18.429441 kubelet[2505]: E1031 00:36:18.429407 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:36:18.429490 kubelet[2505]: E1031 00:36:18.429455 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:36:18.429797 kubelet[2505]: E1031 00:36:18.429739 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-w9csd_calico-system(e4eb1f60-118e-45dc-a64e-c81dd9882514): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:36:18.429932 containerd[1461]: time="2025-10-31T00:36:18.429839466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 31 00:36:18.432019 kubelet[2505]: E1031 00:36:18.431812 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-w9csd" podUID="e4eb1f60-118e-45dc-a64e-c81dd9882514"
Oct 31 00:36:18.633058 systemd[1]: Started sshd@22-10.0.0.31:22-10.0.0.1:54218.service - OpenSSH per-connection server daemon (10.0.0.1:54218).
Oct 31 00:36:18.671314 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 54218 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:36:18.673352 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:36:18.679502 systemd-logind[1446]: New session 23 of user core.
Oct 31 00:36:18.685072 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 31 00:36:18.820109 containerd[1461]: time="2025-10-31T00:36:18.820022562Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:36:18.830459 containerd[1461]: time="2025-10-31T00:36:18.829261915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 31 00:36:18.830459 containerd[1461]: time="2025-10-31T00:36:18.829402913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 31 00:36:18.830731 kubelet[2505]: E1031 00:36:18.829698 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:36:18.830731 kubelet[2505]: E1031 00:36:18.829777 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:36:18.830731 kubelet[2505]: E1031 00:36:18.830049 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xvpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hwgh9_calico-system(7b2b437c-e155-49e7-bd08-33863840f302): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:36:18.832324 containerd[1461]: time="2025-10-31T00:36:18.831905679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 00:36:18.833423 kubelet[2505]: E1031 00:36:18.833274 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302"
Oct 31 00:36:18.861952 sshd[5694]: pam_unix(sshd:session): session closed for user core
Oct 31 00:36:18.870914 systemd[1]: sshd@22-10.0.0.31:22-10.0.0.1:54218.service: Deactivated successfully.
Oct 31 00:36:18.871145 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Oct 31 00:36:18.876516 systemd[1]: session-23.scope: Deactivated successfully.
Oct 31 00:36:18.882392 systemd-logind[1446]: Removed session 23.
Oct 31 00:36:19.165138 containerd[1461]: time="2025-10-31T00:36:19.164876654Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:36:19.166594 containerd[1461]: time="2025-10-31T00:36:19.166480878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 00:36:19.166751 containerd[1461]: time="2025-10-31T00:36:19.166544720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:36:19.166979 kubelet[2505]: E1031 00:36:19.166914 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:36:19.167462 kubelet[2505]: E1031 00:36:19.166984 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:36:19.167462 kubelet[2505]: E1031 00:36:19.167169 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db27p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65865f79c6-hslc5_calico-apiserver(cee64e5a-057c-4a2f-b352-eb76f50e925c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:36:19.168478 kubelet[2505]: E1031 00:36:19.168427 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c"
Oct 31 00:36:23.878276 systemd[1]: Started sshd@23-10.0.0.31:22-10.0.0.1:50370.service - OpenSSH per-connection server daemon (10.0.0.1:50370).
Oct 31 00:36:23.933463 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 50370 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:36:23.935977 sshd[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:36:23.941624 systemd-logind[1446]: New session 24 of user core.
Oct 31 00:36:23.946867 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 31 00:36:24.106700 sshd[5711]: pam_unix(sshd:session): session closed for user core
Oct 31 00:36:24.111310 systemd[1]: sshd@23-10.0.0.31:22-10.0.0.1:50370.service: Deactivated successfully.
Oct 31 00:36:24.114245 systemd[1]: session-24.scope: Deactivated successfully.
Oct 31 00:36:24.115096 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Oct 31 00:36:24.116163 systemd-logind[1446]: Removed session 24.
Oct 31 00:36:24.278327 kubelet[2505]: E1031 00:36:24.278291 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:36:27.279824 kubelet[2505]: E1031 00:36:27.279738 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-896688878-jrzt2" podUID="3ef5500a-a708-4695-baa1-1af98ae528f8"
Oct 31 00:36:29.126283 systemd[1]: Started sshd@24-10.0.0.31:22-10.0.0.1:50372.service - OpenSSH per-connection server daemon (10.0.0.1:50372).
Oct 31 00:36:29.186882 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 50372 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:36:29.189239 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:36:29.195331 systemd-logind[1446]: New session 25 of user core.
Oct 31 00:36:29.202765 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 31 00:36:29.279856 kubelet[2505]: E1031 00:36:29.279428 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:36:29.367216 sshd[5727]: pam_unix(sshd:session): session closed for user core
Oct 31 00:36:29.375291 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Oct 31 00:36:29.377537 systemd[1]: sshd@24-10.0.0.31:22-10.0.0.1:50372.service: Deactivated successfully.
Oct 31 00:36:29.383121 systemd[1]: session-25.scope: Deactivated successfully.
Oct 31 00:36:29.387748 systemd-logind[1446]: Removed session 25.
Oct 31 00:36:30.278910 kubelet[2505]: E1031 00:36:30.278854 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-w9csd" podUID="e4eb1f60-118e-45dc-a64e-c81dd9882514"
Oct 31 00:36:30.279138 kubelet[2505]: E1031 00:36:30.278924 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-scbt9" podUID="34cdfd35-dce3-49cb-bd9c-4e5cde095d40"
Oct 31 00:36:30.279739 kubelet[2505]: E1031 00:36:30.279574 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hwgh9" podUID="7b2b437c-e155-49e7-bd08-33863840f302"
Oct 31 00:36:31.278539 kubelet[2505]: E1031 00:36:31.278476 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65865f79c6-hslc5" podUID="cee64e5a-057c-4a2f-b352-eb76f50e925c"
Oct 31 00:36:32.281573 kubelet[2505]: E1031 00:36:32.281487 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86f5ddbf58-crgcv" podUID="7d0df6fc-c714-44ff-8fdd-63dc2197c8ef"
Oct 31 00:36:34.278395 kubelet[2505]: E1031 00:36:34.278334 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:36:34.392963 systemd[1]: Started sshd@25-10.0.0.31:22-10.0.0.1:58054.service - OpenSSH per-connection server daemon (10.0.0.1:58054).
Oct 31 00:36:34.438992 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 58054 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:36:34.441276 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:36:34.447308 systemd-logind[1446]: New session 26 of user core.
Oct 31 00:36:34.459916 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 31 00:36:34.562362 systemd[1]: run-containerd-runc-k8s.io-8b240408f8035f1b89365be8d55dd4b7da9634bd01b7892aac14e865982ec39a-runc.twnBi2.mount: Deactivated successfully.
Oct 31 00:36:34.628220 sshd[5741]: pam_unix(sshd:session): session closed for user core
Oct 31 00:36:34.642930 systemd[1]: sshd@25-10.0.0.31:22-10.0.0.1:58054.service: Deactivated successfully.
Oct 31 00:36:34.647234 systemd[1]: session-26.scope: Deactivated successfully.
Oct 31 00:36:34.649487 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
Oct 31 00:36:34.651364 systemd-logind[1446]: Removed session 26.