Nov 1 00:16:45.102507 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:16:45.102533 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:16:45.102547 kernel: BIOS-provided physical RAM map:
Nov 1 00:16:45.102555 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:16:45.102561 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:16:45.102568 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:16:45.102575 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 1 00:16:45.102582 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 1 00:16:45.102588 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:16:45.102597 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:16:45.102604 kernel: NX (Execute Disable) protection: active
Nov 1 00:16:45.102610 kernel: APIC: Static calls initialized
Nov 1 00:16:45.102621 kernel: SMBIOS 2.8 present.
Nov 1 00:16:45.102654 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 1 00:16:45.102662 kernel: Hypervisor detected: KVM
Nov 1 00:16:45.102672 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:16:45.102683 kernel: kvm-clock: using sched offset of 3609867902 cycles
Nov 1 00:16:45.102691 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:16:45.102698 kernel: tsc: Detected 1999.999 MHz processor
Nov 1 00:16:45.102705 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:16:45.102713 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:16:45.102720 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 1 00:16:45.102727 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 00:16:45.102734 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:16:45.102744 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:16:45.102750 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 1 00:16:45.102757 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:16:45.102764 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:16:45.102771 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:16:45.102778 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 1 00:16:45.102785 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:16:45.102792 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:16:45.102799 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:16:45.102808 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:16:45.102815 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 1 00:16:45.102822 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 1 00:16:45.102829 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 1 00:16:45.102835 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 1 00:16:45.102842 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 1 00:16:45.102849 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 1 00:16:45.102862 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 1 00:16:45.102870 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:16:45.102877 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:16:45.102884 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:16:45.102891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 00:16:45.102902 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 1 00:16:45.102910 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 1 00:16:45.102920 kernel: Zone ranges:
Nov 1 00:16:45.102927 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:16:45.102934 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 1 00:16:45.102941 kernel: Normal empty
Nov 1 00:16:45.102949 kernel: Movable zone start for each node
Nov 1 00:16:45.102956 kernel: Early memory node ranges
Nov 1 00:16:45.102963 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:16:45.102970 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 1 00:16:45.102978 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 1 00:16:45.102987 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:16:45.102995 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:16:45.103005 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 1 00:16:45.103013 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:16:45.103020 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:16:45.103027 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:16:45.103034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:16:45.103041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:16:45.103049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:16:45.103058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:16:45.103066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:16:45.103073 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:16:45.103080 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:16:45.103087 kernel: TSC deadline timer available
Nov 1 00:16:45.103095 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:16:45.103102 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:16:45.103109 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 1 00:16:45.103120 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:16:45.103127 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:16:45.103138 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:16:45.103145 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:16:45.103152 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:16:45.103160 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:16:45.103167 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 1 00:16:45.103175 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:16:45.103183 kernel: random: crng init done
Nov 1 00:16:45.103190 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:16:45.103200 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:16:45.103207 kernel: Fallback order for Node 0: 0
Nov 1 00:16:45.103214 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 1 00:16:45.103222 kernel: Policy zone: DMA32
Nov 1 00:16:45.103229 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:16:45.103237 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 125148K reserved, 0K cma-reserved)
Nov 1 00:16:45.103244 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:16:45.103251 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:16:45.103261 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:16:45.103268 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:16:45.103276 kernel: Dynamic Preempt: voluntary
Nov 1 00:16:45.103283 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:16:45.103291 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:16:45.103298 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:16:45.103306 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:16:45.103313 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:16:45.103320 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:16:45.103328 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:16:45.103338 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:16:45.103345 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:16:45.103352 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:16:45.103359 kernel: Console: colour VGA+ 80x25
Nov 1 00:16:45.103370 kernel: printk: console [tty0] enabled
Nov 1 00:16:45.103377 kernel: printk: console [ttyS0] enabled
Nov 1 00:16:45.103384 kernel: ACPI: Core revision 20230628
Nov 1 00:16:45.103392 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:16:45.103399 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:16:45.103409 kernel: x2apic enabled
Nov 1 00:16:45.103416 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:16:45.103423 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:16:45.103430 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Nov 1 00:16:45.103438 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Nov 1 00:16:45.103445 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 1 00:16:45.103452 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 1 00:16:45.103460 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:16:45.103479 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:16:45.103486 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:16:45.103494 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 1 00:16:45.103504 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:16:45.103512 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:16:45.103520 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 00:16:45.103528 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:16:45.103536 kernel: active return thunk: its_return_thunk
Nov 1 00:16:45.103547 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:16:45.103559 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:16:45.103567 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:16:45.103574 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:16:45.103582 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:16:45.103590 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:16:45.103598 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:16:45.103606 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:16:45.103614 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:16:45.103638 kernel: landlock: Up and running.
Nov 1 00:16:45.103646 kernel: SELinux: Initializing.
Nov 1 00:16:45.103654 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:16:45.103662 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:16:45.103670 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 1 00:16:45.103678 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:16:45.103686 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:16:45.103694 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:16:45.103702 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 1 00:16:45.103713 kernel: signal: max sigframe size: 1776
Nov 1 00:16:45.103721 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:16:45.103729 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:16:45.103737 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:16:45.103745 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:16:45.103752 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:16:45.103760 kernel: .... node #0, CPUs: #1
Nov 1 00:16:45.103768 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:16:45.103780 kernel: smpboot: Max logical packages: 1
Nov 1 00:16:45.103791 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Nov 1 00:16:45.103799 kernel: devtmpfs: initialized
Nov 1 00:16:45.103806 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:16:45.103814 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:16:45.103822 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:16:45.103830 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:16:45.103838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:16:45.103846 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:16:45.103854 kernel: audit: type=2000 audit(1761956203.494:1): state=initialized audit_enabled=0 res=1
Nov 1 00:16:45.103864 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:16:45.103872 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:16:45.103880 kernel: cpuidle: using governor menu
Nov 1 00:16:45.103888 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:16:45.103896 kernel: dca service started, version 1.12.1
Nov 1 00:16:45.103903 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:16:45.103911 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:16:45.103919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:16:45.103927 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:16:45.103937 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:16:45.103945 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:16:45.103953 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:16:45.103961 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:16:45.103969 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:16:45.103976 kernel: ACPI: Interpreter enabled
Nov 1 00:16:45.103984 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:16:45.103992 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:16:45.104000 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:16:45.104010 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:16:45.104018 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 1 00:16:45.104026 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:16:45.104261 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:16:45.104379 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 1 00:16:45.104481 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 1 00:16:45.104492 kernel: acpiphp: Slot [3] registered
Nov 1 00:16:45.104504 kernel: acpiphp: Slot [4] registered
Nov 1 00:16:45.104512 kernel: acpiphp: Slot [5] registered
Nov 1 00:16:45.104520 kernel: acpiphp: Slot [6] registered
Nov 1 00:16:45.104528 kernel: acpiphp: Slot [7] registered
Nov 1 00:16:45.104536 kernel: acpiphp: Slot [8] registered
Nov 1 00:16:45.104544 kernel: acpiphp: Slot [9] registered
Nov 1 00:16:45.104552 kernel: acpiphp: Slot [10] registered
Nov 1 00:16:45.104560 kernel: acpiphp: Slot [11] registered
Nov 1 00:16:45.104568 kernel: acpiphp: Slot [12] registered
Nov 1 00:16:45.104579 kernel: acpiphp: Slot [13] registered
Nov 1 00:16:45.104587 kernel: acpiphp: Slot [14] registered
Nov 1 00:16:45.104594 kernel: acpiphp: Slot [15] registered
Nov 1 00:16:45.104602 kernel: acpiphp: Slot [16] registered
Nov 1 00:16:45.104610 kernel: acpiphp: Slot [17] registered
Nov 1 00:16:45.104618 kernel: acpiphp: Slot [18] registered
Nov 1 00:16:45.106754 kernel: acpiphp: Slot [19] registered
Nov 1 00:16:45.106781 kernel: acpiphp: Slot [20] registered
Nov 1 00:16:45.106794 kernel: acpiphp: Slot [21] registered
Nov 1 00:16:45.106807 kernel: acpiphp: Slot [22] registered
Nov 1 00:16:45.106830 kernel: acpiphp: Slot [23] registered
Nov 1 00:16:45.106842 kernel: acpiphp: Slot [24] registered
Nov 1 00:16:45.106850 kernel: acpiphp: Slot [25] registered
Nov 1 00:16:45.106858 kernel: acpiphp: Slot [26] registered
Nov 1 00:16:45.106866 kernel: acpiphp: Slot [27] registered
Nov 1 00:16:45.106874 kernel: acpiphp: Slot [28] registered
Nov 1 00:16:45.106883 kernel: acpiphp: Slot [29] registered
Nov 1 00:16:45.106891 kernel: acpiphp: Slot [30] registered
Nov 1 00:16:45.106898 kernel: acpiphp: Slot [31] registered
Nov 1 00:16:45.106909 kernel: PCI host bridge to bus 0000:00
Nov 1 00:16:45.107095 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:16:45.107188 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:16:45.107277 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:16:45.107362 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 1 00:16:45.107449 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 1 00:16:45.107537 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:16:45.107681 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:16:45.107791 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 1 00:16:45.107901 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 1 00:16:45.107998 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 1 00:16:45.108096 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 1 00:16:45.108190 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 1 00:16:45.108293 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 1 00:16:45.108404 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 1 00:16:45.108516 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 1 00:16:45.108612 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 1 00:16:45.110848 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 1 00:16:45.110973 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 1 00:16:45.111076 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 1 00:16:45.111208 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 1 00:16:45.111307 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 1 00:16:45.111404 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 1 00:16:45.111500 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 1 00:16:45.111598 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 1 00:16:45.111712 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:16:45.111870 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:16:45.111982 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 1 00:16:45.112078 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 1 00:16:45.112176 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 1 00:16:45.112282 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:16:45.112379 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 1 00:16:45.112473 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 1 00:16:45.112575 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 1 00:16:45.112740 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 1 00:16:45.112840 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 1 00:16:45.112936 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 1 00:16:45.113030 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 1 00:16:45.113147 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:16:45.113243 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 00:16:45.113345 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 1 00:16:45.113438 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 1 00:16:45.113546 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:16:45.113725 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 1 00:16:45.113823 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 1 00:16:45.113919 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 1 00:16:45.114034 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 1 00:16:45.114158 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 1 00:16:45.114255 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 1 00:16:45.114265 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:16:45.114274 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:16:45.114283 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:16:45.114291 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:16:45.114299 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:16:45.114312 kernel: iommu: Default domain type: Translated
Nov 1 00:16:45.114320 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:16:45.114328 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:16:45.114337 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:16:45.114345 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:16:45.114353 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 1 00:16:45.114455 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 1 00:16:45.114554 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 1 00:16:45.115082 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:16:45.115107 kernel: vgaarb: loaded
Nov 1 00:16:45.115116 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:16:45.115125 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:16:45.115134 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:16:45.115142 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:16:45.115151 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:16:45.115159 kernel: pnp: PnP ACPI init
Nov 1 00:16:45.115167 kernel: pnp: PnP ACPI: found 4 devices
Nov 1 00:16:45.115176 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:16:45.115188 kernel: NET: Registered PF_INET protocol family
Nov 1 00:16:45.115196 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:16:45.115205 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 00:16:45.115213 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:16:45.115222 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:16:45.115230 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 00:16:45.115239 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 00:16:45.115247 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:16:45.115256 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:16:45.115267 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:16:45.115275 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:16:45.115381 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:16:45.115470 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:16:45.115557 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:16:45.115715 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 1 00:16:45.115802 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 1 00:16:45.115927 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 1 00:16:45.116036 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:16:45.116049 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 1 00:16:45.116146 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 35106 usecs
Nov 1 00:16:45.116158 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:16:45.116166 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:16:45.116175 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Nov 1 00:16:45.116183 kernel: Initialise system trusted keyrings
Nov 1 00:16:45.116192 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 00:16:45.116203 kernel: Key type asymmetric registered
Nov 1 00:16:45.116211 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:16:45.116220 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:16:45.116228 kernel: io scheduler mq-deadline registered
Nov 1 00:16:45.116236 kernel: io scheduler kyber registered
Nov 1 00:16:45.116244 kernel: io scheduler bfq registered
Nov 1 00:16:45.116252 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:16:45.116261 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 1 00:16:45.116269 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 1 00:16:45.116277 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 1 00:16:45.116288 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:16:45.116296 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:16:45.116305 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:16:45.116313 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:16:45.116321 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:16:45.116330 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:16:45.116461 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 00:16:45.116558 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 00:16:45.116725 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:16:44 UTC (1761956204)
Nov 1 00:16:45.116836 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 1 00:16:45.116847 kernel: intel_pstate: CPU model not supported
Nov 1 00:16:45.116855 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:16:45.116863 kernel: Segment Routing with IPv6
Nov 1 00:16:45.116872 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:16:45.116880 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:16:45.116888 kernel: Key type dns_resolver registered
Nov 1 00:16:45.116909 kernel: IPI shorthand broadcast: enabled
Nov 1 00:16:45.116924 kernel: sched_clock: Marking stable (1126003260, 235143171)->(1541367891, -180221460)
Nov 1 00:16:45.116937 kernel: registered taskstats version 1
Nov 1 00:16:45.116950 kernel: Loading compiled-in X.509 certificates
Nov 1 00:16:45.116964 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:16:45.116972 kernel: Key type .fscrypt registered
Nov 1 00:16:45.116980 kernel: Key type fscrypt-provisioning registered
Nov 1 00:16:45.116989 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:16:45.116997 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:16:45.117009 kernel: ima: No architecture policies found
Nov 1 00:16:45.117017 kernel: clk: Disabling unused clocks
Nov 1 00:16:45.117025 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:16:45.117034 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:16:45.117042 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:16:45.117077 kernel: Run /init as init process
Nov 1 00:16:45.117088 kernel: with arguments:
Nov 1 00:16:45.117098 kernel: /init
Nov 1 00:16:45.117106 kernel: with environment:
Nov 1 00:16:45.117114 kernel: HOME=/
Nov 1 00:16:45.117125 kernel: TERM=linux
Nov 1 00:16:45.117137 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:16:45.117149 systemd[1]: Detected virtualization kvm.
Nov 1 00:16:45.117158 systemd[1]: Detected architecture x86-64.
Nov 1 00:16:45.117167 systemd[1]: Running in initrd.
Nov 1 00:16:45.117176 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:16:45.117184 systemd[1]: Hostname set to .
Nov 1 00:16:45.117195 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:16:45.117204 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:16:45.117213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:16:45.117221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:16:45.117231 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:16:45.117240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:16:45.117249 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:16:45.117258 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:16:45.117272 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:16:45.117281 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:16:45.117289 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:16:45.117298 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:16:45.117307 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:16:45.117315 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:16:45.117324 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:16:45.117336 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:16:45.117347 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:16:45.117356 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:16:45.117365 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:16:45.117374 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:16:45.117385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:16:45.117394 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:16:45.117403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:16:45.117412 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:16:45.117421 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:16:45.117430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:16:45.117441 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:16:45.117450 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:16:45.117459 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:16:45.117470 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:16:45.117506 systemd-journald[185]: Collecting audit messages is disabled.
Nov 1 00:16:45.117528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:16:45.117537 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:16:45.117549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:16:45.117559 systemd-journald[185]: Journal started
Nov 1 00:16:45.117580 systemd-journald[185]: Runtime Journal (/run/log/journal/12aad9ec83334077838174407fb7dc79) is 4.9M, max 39.3M, 34.4M free.
Nov 1 00:16:45.126700 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:16:45.138793 systemd-modules-load[186]: Inserted module 'overlay'
Nov 1 00:16:45.217859 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:16:45.217891 kernel: Bridge firewalling registered
Nov 1 00:16:45.185519 systemd-modules-load[186]: Inserted module 'br_netfilter'
Nov 1 00:16:45.217324 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:16:45.219076 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:16:45.220687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:16:45.231018 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:16:45.235938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:16:45.244364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:16:45.255900 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:16:45.262901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:16:45.268991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:16:45.276073 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:16:45.278233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:16:45.287899 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:16:45.296033 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:16:45.307268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:16:45.309668 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:16:45.316693 dracut-cmdline[215]: dracut-dracut-053
Nov 1 00:16:45.324248 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:16:45.346570 systemd-resolved[222]: Positive Trust Anchors:
Nov 1 00:16:45.346597 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:16:45.346683 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:16:45.352173 systemd-resolved[222]: Defaulting to hostname 'linux'.
Nov 1 00:16:45.354385 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:16:45.357239 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:16:45.446729 kernel: SCSI subsystem initialized
Nov 1 00:16:45.458700 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:16:45.473664 kernel: iscsi: registered transport (tcp)
Nov 1 00:16:45.498813 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:16:45.498910 kernel: QLogic iSCSI HBA Driver
Nov 1 00:16:45.547268 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:16:45.554876 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:16:45.586124 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:16:45.586209 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:16:45.588685 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:16:45.634720 kernel: raid6: avx2x4 gen() 30710 MB/s
Nov 1 00:16:45.652694 kernel: raid6: avx2x2 gen() 30354 MB/s
Nov 1 00:16:45.671696 kernel: raid6: avx2x1 gen() 23989 MB/s
Nov 1 00:16:45.671785 kernel: raid6: using algorithm avx2x4 gen() 30710 MB/s
Nov 1 00:16:45.691665 kernel: raid6: .... xor() 10295 MB/s, rmw enabled
Nov 1 00:16:45.691746 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:16:45.716684 kernel: xor: automatically using best checksumming function avx
Nov 1 00:16:45.881749 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:16:45.894172 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:16:45.900850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:16:45.916126 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Nov 1 00:16:45.920812 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:16:45.928877 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:16:45.955657 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Nov 1 00:16:45.989805 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:16:45.996884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:16:46.053752 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:16:46.062825 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:16:46.084323 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:16:46.088255 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:16:46.089285 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:16:46.092064 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:16:46.099185 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:16:46.118892 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 1 00:16:46.127993 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:16:46.137498 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:16:46.137780 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 1 00:16:46.145650 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:16:46.188130 kernel: ACPI: bus type USB registered
Nov 1 00:16:46.188214 kernel: usbcore: registered new interface driver usbfs
Nov 1 00:16:46.190245 kernel: usbcore: registered new interface driver hub
Nov 1 00:16:46.192202 kernel: usbcore: registered new device driver usb
Nov 1 00:16:46.198660 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:16:46.206437 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:16:46.206500 kernel: GPT:9289727 != 125829119
Nov 1 00:16:46.206511 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:16:46.207912 kernel: GPT:9289727 != 125829119
Nov 1 00:16:46.209058 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:16:46.211012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:16:46.216649 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:16:46.227042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:16:46.227172 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:16:46.230381 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:16:46.231336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:16:46.231535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:16:46.233931 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:16:46.244063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:16:46.249647 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 1 00:16:46.249876 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 1 00:16:46.250001 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 1 00:16:46.250145 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 1 00:16:46.250266 kernel: hub 1-0:1.0: USB hub found
Nov 1 00:16:46.250698 kernel: hub 1-0:1.0: 2 ports detected
Nov 1 00:16:46.276025 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 1 00:16:46.281909 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 1 00:16:46.311660 kernel: libata version 3.00 loaded.
Nov 1 00:16:46.339661 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (449)
Nov 1 00:16:46.345787 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 1 00:16:46.348435 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 1 00:16:46.429799 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (447)
Nov 1 00:16:46.429838 kernel: scsi host1: ata_piix
Nov 1 00:16:46.430192 kernel: scsi host2: ata_piix
Nov 1 00:16:46.430341 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 1 00:16:46.430355 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 1 00:16:46.431591 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:16:46.438075 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 1 00:16:46.446398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 00:16:46.450766 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 1 00:16:46.451700 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 1 00:16:46.459889 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:16:46.464857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:16:46.468550 disk-uuid[539]: Primary Header is updated.
Nov 1 00:16:46.468550 disk-uuid[539]: Secondary Entries is updated.
Nov 1 00:16:46.468550 disk-uuid[539]: Secondary Header is updated.
Nov 1 00:16:46.475327 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:16:46.478756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:16:46.485670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:16:46.495911 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:16:47.486524 disk-uuid[540]: The operation has completed successfully.
Nov 1 00:16:47.487445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:16:47.527325 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:16:47.527456 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:16:47.541959 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:16:47.545831 sh[562]: Success
Nov 1 00:16:47.561694 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 00:16:47.623331 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:16:47.626798 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:16:47.628318 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:16:47.661730 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:16:47.661801 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:16:47.661813 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:16:47.664838 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:16:47.666733 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:16:47.675334 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:16:47.676812 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:16:47.681811 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:16:47.684788 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:16:47.696781 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:47.696823 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:16:47.698890 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:16:47.706826 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:16:47.717816 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:16:47.721039 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:47.725674 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:16:47.730863 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:16:47.864050 ignition[647]: Ignition 2.19.0
Nov 1 00:16:47.864075 ignition[647]: Stage: fetch-offline
Nov 1 00:16:47.866163 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:16:47.864128 ignition[647]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:47.864141 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:16:47.864247 ignition[647]: parsed url from cmdline: ""
Nov 1 00:16:47.864252 ignition[647]: no config URL provided
Nov 1 00:16:47.864258 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:16:47.864268 ignition[647]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:16:47.864274 ignition[647]: failed to fetch config: resource requires networking
Nov 1 00:16:47.872893 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:16:47.864551 ignition[647]: Ignition finished successfully
Nov 1 00:16:47.886558 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:16:47.907336 systemd-networkd[752]: lo: Link UP
Nov 1 00:16:47.907359 systemd-networkd[752]: lo: Gained carrier
Nov 1 00:16:47.909669 systemd-networkd[752]: Enumeration completed
Nov 1 00:16:47.910150 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 1 00:16:47.910154 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 1 00:16:47.911193 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:16:47.911790 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:16:47.911795 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:16:47.912448 systemd-networkd[752]: eth0: Link UP
Nov 1 00:16:47.912452 systemd-networkd[752]: eth0: Gained carrier
Nov 1 00:16:47.912461 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 1 00:16:47.912934 systemd[1]: Reached target network.target - Network.
Nov 1 00:16:47.918106 systemd-networkd[752]: eth1: Link UP
Nov 1 00:16:47.918111 systemd-networkd[752]: eth1: Gained carrier
Nov 1 00:16:47.918124 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:16:47.921937 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 1 00:16:47.931761 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.26/20 acquired from 169.254.169.253
Nov 1 00:16:47.935727 systemd-networkd[752]: eth0: DHCPv4 address 146.190.126.63/20, gateway 146.190.112.1 acquired from 169.254.169.253
Nov 1 00:16:47.953025 ignition[755]: Ignition 2.19.0
Nov 1 00:16:47.953038 ignition[755]: Stage: fetch
Nov 1 00:16:47.953289 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:47.953305 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:16:47.953435 ignition[755]: parsed url from cmdline: ""
Nov 1 00:16:47.953439 ignition[755]: no config URL provided
Nov 1 00:16:47.953445 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:16:47.953458 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:16:47.953484 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 1 00:16:47.985579 ignition[755]: GET result: OK
Nov 1 00:16:47.986781 ignition[755]: parsing config with SHA512: 10263023f429fc320e7beaad30730f986ddf5cee5d29719a483d018a35c0524c505b7b1026ecfb0d3029cbd32818609782f3bb4cc938d63482089a95001e8313
Nov 1 00:16:47.992191 unknown[755]: fetched base config from "system"
Nov 1 00:16:47.992205 unknown[755]: fetched base config from "system"
Nov 1 00:16:47.992792 ignition[755]: fetch: fetch complete
Nov 1 00:16:47.992211 unknown[755]: fetched user config from "digitalocean"
Nov 1 00:16:47.992799 ignition[755]: fetch: fetch passed
Nov 1 00:16:47.995172 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 00:16:47.992870 ignition[755]: Ignition finished successfully
Nov 1 00:16:48.002903 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:16:48.021982 ignition[763]: Ignition 2.19.0
Nov 1 00:16:48.021994 ignition[763]: Stage: kargs
Nov 1 00:16:48.022313 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:48.022326 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:16:48.025598 ignition[763]: kargs: kargs passed
Nov 1 00:16:48.025678 ignition[763]: Ignition finished successfully
Nov 1 00:16:48.027481 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:16:48.036907 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:16:48.059824 ignition[771]: Ignition 2.19.0
Nov 1 00:16:48.059843 ignition[771]: Stage: disks
Nov 1 00:16:48.060106 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:48.062492 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:16:48.060123 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:16:48.063906 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:16:48.061369 ignition[771]: disks: disks passed
Nov 1 00:16:48.064620 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:16:48.061425 ignition[771]: Ignition finished successfully
Nov 1 00:16:48.065477 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:16:48.073520 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:16:48.074878 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:16:48.083903 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:16:48.098488 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:16:48.101395 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:16:48.113886 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:16:48.224735 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:16:48.225592 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:16:48.227044 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:16:48.239825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:16:48.244398 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:16:48.250980 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Nov 1 00:16:48.254933 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (787)
Nov 1 00:16:48.258684 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:48.258734 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:16:48.258747 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:16:48.270686 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:16:48.279242 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 1 00:16:48.281953 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:16:48.281995 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:16:48.284531 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:16:48.289306 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:16:48.301170 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:16:48.389677 coreos-metadata[789]: Nov 01 00:16:48.388 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:16:48.394959 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:16:48.396497 coreos-metadata[805]: Nov 01 00:16:48.396 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:16:48.403264 coreos-metadata[789]: Nov 01 00:16:48.403 INFO Fetch successful
Nov 1 00:16:48.405408 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:16:48.410260 coreos-metadata[805]: Nov 01 00:16:48.410 INFO Fetch successful
Nov 1 00:16:48.411476 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Nov 1 00:16:48.411673 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Nov 1 00:16:48.422804 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:16:48.428907 coreos-metadata[805]: Nov 01 00:16:48.427 INFO wrote hostname ci-4081.3.6-n-62dab69cc5 to /sysroot/etc/hostname
Nov 1 00:16:48.431841 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 00:16:48.436183 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:16:48.541985 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:16:48.547761 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:16:48.556927 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:16:48.568661 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:48.584208 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:16:48.599615 ignition[910]: INFO : Ignition 2.19.0
Nov 1 00:16:48.599615 ignition[910]: INFO : Stage: mount
Nov 1 00:16:48.602115 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:48.602115 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:16:48.602115 ignition[910]: INFO : mount: mount passed
Nov 1 00:16:48.602115 ignition[910]: INFO : Ignition finished successfully
Nov 1 00:16:48.603070 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:16:48.616262 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:16:48.657443 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:16:48.664935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:16:48.675251 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (921)
Nov 1 00:16:48.675323 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:48.677774 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:16:48.679963 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:16:48.685670 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:16:48.687458 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:16:48.719451 ignition[938]: INFO : Ignition 2.19.0
Nov 1 00:16:48.719451 ignition[938]: INFO : Stage: files
Nov 1 00:16:48.721589 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:48.721589 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:16:48.721589 ignition[938]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:16:48.724811 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:16:48.724811 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:16:48.728382 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:16:48.729440 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:16:48.730799 unknown[938]: wrote ssh authorized keys file for user: core
Nov 1 00:16:48.731781 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:16:48.732776 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:16:48.732776 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 00:16:48.773868 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:16:48.881041 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:16:48.881041 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:16:48.883968 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 1 00:16:48.925894 systemd-networkd[752]: eth1: Gained IPv6LL
Nov 1 00:16:49.181861 systemd-networkd[752]: eth0: Gained IPv6LL
Nov 1 00:16:49.308740 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 00:16:49.806223 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:16:49.806223 ignition[938]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 00:16:49.809766 ignition[938]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:16:49.809766 ignition[938]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:16:49.809766 ignition[938]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 00:16:49.809766 ignition[938]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:16:49.809766 ignition[938]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:16:49.809766 ignition[938]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:16:49.809766 ignition[938]: INFO : files: createResultFile: createFiles: op(e): [finished]
writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:16:49.809766 ignition[938]: INFO : files: files passed Nov 1 00:16:49.809766 ignition[938]: INFO : Ignition finished successfully Nov 1 00:16:49.810987 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:16:49.820044 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:16:49.830037 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:16:49.836749 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:16:49.836938 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:16:49.854728 initrd-setup-root-after-ignition[967]: grep: Nov 1 00:16:49.854728 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:16:49.857262 initrd-setup-root-after-ignition[967]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:16:49.857262 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:16:49.857447 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:16:49.860398 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:16:49.868002 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:16:49.915675 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:16:49.915858 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:16:49.919051 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:16:49.920881 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:16:49.921983 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:16:49.929023 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:16:49.957737 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:16:49.963923 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:16:49.985815 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:16:49.987022 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:16:49.988084 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:16:49.989573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:16:49.989783 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:16:49.991813 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:16:49.992804 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:16:49.994297 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:16:49.995882 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:16:49.997455 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:16:49.998994 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:16:50.000579 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:16:50.002371 systemd[1]: Stopped target sysinit.target - System Initialization. 
Nov 1 00:16:50.003945 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:16:50.005444 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:16:50.007145 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:16:50.007305 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:16:50.009401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:16:50.010550 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:16:50.011937 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:16:50.012211 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:16:50.013451 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:16:50.013601 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:16:50.016205 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:16:50.016438 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:16:50.018294 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:16:50.018417 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:16:50.020258 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:16:50.020374 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:16:50.030231 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:16:50.031077 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:16:50.031355 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:16:50.033944 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:16:50.035603 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:16:50.036867 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:16:50.038985 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:16:50.039170 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:16:50.049944 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:16:50.050208 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:16:50.063719 ignition[991]: INFO : Ignition 2.19.0 Nov 1 00:16:50.063719 ignition[991]: INFO : Stage: umount Nov 1 00:16:50.063719 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:16:50.063719 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:16:50.071340 ignition[991]: INFO : umount: umount passed Nov 1 00:16:50.071340 ignition[991]: INFO : Ignition finished successfully Nov 1 00:16:50.067577 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:16:50.067745 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:16:50.068839 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:16:50.068894 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:16:50.073082 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:16:50.073168 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:16:50.081874 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Nov 1 00:16:50.081954 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 1 00:16:50.084134 systemd[1]: Stopped target network.target - Network. Nov 1 00:16:50.085719 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:16:50.085803 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:16:50.087364 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:16:50.088703 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:16:50.093762 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:16:50.095390 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:16:50.096943 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:16:50.098672 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:16:50.098774 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:16:50.100269 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:16:50.100348 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:16:50.101983 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:16:50.102151 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:16:50.103609 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:16:50.103710 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:16:50.105128 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:16:50.106918 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:16:50.109411 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:16:50.109707 systemd-networkd[752]: eth0: DHCPv6 lease lost Nov 1 00:16:50.110456 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:16:50.110574 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:16:50.113264 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:16:50.113401 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:16:50.113741 systemd-networkd[752]: eth1: DHCPv6 lease lost Nov 1 00:16:50.118028 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:16:50.118214 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:16:50.121224 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:16:50.121810 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:16:50.124234 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:16:50.124327 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:16:50.130888 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:16:50.133288 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:16:50.133380 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:16:50.136221 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:16:50.136305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:16:50.137818 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:16:50.137872 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:16:50.139428 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Nov 1 00:16:50.139475 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:16:50.141035 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:16:50.157374 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:16:50.158428 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:16:50.160576 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:16:50.160773 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:16:50.163334 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:16:50.163404 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:16:50.165121 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:16:50.165163 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:16:50.166644 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:16:50.166706 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:16:50.171558 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:16:50.171702 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:16:50.173481 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:16:50.173574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:16:50.183043 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:16:50.184191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:16:50.184308 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:16:50.189367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:16:50.189489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:16:50.199430 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:16:50.199645 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:16:50.202869 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:16:50.208880 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:16:50.220809 systemd[1]: Switching root. Nov 1 00:16:50.285905 systemd-journald[185]: Journal stopped Nov 1 00:16:51.451205 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Nov 1 00:16:51.451280 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:16:51.451296 kernel: SELinux: policy capability open_perms=1 Nov 1 00:16:51.451307 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:16:51.451318 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:16:51.451335 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:16:51.451346 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:16:51.451362 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:16:51.451373 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:16:51.451385 systemd[1]: Successfully loaded SELinux policy in 46.889ms. Nov 1 00:16:51.451413 kernel: audit: type=1403 audit(1761956210.503:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:16:51.451426 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.263ms. 
Nov 1 00:16:51.451439 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:16:51.451453 systemd[1]: Detected virtualization kvm. Nov 1 00:16:51.451464 systemd[1]: Detected architecture x86-64. Nov 1 00:16:51.451478 systemd[1]: Detected first boot. Nov 1 00:16:51.451491 systemd[1]: Hostname set to <ci-4081.3.6-n-62dab69cc5>. Nov 1 00:16:51.451503 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:16:51.451515 zram_generator::config[1033]: No configuration found. Nov 1 00:16:51.451528 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:16:51.451539 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:16:51.451552 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 00:16:51.451563 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:16:51.451582 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:16:51.451595 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:16:51.451606 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:16:51.451619 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:16:51.451652 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:16:51.451664 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:16:51.451676 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:16:51.451687 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:16:51.451699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:16:51.451714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:16:51.451733 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:16:51.451744 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:16:51.451756 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:16:51.451768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:16:51.451780 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:16:51.451792 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:16:51.451803 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 00:16:51.451819 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 00:16:51.451831 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 00:16:51.451843 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:16:51.451862 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:16:51.451874 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:16:51.451887 systemd[1]: Reached target slices.target - Slice Units.
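The "Initializing machine ID from VM UUID" entry above is the first-boot path where systemd seeds /etc/machine-id from the hypervisor-supplied identity instead of generating a random one. On a KVM guest that identity is exposed through DMI; a sketch of where it can be read (the hashing and sd_id128 formatting systemd applies on top are not reproduced, and the file is typically root-readable only):

    # DMI product UUID as exposed by QEMU/KVM to the guest.
    with open("/sys/class/dmi/id/product_uuid") as f:
        print("VM UUID:", f.read().strip())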
Nov 1 00:16:51.451898 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:16:51.451913 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:16:51.451925 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:16:51.451937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:16:51.451949 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:16:51.451960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:16:51.451972 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:16:51.451984 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:16:51.451995 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:16:51.452006 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:16:51.452021 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:51.452033 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:16:51.452044 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:16:51.452056 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:16:51.452069 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:16:51.452081 systemd[1]: Reached target machines.target - Containers. Nov 1 00:16:51.452093 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:16:51.452104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:16:51.452120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:16:51.452134 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:16:51.452145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:16:51.452157 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:16:51.452168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:16:51.452180 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:16:51.452192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:16:51.452203 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:16:51.452215 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:16:51.452229 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 00:16:51.452241 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:16:51.452252 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:16:51.452264 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:16:51.452275 kernel: ACPI: bus type drm_connector registered Nov 1 00:16:51.452287 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 1 00:16:51.452297 kernel: fuse: init (API version 7.39) Nov 1 00:16:51.452309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:16:51.452320 kernel: loop: module loaded Nov 1 00:16:51.452354 systemd-journald[1116]: Collecting audit messages is disabled. Nov 1 00:16:51.452379 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:16:51.452394 systemd-journald[1116]: Journal started Nov 1 00:16:51.452419 systemd-journald[1116]: Runtime Journal (/run/log/journal/12aad9ec83334077838174407fb7dc79) is 4.9M, max 39.3M, 34.4M free. Nov 1 00:16:51.059443 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:16:51.080653 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 1 00:16:51.081275 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:16:51.462718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:16:51.467981 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:16:51.468050 systemd[1]: Stopped verity-setup.service. Nov 1 00:16:51.475679 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:51.479677 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:16:51.480235 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:16:51.483854 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:16:51.484928 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:16:51.485704 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:16:51.486710 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:16:51.487530 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:16:51.488479 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:16:51.489528 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:16:51.490609 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:16:51.490766 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:16:51.498988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:51.499153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:16:51.500266 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:16:51.500417 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:16:51.501540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:51.501826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:16:51.503020 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:16:51.503157 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:16:51.504352 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:51.504487 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:16:51.505450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:16:51.506736 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Nov 1 00:16:51.507890 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:16:51.522202 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:16:51.529992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:16:51.537766 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:16:51.539747 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:16:51.539800 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:16:51.542093 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:16:51.551916 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:16:51.557777 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:16:51.558767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:51.565898 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:16:51.569841 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:16:51.572754 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:16:51.579954 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:16:51.581830 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:16:51.591166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:16:51.596686 systemd-journald[1116]: Time spent on flushing to /var/log/journal/12aad9ec83334077838174407fb7dc79 is 117.279ms for 981 entries. Nov 1 00:16:51.596686 systemd-journald[1116]: System Journal (/var/log/journal/12aad9ec83334077838174407fb7dc79) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:16:51.786108 systemd-journald[1116]: Received client request to flush runtime journal. Nov 1 00:16:51.786175 kernel: loop0: detected capacity change from 0 to 219144 Nov 1 00:16:51.786202 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:16:51.786220 kernel: loop1: detected capacity change from 0 to 142488 Nov 1 00:16:51.599997 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:16:51.605814 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:16:51.613432 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:16:51.615522 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:16:51.619058 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:16:51.647337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:16:51.659161 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:16:51.678858 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:16:51.680024 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Nov 1 00:16:51.693203 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:16:51.761315 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:16:51.764279 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:16:51.793493 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:16:51.811080 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 00:16:51.826757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:16:51.845683 kernel: loop2: detected capacity change from 0 to 140768 Nov 1 00:16:51.859360 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:16:51.871139 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:16:51.930667 kernel: loop3: detected capacity change from 0 to 8 Nov 1 00:16:51.950184 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Nov 1 00:16:51.950207 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Nov 1 00:16:51.956509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:16:51.976720 kernel: loop4: detected capacity change from 0 to 219144 Nov 1 00:16:51.999681 kernel: loop5: detected capacity change from 0 to 142488 Nov 1 00:16:52.026133 kernel: loop6: detected capacity change from 0 to 140768 Nov 1 00:16:52.043778 kernel: loop7: detected capacity change from 0 to 8 Nov 1 00:16:52.046283 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Nov 1 00:16:52.047035 (sd-merge)[1178]: Merged extensions into '/usr'. Nov 1 00:16:52.069896 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:16:52.070218 systemd[1]: Reloading... Nov 1 00:16:52.262494 zram_generator::config[1203]: No configuration found. Nov 1 00:16:52.262674 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:16:52.438853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:16:52.484484 systemd[1]: Reloading finished in 413 ms. Nov 1 00:16:52.517128 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:16:52.518605 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:16:52.529995 systemd[1]: Starting ensure-sysext.service... Nov 1 00:16:52.538798 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:16:52.554880 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:16:52.554909 systemd[1]: Reloading... Nov 1 00:16:52.592492 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:16:52.593260 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:16:52.594327 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
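The (sd-merge) lines above are systemd-sysext at work: it gathers extension images from its search paths, here picking up the kubernetes.raw link written by Ignition alongside the Flatcar-shipped containerd, docker, and OEM extensions, and overlays them all onto /usr, after which systemd reloads to see the new unit files. A toy version of just the discovery pass, assuming the standard search directories (the actual merge is an overlay mount performed by systemd-sysext itself):

    import glob

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    candidates = []
    for base in SEARCH_PATHS:
        # Both raw disk images and plain directory trees are accepted.
        candidates += sorted(glob.glob(base + "/*.raw"))
        candidates += sorted(glob.glob(base + "/*/"))
    print("would merge:", candidates)  # e.g. /etc/extensions/kubernetes.raw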
Nov 1 00:16:52.594683 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Nov 1 00:16:52.594795 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Nov 1 00:16:52.598140 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:16:52.598270 systemd-tmpfiles[1248]: Skipping /boot Nov 1 00:16:52.609324 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:16:52.611180 systemd-tmpfiles[1248]: Skipping /boot Nov 1 00:16:52.682663 zram_generator::config[1277]: No configuration found. Nov 1 00:16:52.830306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:16:52.878748 systemd[1]: Reloading finished in 323 ms. Nov 1 00:16:52.897772 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:16:52.904334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:16:52.914902 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:16:52.917811 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:16:52.923894 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:16:52.930586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:16:52.933825 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:16:52.942961 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:16:52.960036 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:16:52.964492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:52.964695 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:16:52.974003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:16:52.978985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:16:52.983932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:16:52.985897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:52.986105 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:52.992968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:52.993522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:16:52.994750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:52.994872 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:52.995524 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Nov 1 00:16:52.998958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:16:53.009360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:53.010723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:16:53.017989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:16:53.019926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:53.025849 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Nov 1 00:16:53.025934 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:16:53.026750 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:53.028097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:53.028302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:16:53.035026 systemd[1]: Finished ensure-sysext.service. Nov 1 00:16:53.039833 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:16:53.057103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:53.057283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:16:53.058339 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:16:53.064356 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:53.064600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:16:53.066075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:16:53.067016 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:16:53.067204 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:16:53.074906 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:16:53.077005 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:16:53.079866 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:16:53.088807 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:16:53.097979 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:16:53.112207 augenrules[1362]: No rules Nov 1 00:16:53.111766 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:16:53.114457 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:16:53.195192 systemd-resolved[1324]: Positive Trust Anchors: Nov 1 00:16:53.195533 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:16:53.195611 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:16:53.200344 systemd-resolved[1324]: Using system hostname 'ci-4081.3.6-n-62dab69cc5'. Nov 1 00:16:53.202519 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:16:53.203800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:16:53.232363 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:16:53.233277 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:16:53.248737 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:16:53.249758 systemd-networkd[1359]: lo: Link UP Nov 1 00:16:53.250114 systemd-networkd[1359]: lo: Gained carrier Nov 1 00:16:53.252264 systemd-networkd[1359]: Enumeration completed Nov 1 00:16:53.252757 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:16:53.253585 systemd[1]: Reached target network.target - Network. Nov 1 00:16:53.259995 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:16:53.281692 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1377) Nov 1 00:16:53.297800 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 1 00:16:53.298616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:53.298770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:16:53.305828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:16:53.309775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:16:53.317830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:16:53.319837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:53.319886 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:16:53.319903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:53.324306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:53.325711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:16:53.336287 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 1 00:16:53.358956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:53.359674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:16:53.361385 kernel: ISO 9660 Extensions: RRIP_1991A Nov 1 00:16:53.365038 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 1 00:16:53.366959 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:53.367213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:16:53.370476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:16:53.399662 systemd-networkd[1359]: eth1: Configuring with /run/systemd/network/10-c6:32:7f:f4:5c:26.network. Nov 1 00:16:53.400398 systemd-networkd[1359]: eth1: Link UP Nov 1 00:16:53.400403 systemd-networkd[1359]: eth1: Gained carrier Nov 1 00:16:53.407006 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:16:53.409403 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:16:53.414942 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:16:53.431527 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:16:53.436963 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 1 00:16:53.462097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:16:53.481665 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:16:53.490649 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:16:53.508660 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 1 00:16:53.512760 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 1 00:16:53.548747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:16:53.554834 kernel: Console: switching to colour dummy device 80x25 Nov 1 00:16:53.559855 systemd-networkd[1359]: eth0: Configuring with /run/systemd/network/10-9e:3f:be:46:0e:8b.network. Nov 1 00:16:53.560609 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:16:53.561365 systemd-networkd[1359]: eth0: Link UP Nov 1 00:16:53.561481 systemd-networkd[1359]: eth0: Gained carrier Nov 1 00:16:53.564688 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:16:53.565835 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:16:53.568297 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 1 00:16:53.568371 kernel: [drm] features: -context_init Nov 1 00:16:53.578775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:16:53.579691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:16:53.582676 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:16:53.592652 kernel: [drm] number of scanouts: 1 Nov 1 00:16:53.592120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 00:16:53.595647 kernel: [drm] number of cap sets: 0 Nov 1 00:16:53.602657 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Nov 1 00:16:53.623977 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 1 00:16:53.624095 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 00:16:53.636013 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 1 00:16:53.655581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:16:53.656379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:16:53.685709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:16:53.756339 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:16:53.772069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:16:53.788296 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:16:53.795983 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:16:53.812656 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:16:53.843814 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:16:53.844932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:16:53.845036 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:16:53.845245 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:16:53.845360 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:16:53.845821 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:16:53.846859 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:16:53.846963 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:16:53.847028 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:16:53.847055 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:16:53.847118 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:16:53.848231 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:16:53.851117 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:16:53.861091 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:16:53.864687 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:16:53.865367 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:16:53.866036 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:16:53.866470 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:16:53.869468 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:16:53.869510 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:16:53.872775 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:16:53.880962 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 00:16:53.890072 lvm[1433]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Nov 1 00:16:53.886529 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:16:53.890743 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:16:53.897849 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:16:53.898534 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:16:53.908862 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:16:53.915812 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:16:53.916169 jq[1437]: false Nov 1 00:16:53.919890 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:16:53.931606 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:16:53.947915 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:16:53.949277 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:16:53.952160 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:16:53.953009 coreos-metadata[1435]: Nov 01 00:16:53.952 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:16:53.960012 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:16:53.969875 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:16:53.977590 coreos-metadata[1435]: Nov 01 00:16:53.970 INFO Fetch successful Nov 1 00:16:53.973850 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:16:53.981411 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:16:53.982749 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:16:53.983151 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:16:53.984212 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:16:53.995884 extend-filesystems[1440]: Found loop4 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found loop5 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found loop6 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found loop7 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found vda Nov 1 00:16:54.009990 extend-filesystems[1440]: Found vda1 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found vda2 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found vda3 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found usr Nov 1 00:16:54.009990 extend-filesystems[1440]: Found vda4 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found vda6 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found vda7 Nov 1 00:16:54.009990 extend-filesystems[1440]: Found vda9 Nov 1 00:16:54.009990 extend-filesystems[1440]: Checking size of /dev/vda9 Nov 1 00:16:54.066881 jq[1448]: true Nov 1 00:16:54.067137 tar[1451]: linux-amd64/LICENSE Nov 1 00:16:54.067137 tar[1451]: linux-amd64/helm Nov 1 00:16:54.026166 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 1 00:16:54.025888 dbus-daemon[1436]: [system] SELinux support is enabled Nov 1 00:16:54.076255 update_engine[1447]: I20251101 00:16:54.075696 1447 main.cc:92] Flatcar Update Engine starting Nov 1 00:16:54.032489 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:16:54.032522 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:16:54.039198 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:16:54.077240 jq[1463]: true Nov 1 00:16:54.039277 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 1 00:16:54.039298 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:16:54.088805 extend-filesystems[1440]: Resized partition /dev/vda9 Nov 1 00:16:54.094058 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:16:54.092390 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:16:54.109160 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 1 00:16:54.101050 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:16:54.114043 update_engine[1447]: I20251101 00:16:54.112473 1447 update_check_scheduler.cc:74] Next update check in 3m31s Nov 1 00:16:54.112827 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:16:54.115988 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:16:54.117750 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:16:54.160661 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1356) Nov 1 00:16:54.193448 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:16:54.200246 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:16:54.243196 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 1 00:16:54.276562 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:16:54.276562 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 1 00:16:54.276562 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 1 00:16:54.270097 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:16:54.278810 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Nov 1 00:16:54.278810 extend-filesystems[1440]: Found vdb Nov 1 00:16:54.270308 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:16:54.316445 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:16:54.316802 systemd-logind[1445]: New seat seat0. Nov 1 00:16:54.318459 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
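
The resize2fs lines above record an online grow of /dev/vda9 from 553472 to 15121403 blocks at a 4 KiB block size. A quick back-of-the-envelope check of what those block counts mean in bytes, using nothing beyond the "(4k) blocks" the log itself reports:

    # Convert the ext4 block counts from the log into human-readable sizes.
    BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output above

    old_blocks = 553_472
    new_blocks = 15_121_403

    old_bytes = old_blocks * BLOCK_SIZE   # ~2.11 GiB
    new_bytes = new_blocks * BLOCK_SIZE   # ~57.68 GiB

    GiB = 1024 ** 3
    print(f"old: {old_bytes / GiB:.2f} GiB, new: {new_bytes / GiB:.2f} GiB")
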
Nov 1 00:16:54.322149 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:16:54.322170 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:16:54.331878 systemd[1]: Starting sshkeys.service... Nov 1 00:16:54.332366 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:16:54.398400 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:16:54.412323 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 1 00:16:54.523906 coreos-metadata[1503]: Nov 01 00:16:54.523 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:16:54.555521 coreos-metadata[1503]: Nov 01 00:16:54.555 INFO Fetch successful Nov 1 00:16:54.574061 unknown[1503]: wrote ssh authorized keys file for user: core Nov 1 00:16:54.578463 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:16:54.654351 update-ssh-keys[1516]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:16:54.649731 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:16:54.655680 systemd[1]: Finished sshkeys.service. Nov 1 00:16:54.684654 containerd[1471]: time="2025-11-01T00:16:54.684075955Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:16:54.756117 containerd[1471]: time="2025-11-01T00:16:54.756038787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762038574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762094421Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762115140Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762291399Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762309304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762366607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762379847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762577679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762592815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762605458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762649 containerd[1471]: time="2025-11-01T00:16:54.762615707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762971 containerd[1471]: time="2025-11-01T00:16:54.762747251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:54.762997 containerd[1471]: time="2025-11-01T00:16:54.762973564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:54.763119 containerd[1471]: time="2025-11-01T00:16:54.763097598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:54.763119 containerd[1471]: time="2025-11-01T00:16:54.763117299Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:16:54.763207 containerd[1471]: time="2025-11-01T00:16:54.763192585Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:16:54.763257 containerd[1471]: time="2025-11-01T00:16:54.763241902Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:16:54.770401 containerd[1471]: time="2025-11-01T00:16:54.770316491Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:16:54.770401 containerd[1471]: time="2025-11-01T00:16:54.770403997Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:16:54.770581 containerd[1471]: time="2025-11-01T00:16:54.770423807Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:16:54.772226 containerd[1471]: time="2025-11-01T00:16:54.770440030Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:16:54.772226 containerd[1471]: time="2025-11-01T00:16:54.771703625Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:16:54.772226 containerd[1471]: time="2025-11-01T00:16:54.771893978Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:16:54.772333 containerd[1471]: time="2025-11-01T00:16:54.772286331Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:16:54.772425 containerd[1471]: time="2025-11-01T00:16:54.772401554Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772427785Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772441523Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772454559Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772467774Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772479982Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772495026Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772509254Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772521883Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772533865Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:16:54.772559 containerd[1471]: time="2025-11-01T00:16:54.772546391Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772571886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772586199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772598848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772612563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772653335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772689323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772707312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772739426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772762736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772779745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772796055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772819506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772839238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772861930Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:16:54.773069 containerd[1471]: time="2025-11-01T00:16:54.772937266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773889 containerd[1471]: time="2025-11-01T00:16:54.772960396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.773889 containerd[1471]: time="2025-11-01T00:16:54.772978860Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:16:54.774775 containerd[1471]: time="2025-11-01T00:16:54.774678564Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:16:54.774922 containerd[1471]: time="2025-11-01T00:16:54.774795407Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:16:54.774922 containerd[1471]: time="2025-11-01T00:16:54.774809910Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:16:54.774922 containerd[1471]: time="2025-11-01T00:16:54.774822678Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:16:54.774922 containerd[1471]: time="2025-11-01T00:16:54.774832349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:16:54.774922 containerd[1471]: time="2025-11-01T00:16:54.774845219Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:16:54.774922 containerd[1471]: time="2025-11-01T00:16:54.774856831Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:16:54.774922 containerd[1471]: time="2025-11-01T00:16:54.774867512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:16:54.775335 containerd[1471]: time="2025-11-01T00:16:54.775182078Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:16:54.775335 containerd[1471]: time="2025-11-01T00:16:54.775246355Z" level=info msg="Connect containerd service" Nov 1 00:16:54.775335 containerd[1471]: time="2025-11-01T00:16:54.775283460Z" level=info msg="using legacy CRI server" Nov 1 00:16:54.775335 containerd[1471]: time="2025-11-01T00:16:54.775291335Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:16:54.776192 containerd[1471]: time="2025-11-01T00:16:54.775412609Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:16:54.778315 containerd[1471]: time="2025-11-01T00:16:54.778021701Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:16:54.778315 
containerd[1471]: time="2025-11-01T00:16:54.778167686Z" level=info msg="Start subscribing containerd event" Nov 1 00:16:54.778315 containerd[1471]: time="2025-11-01T00:16:54.778226672Z" level=info msg="Start recovering state" Nov 1 00:16:54.778315 containerd[1471]: time="2025-11-01T00:16:54.778301162Z" level=info msg="Start event monitor" Nov 1 00:16:54.778315 containerd[1471]: time="2025-11-01T00:16:54.778317833Z" level=info msg="Start snapshots syncer" Nov 1 00:16:54.778448 containerd[1471]: time="2025-11-01T00:16:54.778326988Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:16:54.778448 containerd[1471]: time="2025-11-01T00:16:54.778335431Z" level=info msg="Start streaming server" Nov 1 00:16:54.780468 containerd[1471]: time="2025-11-01T00:16:54.778848471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:16:54.780468 containerd[1471]: time="2025-11-01T00:16:54.778905985Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:16:54.780468 containerd[1471]: time="2025-11-01T00:16:54.778962135Z" level=info msg="containerd successfully booted in 0.097771s" Nov 1 00:16:54.779071 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:16:54.813819 systemd-networkd[1359]: eth0: Gained IPv6LL Nov 1 00:16:54.814539 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:16:54.817836 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:16:54.821457 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:16:54.830907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:16:54.839973 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:16:54.885759 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:16:54.923903 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:16:54.956744 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:16:54.968061 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:16:54.988256 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:16:54.988497 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:16:54.998123 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:16:55.035302 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:16:55.046237 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:16:55.060014 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:16:55.086232 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:16:55.329051 systemd-networkd[1359]: eth1: Gained IPv6LL Nov 1 00:16:55.330923 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:16:55.502745 tar[1451]: linux-amd64/README.md Nov 1 00:16:55.528516 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:16:56.238146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:16:56.242023 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:16:56.243779 systemd[1]: Startup finished in 1.352s (kernel) + 5.693s (initrd) + 5.786s (userspace) = 12.832s. 
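
systemd's "Startup finished" line reports per-phase times plus a total. Summing the printed components gives 12.831s rather than the printed 12.832s; that is expected, since systemd formats each microsecond-precision duration independently, so the displayed parts need not add exactly to the displayed total. A one-line sanity check:

    # The printed phases sum to 12.831s; the printed total is 12.832s.
    # systemd formats each duration from microsecond counters independently,
    # so a 1 ms discrepancy in the displayed sum is normal rounding, not an error.
    kernel, initrd, userspace = 1.352, 5.693, 5.786
    print(f"{kernel + initrd + userspace:.3f}s")  # -> 12.831s
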
Nov 1 00:16:56.251342 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:16:56.865083 kubelet[1559]: E1101 00:16:56.864962 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:16:56.866965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:16:56.867126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:16:56.867950 systemd[1]: kubelet.service: Consumed 1.479s CPU time. Nov 1 00:16:58.549502 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:16:58.562037 systemd[1]: Started sshd@0-146.190.126.63:22-139.178.68.195:38148.service - OpenSSH per-connection server daemon (139.178.68.195:38148). Nov 1 00:16:58.621795 sshd[1572]: Accepted publickey for core from 139.178.68.195 port 38148 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:16:58.623612 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:58.634895 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:16:58.640131 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:16:58.642678 systemd-logind[1445]: New session 1 of user core. Nov 1 00:16:58.666930 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:16:58.675997 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:16:58.679477 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:16:58.788658 systemd[1576]: Queued start job for default target default.target. Nov 1 00:16:58.798953 systemd[1576]: Created slice app.slice - User Application Slice. Nov 1 00:16:58.798991 systemd[1576]: Reached target paths.target - Paths. Nov 1 00:16:58.799006 systemd[1576]: Reached target timers.target - Timers. Nov 1 00:16:58.800945 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:16:58.814752 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:16:58.814889 systemd[1576]: Reached target sockets.target - Sockets. Nov 1 00:16:58.814905 systemd[1576]: Reached target basic.target - Basic System. Nov 1 00:16:58.814955 systemd[1576]: Reached target default.target - Main User Target. Nov 1 00:16:58.814992 systemd[1576]: Startup finished in 127ms. Nov 1 00:16:58.815177 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:16:58.823920 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:16:58.895738 systemd[1]: Started sshd@1-146.190.126.63:22-139.178.68.195:38158.service - OpenSSH per-connection server daemon (139.178.68.195:38158). Nov 1 00:16:58.933110 sshd[1587]: Accepted publickey for core from 139.178.68.195 port 38158 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:16:58.934951 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:58.940003 systemd-logind[1445]: New session 2 of user core. Nov 1 00:16:58.949946 systemd[1]: Started session-2.scope - Session 2 of User core. 
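
The kubelet failure above is emitted in klog's standard header format, "Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg" (here: E1101 00:16:56.864962 1559 run.go:72] ...). A small sketch of pulling that header apart with a regex; the layout follows klog's documented convention, and the sample line is abbreviated from the entry above.

    # Parse a klog-style header: Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
    import re

    KLOG_RE = re.compile(
        r"(?P<level>[IWEF])"            # I=info, W=warning, E=error, F=fatal
        r"(?P<month>\d{2})(?P<day>\d{2})\s+"
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
        r"(?P<pid>\d+)\s+"
        r"(?P<file>[^:]+):(?P<line>\d+)\]\s+"
        r"(?P<msg>.*)"
    )

    sample = 'E1101 00:16:56.864962 1559 run.go:72] "command failed" err="..."'
    m = KLOG_RE.match(sample)
    assert m is not None
    print(m.group("level"), m.group("file"), m.group("line"), m.group("msg")[:30])
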
Nov 1 00:16:59.011844 sshd[1587]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:59.022322 systemd[1]: sshd@1-146.190.126.63:22-139.178.68.195:38158.service: Deactivated successfully. Nov 1 00:16:59.024253 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:16:59.026972 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:16:59.030148 systemd[1]: Started sshd@2-146.190.126.63:22-139.178.68.195:38170.service - OpenSSH per-connection server daemon (139.178.68.195:38170). Nov 1 00:16:59.032037 systemd-logind[1445]: Removed session 2. Nov 1 00:16:59.070263 sshd[1594]: Accepted publickey for core from 139.178.68.195 port 38170 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:16:59.071819 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:59.077167 systemd-logind[1445]: New session 3 of user core. Nov 1 00:16:59.084940 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:16:59.142741 sshd[1594]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:59.153220 systemd[1]: sshd@2-146.190.126.63:22-139.178.68.195:38170.service: Deactivated successfully. Nov 1 00:16:59.155604 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:16:59.157608 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:16:59.162010 systemd[1]: Started sshd@3-146.190.126.63:22-139.178.68.195:38180.service - OpenSSH per-connection server daemon (139.178.68.195:38180). Nov 1 00:16:59.163568 systemd-logind[1445]: Removed session 3. Nov 1 00:16:59.201745 sshd[1601]: Accepted publickey for core from 139.178.68.195 port 38180 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:16:59.203337 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:59.208127 systemd-logind[1445]: New session 4 of user core. Nov 1 00:16:59.218908 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:16:59.282983 sshd[1601]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:59.296380 systemd[1]: sshd@3-146.190.126.63:22-139.178.68.195:38180.service: Deactivated successfully. Nov 1 00:16:59.299358 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:16:59.300165 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:16:59.336310 systemd[1]: Started sshd@4-146.190.126.63:22-139.178.68.195:38192.service - OpenSSH per-connection server daemon (139.178.68.195:38192). Nov 1 00:16:59.338418 systemd-logind[1445]: Removed session 4. Nov 1 00:16:59.380044 sshd[1608]: Accepted publickey for core from 139.178.68.195 port 38192 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:16:59.381550 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:59.386463 systemd-logind[1445]: New session 5 of user core. Nov 1 00:16:59.398256 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 1 00:16:59.465774 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:16:59.466235 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:16:59.482566 sudo[1611]: pam_unix(sudo:session): session closed for user root Nov 1 00:16:59.486438 sshd[1608]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:59.502665 systemd[1]: sshd@4-146.190.126.63:22-139.178.68.195:38192.service: Deactivated successfully. Nov 1 00:16:59.504540 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:16:59.506363 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:16:59.511999 systemd[1]: Started sshd@5-146.190.126.63:22-139.178.68.195:38196.service - OpenSSH per-connection server daemon (139.178.68.195:38196). Nov 1 00:16:59.513072 systemd-logind[1445]: Removed session 5. Nov 1 00:16:59.557175 sshd[1616]: Accepted publickey for core from 139.178.68.195 port 38196 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:16:59.559309 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:59.566195 systemd-logind[1445]: New session 6 of user core. Nov 1 00:16:59.575915 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:16:59.641139 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:16:59.641494 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:16:59.647579 sudo[1620]: pam_unix(sudo:session): session closed for user root Nov 1 00:16:59.655261 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:16:59.655696 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:16:59.676097 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:16:59.679187 auditctl[1623]: No rules Nov 1 00:16:59.679590 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:16:59.679921 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:16:59.689250 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:16:59.720408 augenrules[1641]: No rules Nov 1 00:16:59.721513 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:16:59.723348 sudo[1619]: pam_unix(sudo:session): session closed for user root Nov 1 00:16:59.728022 sshd[1616]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:59.734683 systemd[1]: sshd@5-146.190.126.63:22-139.178.68.195:38196.service: Deactivated successfully. Nov 1 00:16:59.736943 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:16:59.739751 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:16:59.744184 systemd[1]: Started sshd@6-146.190.126.63:22-139.178.68.195:38200.service - OpenSSH per-connection server daemon (139.178.68.195:38200). Nov 1 00:16:59.746109 systemd-logind[1445]: Removed session 6. Nov 1 00:16:59.789334 sshd[1649]: Accepted publickey for core from 139.178.68.195 port 38200 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:16:59.791457 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:59.798829 systemd-logind[1445]: New session 7 of user core. 
Nov 1 00:16:59.804928 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:16:59.864056 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:16:59.864444 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:17:00.390293 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:17:00.390910 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:17:00.883935 dockerd[1667]: time="2025-11-01T00:17:00.882679489Z" level=info msg="Starting up" Nov 1 00:17:01.005955 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport345891769-merged.mount: Deactivated successfully. Nov 1 00:17:01.036045 dockerd[1667]: time="2025-11-01T00:17:01.035992732Z" level=info msg="Loading containers: start." Nov 1 00:17:01.181884 kernel: Initializing XFRM netlink socket Nov 1 00:17:01.219834 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:17:01.222075 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:17:01.233676 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:17:01.298347 systemd-networkd[1359]: docker0: Link UP Nov 1 00:17:01.299705 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. Nov 1 00:17:01.323513 dockerd[1667]: time="2025-11-01T00:17:01.323426142Z" level=info msg="Loading containers: done." Nov 1 00:17:01.350410 dockerd[1667]: time="2025-11-01T00:17:01.350320576Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:17:01.350706 dockerd[1667]: time="2025-11-01T00:17:01.350492173Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:17:01.350706 dockerd[1667]: time="2025-11-01T00:17:01.350646666Z" level=info msg="Daemon has completed initialization" Nov 1 00:17:01.402095 dockerd[1667]: time="2025-11-01T00:17:01.400931670Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:17:01.401775 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:17:02.215334 containerd[1471]: time="2025-11-01T00:17:02.215250333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:17:02.914060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount38855034.mount: Deactivated successfully. 
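
The docker warning above ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is keyed off a kernel build option. A sketch of how one might confirm such an option on a running host; both lookup paths are assumptions about the host, since /proc/config.gz only exists when CONFIG_IKCONFIG_PROC is enabled and the /boot/config-<release> fallback is a convention of other distros that Flatcar may not follow.

    # Check whether a kernel config option is set on the running host.
    # /proc/config.gz exists only when CONFIG_IKCONFIG_PROC is enabled;
    # many distros instead ship /boot/config-<release>. Both paths are
    # assumptions about the host -- Flatcar may provide neither.
    import gzip
    import os
    import platform

    def kernel_config_value(option: str) -> str | None:
        paths = ["/proc/config.gz", f"/boot/config-{platform.release()}"]
        for path in paths:
            if not os.path.exists(path):
                continue
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt") as fh:
                for line in fh:
                    if line.startswith(option + "="):
                        return line.strip().split("=", 1)[1]
        return None

    print(kernel_config_value("CONFIG_OVERLAY_FS_REDIRECT_DIR"))  # e.g. "y" or None
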
Nov 1 00:17:04.112929 containerd[1471]: time="2025-11-01T00:17:04.112836929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:04.114448 containerd[1471]: time="2025-11-01T00:17:04.114357860Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 1 00:17:04.116666 containerd[1471]: time="2025-11-01T00:17:04.115125839Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:04.118523 containerd[1471]: time="2025-11-01T00:17:04.118488503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:04.119741 containerd[1471]: time="2025-11-01T00:17:04.119700040Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.904390063s" Nov 1 00:17:04.119741 containerd[1471]: time="2025-11-01T00:17:04.119744053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 00:17:04.121150 containerd[1471]: time="2025-11-01T00:17:04.121111724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:17:05.447558 containerd[1471]: time="2025-11-01T00:17:05.447478180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:05.448948 containerd[1471]: time="2025-11-01T00:17:05.448882703Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 1 00:17:05.449664 containerd[1471]: time="2025-11-01T00:17:05.449531104Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:05.454044 containerd[1471]: time="2025-11-01T00:17:05.452768144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:05.454044 containerd[1471]: time="2025-11-01T00:17:05.453904227Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.33275634s" Nov 1 00:17:05.454044 containerd[1471]: time="2025-11-01T00:17:05.453937341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 00:17:05.454964 containerd[1471]: 
time="2025-11-01T00:17:05.454942475Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 00:17:06.578755 containerd[1471]: time="2025-11-01T00:17:06.577822828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:06.580053 containerd[1471]: time="2025-11-01T00:17:06.579767738Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 1 00:17:06.581077 containerd[1471]: time="2025-11-01T00:17:06.580710692Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:06.583417 containerd[1471]: time="2025-11-01T00:17:06.583387133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:06.584912 containerd[1471]: time="2025-11-01T00:17:06.584870102Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.129820725s" Nov 1 00:17:06.584973 containerd[1471]: time="2025-11-01T00:17:06.584915225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 00:17:06.585547 containerd[1471]: time="2025-11-01T00:17:06.585488866Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 00:17:07.117501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:17:07.124599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:07.386975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:07.391646 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:17:07.462840 kubelet[1889]: E1101 00:17:07.462796 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:17:07.467473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:17:07.467654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:17:07.849586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063573828.mount: Deactivated successfully. 
Nov 1 00:17:08.367209 containerd[1471]: time="2025-11-01T00:17:08.367149696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:08.368160 containerd[1471]: time="2025-11-01T00:17:08.368105923Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 1 00:17:08.368663 containerd[1471]: time="2025-11-01T00:17:08.368608839Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:08.371198 containerd[1471]: time="2025-11-01T00:17:08.371152761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:08.371841 containerd[1471]: time="2025-11-01T00:17:08.371803983Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.786276604s" Nov 1 00:17:08.371841 containerd[1471]: time="2025-11-01T00:17:08.371841017Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 00:17:08.372495 containerd[1471]: time="2025-11-01T00:17:08.372453660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 00:17:08.412416 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 1 00:17:08.976151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27574365.mount: Deactivated successfully. 
Nov 1 00:17:09.955840 containerd[1471]: time="2025-11-01T00:17:09.955769200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:09.957122 containerd[1471]: time="2025-11-01T00:17:09.957071569Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 1 00:17:09.958268 containerd[1471]: time="2025-11-01T00:17:09.957728501Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:09.963182 containerd[1471]: time="2025-11-01T00:17:09.963141256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:09.965791 containerd[1471]: time="2025-11-01T00:17:09.965733548Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.593225754s" Nov 1 00:17:09.965934 containerd[1471]: time="2025-11-01T00:17:09.965910815Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 00:17:09.968705 containerd[1471]: time="2025-11-01T00:17:09.968523007Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 00:17:10.527949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978095746.mount: Deactivated successfully. 
Nov 1 00:17:10.534230 containerd[1471]: time="2025-11-01T00:17:10.533365556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:10.534895 containerd[1471]: time="2025-11-01T00:17:10.534859345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 1 00:17:10.535596 containerd[1471]: time="2025-11-01T00:17:10.535561575Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:10.537514 containerd[1471]: time="2025-11-01T00:17:10.537480889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:10.538522 containerd[1471]: time="2025-11-01T00:17:10.538490927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 569.688852ms" Nov 1 00:17:10.538642 containerd[1471]: time="2025-11-01T00:17:10.538612070Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 00:17:10.539515 containerd[1471]: time="2025-11-01T00:17:10.539486283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 00:17:11.517875 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 1 00:17:13.409666 containerd[1471]: time="2025-11-01T00:17:13.408059726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:13.409666 containerd[1471]: time="2025-11-01T00:17:13.409258611Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 1 00:17:13.410367 containerd[1471]: time="2025-11-01T00:17:13.410337636Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:13.413734 containerd[1471]: time="2025-11-01T00:17:13.413689436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:13.415150 containerd[1471]: time="2025-11-01T00:17:13.415110096Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.875591043s" Nov 1 00:17:13.415229 containerd[1471]: time="2025-11-01T00:17:13.415159041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 00:17:17.510476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
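
Each "Pulled image" record above carries both a byte size and a wall-clock duration, so effective pull throughput is easy to estimate. The sizes are the compressed sizes containerd reports, so this approximates network throughput rather than on-disk footprint; all figures below are copied from the entries above.

    # Rough pull throughput from the containerd "Pulled image" entries above.
    # Sizes are the compressed sizes containerd reports, so this approximates
    # network throughput, not on-disk footprint.
    pulls = {
        "kube-apiserver:v1.34.1": (27_061_991, 1.904390063),
        "kube-proxy:v1.34.1":     (25_963_718, 1.786276604),
        "coredns:v1.12.1":        (22_384_805, 1.593225754),
        "etcd:3.6.4-0":           (74_311_308, 2.875591043),
    }

    MiB = 1024 ** 2
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / MiB / seconds:.1f} MiB/s")
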
Nov 1 00:17:17.519873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:17.597199 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:17:17.598069 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:17:17.598538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:17.609155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:17.648145 systemd[1]: Reloading requested from client PID 2029 ('systemctl') (unit session-7.scope)... Nov 1 00:17:17.648327 systemd[1]: Reloading... Nov 1 00:17:17.794665 zram_generator::config[2070]: No configuration found. Nov 1 00:17:17.914429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:17:17.986874 systemd[1]: Reloading finished in 338 ms. Nov 1 00:17:18.046214 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:17:18.046453 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:17:18.046830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:18.052989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:18.191262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:18.201099 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:17:18.251226 kubelet[2123]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:17:18.251591 kubelet[2123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:17:18.251799 kubelet[2123]: I1101 00:17:18.251757 2123 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:17:19.053777 kubelet[2123]: I1101 00:17:19.053719 2123 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:17:19.053777 kubelet[2123]: I1101 00:17:19.053755 2123 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:17:19.055070 kubelet[2123]: I1101 00:17:19.055013 2123 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:17:19.056026 kubelet[2123]: I1101 00:17:19.056000 2123 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:17:19.056299 kubelet[2123]: I1101 00:17:19.056276 2123 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:17:19.068578 kubelet[2123]: I1101 00:17:19.067444 2123 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:17:19.068578 kubelet[2123]: E1101 00:17:19.068523 2123 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://146.190.126.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:17:19.074089 kubelet[2123]: E1101 00:17:19.074048 2123 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:17:19.074225 kubelet[2123]: I1101 00:17:19.074132 2123 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:17:19.077173 kubelet[2123]: I1101 00:17:19.077147 2123 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 1 00:17:19.081159 kubelet[2123]: I1101 00:17:19.081102 2123 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:17:19.082583 kubelet[2123]: I1101 00:17:19.081149 2123 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-62dab69cc5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:17:19.082583 kubelet[2123]: I1101 00:17:19.082583 2123 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:17:19.082773 kubelet[2123]: I1101 00:17:19.082597 2123 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:17:19.082773 kubelet[2123]: I1101 00:17:19.082719 2123 container_manager_linux.go:315] "Creating Dynamic Resource 
Allocation (DRA) manager" Nov 1 00:17:19.086734 kubelet[2123]: I1101 00:17:19.086421 2123 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:17:19.088424 kubelet[2123]: I1101 00:17:19.088049 2123 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:17:19.088424 kubelet[2123]: I1101 00:17:19.088072 2123 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:17:19.088424 kubelet[2123]: I1101 00:17:19.088099 2123 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:17:19.088424 kubelet[2123]: I1101 00:17:19.088121 2123 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:17:19.090419 kubelet[2123]: E1101 00:17:19.090103 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://146.190.126.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-62dab69cc5&limit=500&resourceVersion=0\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:17:19.092652 kubelet[2123]: I1101 00:17:19.090601 2123 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:17:19.092652 kubelet[2123]: I1101 00:17:19.091134 2123 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:17:19.092652 kubelet[2123]: I1101 00:17:19.091161 2123 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:17:19.092652 kubelet[2123]: W1101 00:17:19.091210 2123 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
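
The container-manager configuration dump above lists kubelet's hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and nodefs/imagefs inodesFree < 5%. A simplified sketch of how such signal/operator/threshold triples are evaluated; the thresholds mirror the dump, but the node statistics fed in are invented sample numbers and the evaluation is a toy model, not kubelet's actual eviction manager.

    # Evaluate hard-eviction style thresholds like those in the config dump above.
    # The thresholds mirror the log; the node stats below are invented samples.
    thresholds = {
        "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi
        "nodefs.available":   ("percentage", 0.10),
        "imagefs.available":  ("percentage", 0.15),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    # signal -> (currently available, capacity); hypothetical sample values
    node_stats = {
        "memory.available":   (80 * 1024 * 1024, 2 * 1024**3),
        "nodefs.available":   (10 * 1024**3, 60 * 1024**3),
        "imagefs.available":  (10 * 1024**3, 60 * 1024**3),
        "nodefs.inodesFree":  (3_000_000, 3_750_000),
        "imagefs.inodesFree": (3_000_000, 3_750_000),
    }

    for signal, (kind, threshold) in thresholds.items():
        available, capacity = node_stats[signal]
        limit = threshold if kind == "quantity" else threshold * capacity
        if available < limit:
            print(f"{signal}: below threshold, eviction would trigger")
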
Nov 1 00:17:19.094355 kubelet[2123]: I1101 00:17:19.094333 2123 server.go:1262] "Started kubelet" Nov 1 00:17:19.096980 kubelet[2123]: E1101 00:17:19.096953 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://146.190.126.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:17:19.097841 kubelet[2123]: I1101 00:17:19.097082 2123 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:17:19.098066 kubelet[2123]: I1101 00:17:19.098043 2123 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:17:19.098256 kubelet[2123]: I1101 00:17:19.098216 2123 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:17:19.098356 kubelet[2123]: I1101 00:17:19.098342 2123 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:17:19.098745 kubelet[2123]: I1101 00:17:19.098729 2123 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:17:19.100740 kubelet[2123]: E1101 00:17:19.099667 2123 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.126.63:6443/api/v1/namespaces/default/events\": dial tcp 146.190.126.63:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-62dab69cc5.1873b9dd3cffaad3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-62dab69cc5,UID:ci-4081.3.6-n-62dab69cc5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-62dab69cc5,},FirstTimestamp:2025-11-01 00:17:19.094295251 +0000 UTC m=+0.888882672,LastTimestamp:2025-11-01 00:17:19.094295251 +0000 UTC m=+0.888882672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-62dab69cc5,}" Nov 1 00:17:19.102840 kubelet[2123]: I1101 00:17:19.102816 2123 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:17:19.103516 kubelet[2123]: I1101 00:17:19.103497 2123 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:17:19.111847 kubelet[2123]: E1101 00:17:19.111819 2123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" Nov 1 00:17:19.112023 kubelet[2123]: I1101 00:17:19.112014 2123 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:17:19.112253 kubelet[2123]: I1101 00:17:19.112240 2123 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:17:19.112351 kubelet[2123]: I1101 00:17:19.112343 2123 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:17:19.113389 kubelet[2123]: E1101 00:17:19.112805 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://146.190.126.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:17:19.113797 kubelet[2123]: E1101 00:17:19.113775 
2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-62dab69cc5?timeout=10s\": dial tcp 146.190.126.63:6443: connect: connection refused" interval="200ms" Nov 1 00:17:19.113960 kubelet[2123]: E1101 00:17:19.113946 2123 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:17:19.114213 kubelet[2123]: I1101 00:17:19.114197 2123 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:17:19.117490 kubelet[2123]: I1101 00:17:19.117431 2123 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:17:19.117840 kubelet[2123]: I1101 00:17:19.117815 2123 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:17:19.142020 kubelet[2123]: I1101 00:17:19.141875 2123 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:17:19.142240 kubelet[2123]: I1101 00:17:19.142162 2123 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:17:19.142240 kubelet[2123]: I1101 00:17:19.142201 2123 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:17:19.147786 kubelet[2123]: I1101 00:17:19.147712 2123 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:17:19.151973 kubelet[2123]: I1101 00:17:19.151846 2123 policy_none.go:49] "None policy: Start" Nov 1 00:17:19.151973 kubelet[2123]: I1101 00:17:19.151890 2123 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:17:19.151973 kubelet[2123]: I1101 00:17:19.151911 2123 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:17:19.153847 kubelet[2123]: I1101 00:17:19.153735 2123 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 00:17:19.154717 kubelet[2123]: I1101 00:17:19.153919 2123 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:17:19.154717 kubelet[2123]: I1101 00:17:19.153960 2123 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:17:19.154717 kubelet[2123]: E1101 00:17:19.154591 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://146.190.126.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:17:19.155258 kubelet[2123]: E1101 00:17:19.155224 2123 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:17:19.156596 kubelet[2123]: I1101 00:17:19.156003 2123 policy_none.go:47] "Start" Nov 1 00:17:19.163592 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:17:19.179492 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:17:19.183773 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 1 00:17:19.195252 kubelet[2123]: E1101 00:17:19.194887 2123 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:17:19.195252 kubelet[2123]: I1101 00:17:19.195103 2123 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:17:19.195252 kubelet[2123]: I1101 00:17:19.195124 2123 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:17:19.196362 kubelet[2123]: I1101 00:17:19.196321 2123 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:17:19.198237 kubelet[2123]: E1101 00:17:19.198212 2123 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:17:19.198457 kubelet[2123]: E1101 00:17:19.198267 2123 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-62dab69cc5\" not found" Nov 1 00:17:19.270781 systemd[1]: Created slice kubepods-burstable-podabd4c9dbfc4927a8da0f37f9f9e3e2fd.slice - libcontainer container kubepods-burstable-podabd4c9dbfc4927a8da0f37f9f9e3e2fd.slice. Nov 1 00:17:19.282044 kubelet[2123]: E1101 00:17:19.281747 2123 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.286217 systemd[1]: Created slice kubepods-burstable-podfd8daf47d94895b0344c3e9fe8b82aac.slice - libcontainer container kubepods-burstable-podfd8daf47d94895b0344c3e9fe8b82aac.slice. Nov 1 00:17:19.295758 kubelet[2123]: E1101 00:17:19.295498 2123 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.296907 kubelet[2123]: I1101 00:17:19.296860 2123 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.297583 kubelet[2123]: E1101 00:17:19.297550 2123 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.126.63:6443/api/v1/nodes\": dial tcp 146.190.126.63:6443: connect: connection refused" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.300467 systemd[1]: Created slice kubepods-burstable-pod935debbb5ccf49cf027aec7a9f298177.slice - libcontainer container kubepods-burstable-pod935debbb5ccf49cf027aec7a9f298177.slice. 
Nov 1 00:17:19.302350 kubelet[2123]: E1101 00:17:19.302319 2123 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.313840 kubelet[2123]: I1101 00:17:19.313659 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.313840 kubelet[2123]: I1101 00:17:19.313706 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.313840 kubelet[2123]: I1101 00:17:19.313728 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.313840 kubelet[2123]: I1101 00:17:19.313747 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd8daf47d94895b0344c3e9fe8b82aac-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-62dab69cc5\" (UID: \"fd8daf47d94895b0344c3e9fe8b82aac\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.313840 kubelet[2123]: I1101 00:17:19.313776 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/935debbb5ccf49cf027aec7a9f298177-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-62dab69cc5\" (UID: \"935debbb5ccf49cf027aec7a9f298177\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.314153 kubelet[2123]: I1101 00:17:19.313795 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/935debbb5ccf49cf027aec7a9f298177-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-62dab69cc5\" (UID: \"935debbb5ccf49cf027aec7a9f298177\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.314153 kubelet[2123]: I1101 00:17:19.313809 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.314153 kubelet[2123]: I1101 00:17:19.313824 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: 
\"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.314153 kubelet[2123]: I1101 00:17:19.313856 2123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/935debbb5ccf49cf027aec7a9f298177-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-62dab69cc5\" (UID: \"935debbb5ccf49cf027aec7a9f298177\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.316819 kubelet[2123]: E1101 00:17:19.316743 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-62dab69cc5?timeout=10s\": dial tcp 146.190.126.63:6443: connect: connection refused" interval="400ms" Nov 1 00:17:19.499616 kubelet[2123]: I1101 00:17:19.499575 2123 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.499966 kubelet[2123]: E1101 00:17:19.499933 2123 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.126.63:6443/api/v1/nodes\": dial tcp 146.190.126.63:6443: connect: connection refused" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.588017 kubelet[2123]: E1101 00:17:19.587873 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:19.588920 containerd[1471]: time="2025-11-01T00:17:19.588863965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-62dab69cc5,Uid:abd4c9dbfc4927a8da0f37f9f9e3e2fd,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:19.590789 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Nov 1 00:17:19.599099 kubelet[2123]: E1101 00:17:19.599047 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:19.599714 containerd[1471]: time="2025-11-01T00:17:19.599620702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-62dab69cc5,Uid:fd8daf47d94895b0344c3e9fe8b82aac,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:19.606955 kubelet[2123]: E1101 00:17:19.606746 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:19.607680 containerd[1471]: time="2025-11-01T00:17:19.607453531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-62dab69cc5,Uid:935debbb5ccf49cf027aec7a9f298177,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:19.718193 kubelet[2123]: E1101 00:17:19.718145 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-62dab69cc5?timeout=10s\": dial tcp 146.190.126.63:6443: connect: connection refused" interval="800ms" Nov 1 00:17:19.901845 kubelet[2123]: I1101 00:17:19.901347 2123 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:19.901845 kubelet[2123]: E1101 00:17:19.901779 2123 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.126.63:6443/api/v1/nodes\": dial tcp 146.190.126.63:6443: connect: connection refused" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:20.012587 kubelet[2123]: E1101 00:17:20.012087 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://146.190.126.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:17:20.012587 kubelet[2123]: E1101 00:17:20.012089 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://146.190.126.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:17:20.021585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761319711.mount: Deactivated successfully. 
Nov 1 00:17:20.028663 containerd[1471]: time="2025-11-01T00:17:20.028590654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:20.029494 containerd[1471]: time="2025-11-01T00:17:20.029434209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:20.030235 containerd[1471]: time="2025-11-01T00:17:20.030191902Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:17:20.031061 containerd[1471]: time="2025-11-01T00:17:20.031012989Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:20.032415 containerd[1471]: time="2025-11-01T00:17:20.032041657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:17:20.032415 containerd[1471]: time="2025-11-01T00:17:20.032287936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:17:20.032415 containerd[1471]: time="2025-11-01T00:17:20.032358199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:20.035537 containerd[1471]: time="2025-11-01T00:17:20.035507014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:20.036978 containerd[1471]: time="2025-11-01T00:17:20.036940495Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 447.983192ms" Nov 1 00:17:20.038430 containerd[1471]: time="2025-11-01T00:17:20.038163091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 438.37424ms" Nov 1 00:17:20.040924 containerd[1471]: time="2025-11-01T00:17:20.040784771Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 433.229915ms" Nov 1 00:17:20.054200 kubelet[2123]: E1101 00:17:20.052859 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://146.190.126.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-62dab69cc5&limit=500&resourceVersion=0\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:17:20.238332 containerd[1471]: time="2025-11-01T00:17:20.235898331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:20.238332 containerd[1471]: time="2025-11-01T00:17:20.235980964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:20.238332 containerd[1471]: time="2025-11-01T00:17:20.236008121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.238332 containerd[1471]: time="2025-11-01T00:17:20.236120194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.243384 containerd[1471]: time="2025-11-01T00:17:20.240847684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:20.243384 containerd[1471]: time="2025-11-01T00:17:20.240934194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:20.243384 containerd[1471]: time="2025-11-01T00:17:20.241000523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.243384 containerd[1471]: time="2025-11-01T00:17:20.241141098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.256872 containerd[1471]: time="2025-11-01T00:17:20.256447464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:20.256872 containerd[1471]: time="2025-11-01T00:17:20.256537074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:20.256872 containerd[1471]: time="2025-11-01T00:17:20.256568459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.257120 containerd[1471]: time="2025-11-01T00:17:20.256785939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.281856 systemd[1]: Started cri-containerd-246d01943053f2c0f9d91ccd75cb736eb0128f6f06ce640a083984909cb34e04.scope - libcontainer container 246d01943053f2c0f9d91ccd75cb736eb0128f6f06ce640a083984909cb34e04. Nov 1 00:17:20.289512 systemd[1]: Started cri-containerd-eb0fcae10c7e07cd8c1ba4eb04d5946c4e1a501c8914d21a43877330e641946a.scope - libcontainer container eb0fcae10c7e07cd8c1ba4eb04d5946c4e1a501c8914d21a43877330e641946a. Nov 1 00:17:20.312843 systemd[1]: Started cri-containerd-8d9a197c5312acf300119d7e31d54852720ceefef270467c5482f827b985ba7e.scope - libcontainer container 8d9a197c5312acf300119d7e31d54852720ceefef270467c5482f827b985ba7e. 
Nov 1 00:17:20.391542 containerd[1471]: time="2025-11-01T00:17:20.391473753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-62dab69cc5,Uid:935debbb5ccf49cf027aec7a9f298177,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d9a197c5312acf300119d7e31d54852720ceefef270467c5482f827b985ba7e\"" Nov 1 00:17:20.395656 kubelet[2123]: E1101 00:17:20.395464 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:20.405598 containerd[1471]: time="2025-11-01T00:17:20.405282671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-62dab69cc5,Uid:fd8daf47d94895b0344c3e9fe8b82aac,Namespace:kube-system,Attempt:0,} returns sandbox id \"246d01943053f2c0f9d91ccd75cb736eb0128f6f06ce640a083984909cb34e04\"" Nov 1 00:17:20.409280 containerd[1471]: time="2025-11-01T00:17:20.408941587Z" level=info msg="CreateContainer within sandbox \"8d9a197c5312acf300119d7e31d54852720ceefef270467c5482f827b985ba7e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:17:20.410215 kubelet[2123]: E1101 00:17:20.410060 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:20.410308 containerd[1471]: time="2025-11-01T00:17:20.410108364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-62dab69cc5,Uid:abd4c9dbfc4927a8da0f37f9f9e3e2fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb0fcae10c7e07cd8c1ba4eb04d5946c4e1a501c8914d21a43877330e641946a\"" Nov 1 00:17:20.412695 kubelet[2123]: E1101 00:17:20.412663 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:20.417067 containerd[1471]: time="2025-11-01T00:17:20.417030471Z" level=info msg="CreateContainer within sandbox \"246d01943053f2c0f9d91ccd75cb736eb0128f6f06ce640a083984909cb34e04\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:17:20.420477 containerd[1471]: time="2025-11-01T00:17:20.420378680Z" level=info msg="CreateContainer within sandbox \"eb0fcae10c7e07cd8c1ba4eb04d5946c4e1a501c8914d21a43877330e641946a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:17:20.433448 containerd[1471]: time="2025-11-01T00:17:20.433358899Z" level=info msg="CreateContainer within sandbox \"8d9a197c5312acf300119d7e31d54852720ceefef270467c5482f827b985ba7e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4d78fce392b3cf91a0e1f7d072bcbcb12843fdfcc8151bd4b7a1cea95530915\"" Nov 1 00:17:20.434307 containerd[1471]: time="2025-11-01T00:17:20.434268820Z" level=info msg="StartContainer for \"d4d78fce392b3cf91a0e1f7d072bcbcb12843fdfcc8151bd4b7a1cea95530915\"" Nov 1 00:17:20.442168 containerd[1471]: time="2025-11-01T00:17:20.442108968Z" level=info msg="CreateContainer within sandbox \"246d01943053f2c0f9d91ccd75cb736eb0128f6f06ce640a083984909cb34e04\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b3f0abcdcef2bcb178b0afeb086b1d6a4ae4aeeae4050d459809e4f73b42a90\"" Nov 1 00:17:20.443800 containerd[1471]: time="2025-11-01T00:17:20.443719639Z" level=info msg="StartContainer for 
\"6b3f0abcdcef2bcb178b0afeb086b1d6a4ae4aeeae4050d459809e4f73b42a90\"" Nov 1 00:17:20.447718 containerd[1471]: time="2025-11-01T00:17:20.447614190Z" level=info msg="CreateContainer within sandbox \"eb0fcae10c7e07cd8c1ba4eb04d5946c4e1a501c8914d21a43877330e641946a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"764f631aabe8e36f95f3ee0e7a1ceb1df3bb77ba4e9d35e4031bd5a3b69a2c79\"" Nov 1 00:17:20.449793 containerd[1471]: time="2025-11-01T00:17:20.448456743Z" level=info msg="StartContainer for \"764f631aabe8e36f95f3ee0e7a1ceb1df3bb77ba4e9d35e4031bd5a3b69a2c79\"" Nov 1 00:17:20.492205 systemd[1]: Started cri-containerd-d4d78fce392b3cf91a0e1f7d072bcbcb12843fdfcc8151bd4b7a1cea95530915.scope - libcontainer container d4d78fce392b3cf91a0e1f7d072bcbcb12843fdfcc8151bd4b7a1cea95530915. Nov 1 00:17:20.502909 systemd[1]: Started cri-containerd-6b3f0abcdcef2bcb178b0afeb086b1d6a4ae4aeeae4050d459809e4f73b42a90.scope - libcontainer container 6b3f0abcdcef2bcb178b0afeb086b1d6a4ae4aeeae4050d459809e4f73b42a90. Nov 1 00:17:20.519684 kubelet[2123]: E1101 00:17:20.519523 2123 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.126.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-62dab69cc5?timeout=10s\": dial tcp 146.190.126.63:6443: connect: connection refused" interval="1.6s" Nov 1 00:17:20.526920 systemd[1]: Started cri-containerd-764f631aabe8e36f95f3ee0e7a1ceb1df3bb77ba4e9d35e4031bd5a3b69a2c79.scope - libcontainer container 764f631aabe8e36f95f3ee0e7a1ceb1df3bb77ba4e9d35e4031bd5a3b69a2c79. Nov 1 00:17:20.555557 kubelet[2123]: E1101 00:17:20.555181 2123 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://146.190.126.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.126.63:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:17:20.577672 containerd[1471]: time="2025-11-01T00:17:20.577232527Z" level=info msg="StartContainer for \"d4d78fce392b3cf91a0e1f7d072bcbcb12843fdfcc8151bd4b7a1cea95530915\" returns successfully" Nov 1 00:17:20.595658 containerd[1471]: time="2025-11-01T00:17:20.595569936Z" level=info msg="StartContainer for \"6b3f0abcdcef2bcb178b0afeb086b1d6a4ae4aeeae4050d459809e4f73b42a90\" returns successfully" Nov 1 00:17:20.641133 containerd[1471]: time="2025-11-01T00:17:20.640947925Z" level=info msg="StartContainer for \"764f631aabe8e36f95f3ee0e7a1ceb1df3bb77ba4e9d35e4031bd5a3b69a2c79\" returns successfully" Nov 1 00:17:20.702768 kubelet[2123]: I1101 00:17:20.702732 2123 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:20.703164 kubelet[2123]: E1101 00:17:20.703125 2123 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.126.63:6443/api/v1/nodes\": dial tcp 146.190.126.63:6443: connect: connection refused" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:21.164199 kubelet[2123]: E1101 00:17:21.164161 2123 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:21.164438 kubelet[2123]: E1101 00:17:21.164300 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:21.168650 
kubelet[2123]: E1101 00:17:21.167992 2123 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:21.168650 kubelet[2123]: E1101 00:17:21.168110 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:21.171656 kubelet[2123]: E1101 00:17:21.171076 2123 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:21.171656 kubelet[2123]: E1101 00:17:21.171451 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:22.173657 kubelet[2123]: E1101 00:17:22.173586 2123 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.174269 kubelet[2123]: E1101 00:17:22.173808 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:22.174269 kubelet[2123]: E1101 00:17:22.174123 2123 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.174269 kubelet[2123]: E1101 00:17:22.174211 2123 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:22.305316 kubelet[2123]: I1101 00:17:22.305276 2123 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.599186 kubelet[2123]: E1101 00:17:22.599128 2123 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-62dab69cc5\" not found" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.691434 kubelet[2123]: I1101 00:17:22.691392 2123 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.691619 kubelet[2123]: E1101 00:17:22.691453 2123 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-62dab69cc5\": node \"ci-4081.3.6-n-62dab69cc5\" not found" Nov 1 00:17:22.714045 kubelet[2123]: I1101 00:17:22.713975 2123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.730751 kubelet[2123]: E1101 00:17:22.730699 2123 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.730751 kubelet[2123]: I1101 00:17:22.730736 2123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.734078 kubelet[2123]: E1101 00:17:22.734031 2123 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4081.3.6-n-62dab69cc5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.734078 kubelet[2123]: I1101 00:17:22.734068 2123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:22.736863 kubelet[2123]: E1101 00:17:22.736801 2123 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-62dab69cc5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:23.099539 kubelet[2123]: I1101 00:17:23.099475 2123 apiserver.go:52] "Watching apiserver" Nov 1 00:17:23.113291 kubelet[2123]: I1101 00:17:23.113231 2123 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:17:24.657529 systemd[1]: Reloading requested from client PID 2412 ('systemctl') (unit session-7.scope)... Nov 1 00:17:24.657966 systemd[1]: Reloading... Nov 1 00:17:24.798748 zram_generator::config[2451]: No configuration found. Nov 1 00:17:24.970149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:17:25.061341 systemd[1]: Reloading finished in 402 ms. Nov 1 00:17:25.105924 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:25.120789 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:17:25.121248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:25.121610 systemd[1]: kubelet.service: Consumed 1.346s CPU time, 118.3M memory peak, 0B memory swap peak. Nov 1 00:17:25.128953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:25.325835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:25.329592 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:17:25.407619 kubelet[2502]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:17:25.407619 kubelet[2502]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:17:25.407619 kubelet[2502]: I1101 00:17:25.408125 2502 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:17:25.427850 kubelet[2502]: I1101 00:17:25.427805 2502 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:17:25.428517 kubelet[2502]: I1101 00:17:25.428496 2502 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:17:25.430103 kubelet[2502]: I1101 00:17:25.430072 2502 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:17:25.430233 kubelet[2502]: I1101 00:17:25.430216 2502 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:17:25.430934 kubelet[2502]: I1101 00:17:25.430913 2502 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:17:25.433943 kubelet[2502]: I1101 00:17:25.433912 2502 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:17:25.437310 kubelet[2502]: I1101 00:17:25.437244 2502 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:17:25.442958 kubelet[2502]: E1101 00:17:25.442897 2502 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:17:25.443147 kubelet[2502]: I1101 00:17:25.443000 2502 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:17:25.448029 kubelet[2502]: I1101 00:17:25.447985 2502 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 1 00:17:25.450148 kubelet[2502]: I1101 00:17:25.450052 2502 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:17:25.450431 kubelet[2502]: I1101 00:17:25.450134 2502 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-62dab69cc5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:17:25.450431 kubelet[2502]: I1101 00:17:25.450431 2502 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:17:25.450593 kubelet[2502]: I1101 00:17:25.450448 2502 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:17:25.450593 kubelet[2502]: I1101 00:17:25.450491 2502 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:17:25.452222 kubelet[2502]: I1101 00:17:25.452191 2502 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:17:25.458698 kubelet[2502]: I1101 00:17:25.457691 2502 kubelet.go:475] "Attempting to sync node 
with API server" Nov 1 00:17:25.458698 kubelet[2502]: I1101 00:17:25.457739 2502 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:17:25.458698 kubelet[2502]: I1101 00:17:25.457780 2502 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:17:25.458698 kubelet[2502]: I1101 00:17:25.457800 2502 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:17:25.464607 kubelet[2502]: I1101 00:17:25.464566 2502 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:17:25.466936 kubelet[2502]: I1101 00:17:25.466484 2502 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:17:25.466936 kubelet[2502]: I1101 00:17:25.466548 2502 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:17:25.480666 kubelet[2502]: I1101 00:17:25.479336 2502 server.go:1262] "Started kubelet" Nov 1 00:17:25.487920 kubelet[2502]: I1101 00:17:25.487838 2502 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:17:25.489880 kubelet[2502]: I1101 00:17:25.489508 2502 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:17:25.490783 kubelet[2502]: I1101 00:17:25.490737 2502 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:17:25.493902 kubelet[2502]: I1101 00:17:25.493868 2502 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:17:25.504012 kubelet[2502]: I1101 00:17:25.503973 2502 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:17:25.505482 kubelet[2502]: I1101 00:17:25.504499 2502 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:17:25.505482 kubelet[2502]: I1101 00:17:25.504644 2502 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:17:25.514261 kubelet[2502]: I1101 00:17:25.512752 2502 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:17:25.514261 kubelet[2502]: I1101 00:17:25.512781 2502 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:17:25.514261 kubelet[2502]: I1101 00:17:25.512874 2502 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:17:25.514261 kubelet[2502]: I1101 00:17:25.513134 2502 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:17:25.514261 kubelet[2502]: I1101 00:17:25.513207 2502 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:17:25.514261 kubelet[2502]: I1101 00:17:25.513478 2502 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:17:25.520706 kubelet[2502]: I1101 00:17:25.520573 2502 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:17:25.522809 kubelet[2502]: I1101 00:17:25.522778 2502 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:17:25.523338 kubelet[2502]: I1101 00:17:25.522947 2502 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:17:25.523338 kubelet[2502]: I1101 00:17:25.522985 2502 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:17:25.523338 kubelet[2502]: E1101 00:17:25.523043 2502 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:17:25.534920 kubelet[2502]: E1101 00:17:25.534881 2502 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:17:25.589169 kubelet[2502]: I1101 00:17:25.588135 2502 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:17:25.589169 kubelet[2502]: I1101 00:17:25.588158 2502 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:17:25.589169 kubelet[2502]: I1101 00:17:25.588180 2502 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:17:25.589169 kubelet[2502]: I1101 00:17:25.588344 2502 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:17:25.589169 kubelet[2502]: I1101 00:17:25.588354 2502 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:17:25.589169 kubelet[2502]: I1101 00:17:25.588390 2502 policy_none.go:49] "None policy: Start" Nov 1 00:17:25.589169 kubelet[2502]: I1101 00:17:25.588420 2502 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:17:25.589169 kubelet[2502]: I1101 00:17:25.588432 2502 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:17:25.591708 kubelet[2502]: I1101 00:17:25.591103 2502 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:17:25.591708 kubelet[2502]: I1101 00:17:25.591152 2502 policy_none.go:47] "Start" Nov 1 00:17:25.602854 kubelet[2502]: E1101 00:17:25.602803 2502 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:17:25.603103 kubelet[2502]: I1101 00:17:25.603079 2502 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:17:25.603147 kubelet[2502]: I1101 00:17:25.603106 2502 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:17:25.604033 kubelet[2502]: I1101 00:17:25.603998 2502 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:17:25.606886 kubelet[2502]: E1101 00:17:25.606839 2502 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:17:25.624539 kubelet[2502]: I1101 00:17:25.624402 2502 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.625592 kubelet[2502]: I1101 00:17:25.624875 2502 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.625592 kubelet[2502]: I1101 00:17:25.625194 2502 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.635444 kubelet[2502]: I1101 00:17:25.635402 2502 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:17:25.641922 kubelet[2502]: I1101 00:17:25.641710 2502 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:17:25.642295 kubelet[2502]: I1101 00:17:25.642278 2502 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:17:25.706752 kubelet[2502]: I1101 00:17:25.706410 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.706752 kubelet[2502]: I1101 00:17:25.706453 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.706752 kubelet[2502]: I1101 00:17:25.706475 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.706752 kubelet[2502]: I1101 00:17:25.706502 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/935debbb5ccf49cf027aec7a9f298177-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-62dab69cc5\" (UID: \"935debbb5ccf49cf027aec7a9f298177\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.706752 kubelet[2502]: I1101 00:17:25.706525 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/935debbb5ccf49cf027aec7a9f298177-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-62dab69cc5\" (UID: \"935debbb5ccf49cf027aec7a9f298177\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.707130 kubelet[2502]: I1101 00:17:25.706543 2502 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/935debbb5ccf49cf027aec7a9f298177-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-62dab69cc5\" (UID: \"935debbb5ccf49cf027aec7a9f298177\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.707130 kubelet[2502]: I1101 00:17:25.706559 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.707130 kubelet[2502]: I1101 00:17:25.706576 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abd4c9dbfc4927a8da0f37f9f9e3e2fd-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-62dab69cc5\" (UID: \"abd4c9dbfc4927a8da0f37f9f9e3e2fd\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.707130 kubelet[2502]: I1101 00:17:25.706592 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd8daf47d94895b0344c3e9fe8b82aac-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-62dab69cc5\" (UID: \"fd8daf47d94895b0344c3e9fe8b82aac\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.716696 kubelet[2502]: I1101 00:17:25.713892 2502 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.729717 kubelet[2502]: I1101 00:17:25.728990 2502 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.729717 kubelet[2502]: I1101 00:17:25.729106 2502 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:25.938869 kubelet[2502]: E1101 00:17:25.938056 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:25.943304 kubelet[2502]: E1101 00:17:25.942426 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:25.943304 kubelet[2502]: E1101 00:17:25.943010 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:26.460869 kubelet[2502]: I1101 00:17:26.460819 2502 apiserver.go:52] "Watching apiserver" Nov 1 00:17:26.503461 kubelet[2502]: I1101 00:17:26.503246 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-62dab69cc5" podStartSLOduration=1.503205068 podStartE2EDuration="1.503205068s" podCreationTimestamp="2025-11-01 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:17:26.503011362 +0000 UTC m=+1.166638388" watchObservedRunningTime="2025-11-01 00:17:26.503205068 +0000 UTC m=+1.166832086" Nov 1 00:17:26.505909 kubelet[2502]: I1101 
00:17:26.505760 2502 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:17:26.518620 kubelet[2502]: I1101 00:17:26.518283 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-62dab69cc5" podStartSLOduration=1.5182499649999999 podStartE2EDuration="1.518249965s" podCreationTimestamp="2025-11-01 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:17:26.518075707 +0000 UTC m=+1.181702732" watchObservedRunningTime="2025-11-01 00:17:26.518249965 +0000 UTC m=+1.181876982" Nov 1 00:17:26.561053 kubelet[2502]: I1101 00:17:26.561021 2502 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:26.563664 kubelet[2502]: E1101 00:17:26.562943 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:26.563664 kubelet[2502]: E1101 00:17:26.563585 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:26.576376 kubelet[2502]: I1101 00:17:26.575593 2502 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:17:26.576376 kubelet[2502]: E1101 00:17:26.575701 2502 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-62dab69cc5\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-62dab69cc5" Nov 1 00:17:26.576376 kubelet[2502]: E1101 00:17:26.575868 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:26.584356 kubelet[2502]: I1101 00:17:26.584285 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-62dab69cc5" podStartSLOduration=1.5842640289999999 podStartE2EDuration="1.584264029s" podCreationTimestamp="2025-11-01 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:17:26.530733758 +0000 UTC m=+1.194360783" watchObservedRunningTime="2025-11-01 00:17:26.584264029 +0000 UTC m=+1.247891068" Nov 1 00:17:27.562663 kubelet[2502]: E1101 00:17:27.562213 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:27.562663 kubelet[2502]: E1101 00:17:27.562281 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:28.564319 kubelet[2502]: E1101 00:17:28.564275 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:29.861949 kubelet[2502]: I1101 00:17:29.861898 2502 kuberuntime_manager.go:1828] "Updating runtime config through 
cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:17:29.865689 containerd[1471]: time="2025-11-01T00:17:29.865573276Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:17:29.866422 kubelet[2502]: I1101 00:17:29.866391 2502 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:17:30.765613 systemd[1]: Created slice kubepods-besteffort-pod49823c64_fd36_4227_b056_39973242f31b.slice - libcontainer container kubepods-besteffort-pod49823c64_fd36_4227_b056_39973242f31b.slice. Nov 1 00:17:30.841756 kubelet[2502]: I1101 00:17:30.841710 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49823c64-fd36-4227-b056-39973242f31b-kube-proxy\") pod \"kube-proxy-gtzbj\" (UID: \"49823c64-fd36-4227-b056-39973242f31b\") " pod="kube-system/kube-proxy-gtzbj" Nov 1 00:17:30.841756 kubelet[2502]: I1101 00:17:30.841753 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49823c64-fd36-4227-b056-39973242f31b-xtables-lock\") pod \"kube-proxy-gtzbj\" (UID: \"49823c64-fd36-4227-b056-39973242f31b\") " pod="kube-system/kube-proxy-gtzbj" Nov 1 00:17:30.841756 kubelet[2502]: I1101 00:17:30.841774 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49823c64-fd36-4227-b056-39973242f31b-lib-modules\") pod \"kube-proxy-gtzbj\" (UID: \"49823c64-fd36-4227-b056-39973242f31b\") " pod="kube-system/kube-proxy-gtzbj" Nov 1 00:17:30.842085 kubelet[2502]: I1101 00:17:30.841793 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gggqk\" (UniqueName: \"kubernetes.io/projected/49823c64-fd36-4227-b056-39973242f31b-kube-api-access-gggqk\") pod \"kube-proxy-gtzbj\" (UID: \"49823c64-fd36-4227-b056-39973242f31b\") " pod="kube-system/kube-proxy-gtzbj" Nov 1 00:17:31.019080 systemd[1]: Created slice kubepods-besteffort-pod6869fc58_ac44_460f_9b01_97570779d9f4.slice - libcontainer container kubepods-besteffort-pod6869fc58_ac44_460f_9b01_97570779d9f4.slice. 
Nov 1 00:17:31.044326 kubelet[2502]: I1101 00:17:31.044254 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbhq\" (UniqueName: \"kubernetes.io/projected/6869fc58-ac44-460f-9b01-97570779d9f4-kube-api-access-wwbhq\") pod \"tigera-operator-65cdcdfd6d-nvrc6\" (UID: \"6869fc58-ac44-460f-9b01-97570779d9f4\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-nvrc6" Nov 1 00:17:31.044326 kubelet[2502]: I1101 00:17:31.044320 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6869fc58-ac44-460f-9b01-97570779d9f4-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-nvrc6\" (UID: \"6869fc58-ac44-460f-9b01-97570779d9f4\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-nvrc6" Nov 1 00:17:31.078677 kubelet[2502]: E1101 00:17:31.078308 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:31.080375 containerd[1471]: time="2025-11-01T00:17:31.080286284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtzbj,Uid:49823c64-fd36-4227-b056-39973242f31b,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:31.114903 containerd[1471]: time="2025-11-01T00:17:31.114536858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:31.114903 containerd[1471]: time="2025-11-01T00:17:31.114675254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:31.114903 containerd[1471]: time="2025-11-01T00:17:31.114698276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:31.114903 containerd[1471]: time="2025-11-01T00:17:31.114839430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:31.150092 systemd[1]: Started cri-containerd-8fb0cdc7cf50aea6c97e7867c789bdad6b2d7c3a646d1e30acf6c2b793f1773b.scope - libcontainer container 8fb0cdc7cf50aea6c97e7867c789bdad6b2d7c3a646d1e30acf6c2b793f1773b. 
Nov 1 00:17:31.188124 containerd[1471]: time="2025-11-01T00:17:31.188071561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtzbj,Uid:49823c64-fd36-4227-b056-39973242f31b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fb0cdc7cf50aea6c97e7867c789bdad6b2d7c3a646d1e30acf6c2b793f1773b\"" Nov 1 00:17:31.189731 kubelet[2502]: E1101 00:17:31.189697 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:31.202413 containerd[1471]: time="2025-11-01T00:17:31.201867440Z" level=info msg="CreateContainer within sandbox \"8fb0cdc7cf50aea6c97e7867c789bdad6b2d7c3a646d1e30acf6c2b793f1773b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:17:31.224996 containerd[1471]: time="2025-11-01T00:17:31.224484119Z" level=info msg="CreateContainer within sandbox \"8fb0cdc7cf50aea6c97e7867c789bdad6b2d7c3a646d1e30acf6c2b793f1773b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2365a9dbdb3b379ef276b4bd19482440e8eb383948c7707ed199342ce5dfa5d\"" Nov 1 00:17:31.225991 containerd[1471]: time="2025-11-01T00:17:31.225662398Z" level=info msg="StartContainer for \"c2365a9dbdb3b379ef276b4bd19482440e8eb383948c7707ed199342ce5dfa5d\"" Nov 1 00:17:31.259878 systemd[1]: Started cri-containerd-c2365a9dbdb3b379ef276b4bd19482440e8eb383948c7707ed199342ce5dfa5d.scope - libcontainer container c2365a9dbdb3b379ef276b4bd19482440e8eb383948c7707ed199342ce5dfa5d. Nov 1 00:17:31.301244 containerd[1471]: time="2025-11-01T00:17:31.300905134Z" level=info msg="StartContainer for \"c2365a9dbdb3b379ef276b4bd19482440e8eb383948c7707ed199342ce5dfa5d\" returns successfully" Nov 1 00:17:31.330157 containerd[1471]: time="2025-11-01T00:17:31.330091497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-nvrc6,Uid:6869fc58-ac44-460f-9b01-97570779d9f4,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:17:31.368797 containerd[1471]: time="2025-11-01T00:17:31.367794826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:31.368797 containerd[1471]: time="2025-11-01T00:17:31.367961359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:31.368797 containerd[1471]: time="2025-11-01T00:17:31.367999731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:31.368797 containerd[1471]: time="2025-11-01T00:17:31.368424512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:31.395867 systemd[1]: Started cri-containerd-94ffb8170296d71fff05bc035f66a4470664e9087294d035a05a6a0344517159.scope - libcontainer container 94ffb8170296d71fff05bc035f66a4470664e9087294d035a05a6a0344517159. 
Nov 1 00:17:31.482851 containerd[1471]: time="2025-11-01T00:17:31.482229193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-nvrc6,Uid:6869fc58-ac44-460f-9b01-97570779d9f4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"94ffb8170296d71fff05bc035f66a4470664e9087294d035a05a6a0344517159\"" Nov 1 00:17:31.487100 containerd[1471]: time="2025-11-01T00:17:31.486824166Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:17:31.526393 systemd-timesyncd[1343]: Contacted time server 172.235.60.8:123 (2.flatcar.pool.ntp.org). Nov 1 00:17:31.529829 systemd-timesyncd[1343]: Initial clock synchronization to Sat 2025-11-01 00:17:31.896207 UTC. Nov 1 00:17:31.574378 kubelet[2502]: E1101 00:17:31.573991 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:32.015437 kubelet[2502]: E1101 00:17:32.015367 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:32.048823 kubelet[2502]: I1101 00:17:32.048368 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gtzbj" podStartSLOduration=2.048345032 podStartE2EDuration="2.048345032s" podCreationTimestamp="2025-11-01 00:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:17:31.599539682 +0000 UTC m=+6.263166717" watchObservedRunningTime="2025-11-01 00:17:32.048345032 +0000 UTC m=+6.711972067" Nov 1 00:17:32.576734 kubelet[2502]: E1101 00:17:32.576422 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:32.786567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019382864.mount: Deactivated successfully. Nov 1 00:17:33.296223 systemd[1]: Started sshd@7-146.190.126.63:22-37.235.132.169:59211.service - OpenSSH per-connection server daemon (37.235.132.169:59211). Nov 1 00:17:33.448500 sshd[2808]: banner exchange: Connection from 37.235.132.169 port 59211: invalid format Nov 1 00:17:33.451497 systemd[1]: sshd@7-146.190.126.63:22-37.235.132.169:59211.service: Deactivated successfully. 
Nov 1 00:17:33.583618 kubelet[2502]: E1101 00:17:33.583153 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:33.617936 containerd[1471]: time="2025-11-01T00:17:33.617862005Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:33.619408 containerd[1471]: time="2025-11-01T00:17:33.619203061Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:17:33.621271 containerd[1471]: time="2025-11-01T00:17:33.620016814Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:33.623038 containerd[1471]: time="2025-11-01T00:17:33.622987531Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:33.623738 containerd[1471]: time="2025-11-01T00:17:33.623708342Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.136838333s" Nov 1 00:17:33.623841 containerd[1471]: time="2025-11-01T00:17:33.623826607Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:17:33.632043 containerd[1471]: time="2025-11-01T00:17:33.631999651Z" level=info msg="CreateContainer within sandbox \"94ffb8170296d71fff05bc035f66a4470664e9087294d035a05a6a0344517159\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:17:33.652068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount551584434.mount: Deactivated successfully. Nov 1 00:17:33.655736 containerd[1471]: time="2025-11-01T00:17:33.655628876Z" level=info msg="CreateContainer within sandbox \"94ffb8170296d71fff05bc035f66a4470664e9087294d035a05a6a0344517159\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fdddb3ffa55b205e90d314a1f792700ffd5d2652a5eefc060cc044891c507916\"" Nov 1 00:17:33.658078 containerd[1471]: time="2025-11-01T00:17:33.657396600Z" level=info msg="StartContainer for \"fdddb3ffa55b205e90d314a1f792700ffd5d2652a5eefc060cc044891c507916\"" Nov 1 00:17:33.708011 systemd[1]: Started cri-containerd-fdddb3ffa55b205e90d314a1f792700ffd5d2652a5eefc060cc044891c507916.scope - libcontainer container fdddb3ffa55b205e90d314a1f792700ffd5d2652a5eefc060cc044891c507916. 
Nov 1 00:17:33.717691 kubelet[2502]: E1101 00:17:33.717630 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:33.768358 containerd[1471]: time="2025-11-01T00:17:33.768046429Z" level=info msg="StartContainer for \"fdddb3ffa55b205e90d314a1f792700ffd5d2652a5eefc060cc044891c507916\" returns successfully" Nov 1 00:17:34.585717 kubelet[2502]: E1101 00:17:34.585625 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:34.613109 kubelet[2502]: I1101 00:17:34.611179 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-nvrc6" podStartSLOduration=2.471747917 podStartE2EDuration="4.611161264s" podCreationTimestamp="2025-11-01 00:17:30 +0000 UTC" firstStartedPulling="2025-11-01 00:17:31.486161674 +0000 UTC m=+6.149788690" lastFinishedPulling="2025-11-01 00:17:33.625575032 +0000 UTC m=+8.289202037" observedRunningTime="2025-11-01 00:17:34.611145024 +0000 UTC m=+9.274772059" watchObservedRunningTime="2025-11-01 00:17:34.611161264 +0000 UTC m=+9.274788295" Nov 1 00:17:36.589535 kubelet[2502]: E1101 00:17:36.587990 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:37.594983 kubelet[2502]: E1101 00:17:37.594944 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:39.418361 update_engine[1447]: I20251101 00:17:39.418172 1447 update_attempter.cc:509] Updating boot flags... Nov 1 00:17:39.451690 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2885) Nov 1 00:17:39.543976 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2887) Nov 1 00:17:40.259601 sudo[1652]: pam_unix(sudo:session): session closed for user root Nov 1 00:17:40.268850 sshd[1649]: pam_unix(sshd:session): session closed for user core Nov 1 00:17:40.275593 systemd[1]: sshd@6-146.190.126.63:22-139.178.68.195:38200.service: Deactivated successfully. Nov 1 00:17:40.280244 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:17:40.280898 systemd[1]: session-7.scope: Consumed 6.686s CPU time, 148.0M memory peak, 0B memory swap peak. Nov 1 00:17:40.283551 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:17:40.285565 systemd-logind[1445]: Removed session 7. Nov 1 00:17:46.407516 systemd[1]: Created slice kubepods-besteffort-pod9f8717e8_588b_47b6_afb7_d614addd891d.slice - libcontainer container kubepods-besteffort-pod9f8717e8_588b_47b6_afb7_d614addd891d.slice. 
Nov 1 00:17:46.448947 kubelet[2502]: I1101 00:17:46.448705 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f8717e8-588b-47b6-afb7-d614addd891d-tigera-ca-bundle\") pod \"calico-typha-5c64cc5fbf-smnn6\" (UID: \"9f8717e8-588b-47b6-afb7-d614addd891d\") " pod="calico-system/calico-typha-5c64cc5fbf-smnn6" Nov 1 00:17:46.448947 kubelet[2502]: I1101 00:17:46.448838 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f8717e8-588b-47b6-afb7-d614addd891d-typha-certs\") pod \"calico-typha-5c64cc5fbf-smnn6\" (UID: \"9f8717e8-588b-47b6-afb7-d614addd891d\") " pod="calico-system/calico-typha-5c64cc5fbf-smnn6" Nov 1 00:17:46.448947 kubelet[2502]: I1101 00:17:46.448884 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fsn7\" (UniqueName: \"kubernetes.io/projected/9f8717e8-588b-47b6-afb7-d614addd891d-kube-api-access-7fsn7\") pod \"calico-typha-5c64cc5fbf-smnn6\" (UID: \"9f8717e8-588b-47b6-afb7-d614addd891d\") " pod="calico-system/calico-typha-5c64cc5fbf-smnn6" Nov 1 00:17:46.611594 systemd[1]: Created slice kubepods-besteffort-podafc38e5c_e562_4e75_8d64_790492a79426.slice - libcontainer container kubepods-besteffort-podafc38e5c_e562_4e75_8d64_790492a79426.slice. Nov 1 00:17:46.650042 kubelet[2502]: I1101 00:17:46.649896 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-lib-modules\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650042 kubelet[2502]: I1101 00:17:46.649955 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-policysync\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650042 kubelet[2502]: I1101 00:17:46.650025 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-var-lib-calico\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650292 kubelet[2502]: I1101 00:17:46.650081 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-flexvol-driver-host\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650292 kubelet[2502]: I1101 00:17:46.650101 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2lwp\" (UniqueName: \"kubernetes.io/projected/afc38e5c-e562-4e75-8d64-790492a79426-kube-api-access-n2lwp\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650292 kubelet[2502]: I1101 00:17:46.650119 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-xtables-lock\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650292 kubelet[2502]: I1101 00:17:46.650135 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-cni-bin-dir\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650292 kubelet[2502]: I1101 00:17:46.650160 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/afc38e5c-e562-4e75-8d64-790492a79426-node-certs\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650414 kubelet[2502]: I1101 00:17:46.650177 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-cni-log-dir\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650414 kubelet[2502]: I1101 00:17:46.650191 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afc38e5c-e562-4e75-8d64-790492a79426-tigera-ca-bundle\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650414 kubelet[2502]: I1101 00:17:46.650209 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-cni-net-dir\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.650414 kubelet[2502]: I1101 00:17:46.650226 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/afc38e5c-e562-4e75-8d64-790492a79426-var-run-calico\") pod \"calico-node-tv8zb\" (UID: \"afc38e5c-e562-4e75-8d64-790492a79426\") " pod="calico-system/calico-node-tv8zb" Nov 1 00:17:46.726134 kubelet[2502]: E1101 00:17:46.723507 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:46.727069 containerd[1471]: time="2025-11-01T00:17:46.726835135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c64cc5fbf-smnn6,Uid:9f8717e8-588b-47b6-afb7-d614addd891d,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:46.756746 kubelet[2502]: E1101 00:17:46.756513 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.756746 kubelet[2502]: W1101 00:17:46.756542 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.757997 kubelet[2502]: E1101 00:17:46.756565 2502 plugins.go:697] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.762136 kubelet[2502]: E1101 00:17:46.758913 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.762136 kubelet[2502]: W1101 00:17:46.761993 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.762136 kubelet[2502]: E1101 00:17:46.762028 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.763910 kubelet[2502]: E1101 00:17:46.763418 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.763910 kubelet[2502]: W1101 00:17:46.763437 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.763910 kubelet[2502]: E1101 00:17:46.763455 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.764259 kubelet[2502]: E1101 00:17:46.764139 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.764259 kubelet[2502]: W1101 00:17:46.764153 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.764259 kubelet[2502]: E1101 00:17:46.764167 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.765453 kubelet[2502]: E1101 00:17:46.765230 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.765453 kubelet[2502]: W1101 00:17:46.765271 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.765453 kubelet[2502]: E1101 00:17:46.765286 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.765739 kubelet[2502]: E1101 00:17:46.765611 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.765739 kubelet[2502]: W1101 00:17:46.765621 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.765739 kubelet[2502]: E1101 00:17:46.765662 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.766072 kubelet[2502]: E1101 00:17:46.765933 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.766072 kubelet[2502]: W1101 00:17:46.765944 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.766072 kubelet[2502]: E1101 00:17:46.765961 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.767414 kubelet[2502]: E1101 00:17:46.767201 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.767414 kubelet[2502]: W1101 00:17:46.767220 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.767414 kubelet[2502]: E1101 00:17:46.767233 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.768495 kubelet[2502]: E1101 00:17:46.768252 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.768495 kubelet[2502]: W1101 00:17:46.768271 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.768495 kubelet[2502]: E1101 00:17:46.768287 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.769477 kubelet[2502]: E1101 00:17:46.769205 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.769477 kubelet[2502]: W1101 00:17:46.769225 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.769477 kubelet[2502]: E1101 00:17:46.769242 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.770860 kubelet[2502]: E1101 00:17:46.769733 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.770860 kubelet[2502]: W1101 00:17:46.769747 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.770860 kubelet[2502]: E1101 00:17:46.769762 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.771532 kubelet[2502]: E1101 00:17:46.771086 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.771532 kubelet[2502]: W1101 00:17:46.771103 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.771532 kubelet[2502]: E1101 00:17:46.771120 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.772045 kubelet[2502]: E1101 00:17:46.772027 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.772950 kubelet[2502]: W1101 00:17:46.772748 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.772950 kubelet[2502]: E1101 00:17:46.772772 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.773204 kubelet[2502]: E1101 00:17:46.773192 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.773267 kubelet[2502]: W1101 00:17:46.773258 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.773349 kubelet[2502]: E1101 00:17:46.773340 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.774061 kubelet[2502]: E1101 00:17:46.774047 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.774190 kubelet[2502]: W1101 00:17:46.774150 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.774285 kubelet[2502]: E1101 00:17:46.774273 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.776155 kubelet[2502]: E1101 00:17:46.775747 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.776155 kubelet[2502]: W1101 00:17:46.775764 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.776155 kubelet[2502]: E1101 00:17:46.775776 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.777554 kubelet[2502]: E1101 00:17:46.777230 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.777554 kubelet[2502]: W1101 00:17:46.777316 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.777554 kubelet[2502]: E1101 00:17:46.777333 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.778066 kubelet[2502]: E1101 00:17:46.778000 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.778066 kubelet[2502]: W1101 00:17:46.778016 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.778066 kubelet[2502]: E1101 00:17:46.778033 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.787843 containerd[1471]: time="2025-11-01T00:17:46.785798162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:46.787843 containerd[1471]: time="2025-11-01T00:17:46.785875909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:46.787843 containerd[1471]: time="2025-11-01T00:17:46.785923641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:46.788271 containerd[1471]: time="2025-11-01T00:17:46.787920158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:46.822103 kubelet[2502]: E1101 00:17:46.822053 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:17:46.829708 kubelet[2502]: E1101 00:17:46.829578 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.829708 kubelet[2502]: W1101 00:17:46.829601 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.829863 systemd[1]: Started cri-containerd-6a0b7996e5ecd6955589d7ef8cd4c513174563b3fc4a141157e8cb81a2f07b59.scope - libcontainer container 6a0b7996e5ecd6955589d7ef8cd4c513174563b3fc4a141157e8cb81a2f07b59. Nov 1 00:17:46.831678 kubelet[2502]: E1101 00:17:46.830112 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.840591 kubelet[2502]: E1101 00:17:46.840556 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.841924 kubelet[2502]: W1101 00:17:46.841687 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.841924 kubelet[2502]: E1101 00:17:46.841721 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.842434 kubelet[2502]: E1101 00:17:46.842333 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.842767 kubelet[2502]: W1101 00:17:46.842516 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.842767 kubelet[2502]: E1101 00:17:46.842553 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.842886 kubelet[2502]: E1101 00:17:46.842870 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.843037 kubelet[2502]: W1101 00:17:46.842906 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.843037 kubelet[2502]: E1101 00:17:46.842922 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.843863 kubelet[2502]: E1101 00:17:46.843811 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.843863 kubelet[2502]: W1101 00:17:46.843828 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.843863 kubelet[2502]: E1101 00:17:46.843849 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.844081 kubelet[2502]: E1101 00:17:46.844057 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.844081 kubelet[2502]: W1101 00:17:46.844069 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.844081 kubelet[2502]: E1101 00:17:46.844078 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.844726 kubelet[2502]: E1101 00:17:46.844709 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.844803 kubelet[2502]: W1101 00:17:46.844740 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.844803 kubelet[2502]: E1101 00:17:46.844753 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.844962 kubelet[2502]: E1101 00:17:46.844948 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.844962 kubelet[2502]: W1101 00:17:46.844959 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.845716 kubelet[2502]: E1101 00:17:46.845691 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.846033 kubelet[2502]: E1101 00:17:46.845963 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.846033 kubelet[2502]: W1101 00:17:46.845976 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.846033 kubelet[2502]: E1101 00:17:46.845987 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.846242 kubelet[2502]: E1101 00:17:46.846221 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.846521 kubelet[2502]: W1101 00:17:46.846248 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.846521 kubelet[2502]: E1101 00:17:46.846258 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.846724 kubelet[2502]: E1101 00:17:46.846709 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.846754 kubelet[2502]: W1101 00:17:46.846725 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.846754 kubelet[2502]: E1101 00:17:46.846735 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.847828 kubelet[2502]: E1101 00:17:46.847802 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.847828 kubelet[2502]: W1101 00:17:46.847817 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.847934 kubelet[2502]: E1101 00:17:46.847838 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.848027 kubelet[2502]: E1101 00:17:46.848010 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.848027 kubelet[2502]: W1101 00:17:46.848017 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.848027 kubelet[2502]: E1101 00:17:46.848026 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.848622 kubelet[2502]: E1101 00:17:46.848181 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.848622 kubelet[2502]: W1101 00:17:46.848187 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.848622 kubelet[2502]: E1101 00:17:46.848196 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.848622 kubelet[2502]: E1101 00:17:46.848320 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.848622 kubelet[2502]: W1101 00:17:46.848327 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.848622 kubelet[2502]: E1101 00:17:46.848335 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.848824 kubelet[2502]: E1101 00:17:46.848792 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.848824 kubelet[2502]: W1101 00:17:46.848802 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.848824 kubelet[2502]: E1101 00:17:46.848812 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.849876 kubelet[2502]: E1101 00:17:46.849855 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.849876 kubelet[2502]: W1101 00:17:46.849875 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.850025 kubelet[2502]: E1101 00:17:46.849890 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.850251 kubelet[2502]: E1101 00:17:46.850086 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.850251 kubelet[2502]: W1101 00:17:46.850093 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.850251 kubelet[2502]: E1101 00:17:46.850102 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.850455 kubelet[2502]: E1101 00:17:46.850277 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.850455 kubelet[2502]: W1101 00:17:46.850291 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.850455 kubelet[2502]: E1101 00:17:46.850302 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.850812 kubelet[2502]: E1101 00:17:46.850796 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.850812 kubelet[2502]: W1101 00:17:46.850811 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.850896 kubelet[2502]: E1101 00:17:46.850822 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.851690 kubelet[2502]: E1101 00:17:46.851673 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.851690 kubelet[2502]: W1101 00:17:46.851689 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.851690 kubelet[2502]: E1101 00:17:46.851701 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.853517 kubelet[2502]: E1101 00:17:46.853498 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.853853 kubelet[2502]: W1101 00:17:46.853650 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.853853 kubelet[2502]: E1101 00:17:46.853670 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.853853 kubelet[2502]: I1101 00:17:46.853696 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fcdc505d-5cce-492c-9f5d-b001efaf66ff-kubelet-dir\") pod \"csi-node-driver-ntlzm\" (UID: \"fcdc505d-5cce-492c-9f5d-b001efaf66ff\") " pod="calico-system/csi-node-driver-ntlzm" Nov 1 00:17:46.854531 kubelet[2502]: E1101 00:17:46.854353 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.854531 kubelet[2502]: W1101 00:17:46.854368 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.854531 kubelet[2502]: E1101 00:17:46.854381 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.854531 kubelet[2502]: I1101 00:17:46.854413 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fcdc505d-5cce-492c-9f5d-b001efaf66ff-socket-dir\") pod \"csi-node-driver-ntlzm\" (UID: \"fcdc505d-5cce-492c-9f5d-b001efaf66ff\") " pod="calico-system/csi-node-driver-ntlzm" Nov 1 00:17:46.854863 kubelet[2502]: E1101 00:17:46.854823 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.854963 kubelet[2502]: W1101 00:17:46.854943 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.855136 kubelet[2502]: E1101 00:17:46.855121 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.855193 kubelet[2502]: I1101 00:17:46.855181 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbvpv\" (UniqueName: \"kubernetes.io/projected/fcdc505d-5cce-492c-9f5d-b001efaf66ff-kube-api-access-xbvpv\") pod \"csi-node-driver-ntlzm\" (UID: \"fcdc505d-5cce-492c-9f5d-b001efaf66ff\") " pod="calico-system/csi-node-driver-ntlzm" Nov 1 00:17:46.855928 kubelet[2502]: E1101 00:17:46.855674 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.855928 kubelet[2502]: W1101 00:17:46.855698 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.855928 kubelet[2502]: E1101 00:17:46.855710 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.855928 kubelet[2502]: I1101 00:17:46.855736 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fcdc505d-5cce-492c-9f5d-b001efaf66ff-registration-dir\") pod \"csi-node-driver-ntlzm\" (UID: \"fcdc505d-5cce-492c-9f5d-b001efaf66ff\") " pod="calico-system/csi-node-driver-ntlzm" Nov 1 00:17:46.856248 kubelet[2502]: E1101 00:17:46.856225 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.856548 kubelet[2502]: W1101 00:17:46.856390 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.856548 kubelet[2502]: E1101 00:17:46.856408 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.857260 kubelet[2502]: I1101 00:17:46.856985 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fcdc505d-5cce-492c-9f5d-b001efaf66ff-varrun\") pod \"csi-node-driver-ntlzm\" (UID: \"fcdc505d-5cce-492c-9f5d-b001efaf66ff\") " pod="calico-system/csi-node-driver-ntlzm" Nov 1 00:17:46.857260 kubelet[2502]: E1101 00:17:46.857127 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.857260 kubelet[2502]: W1101 00:17:46.857137 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.857260 kubelet[2502]: E1101 00:17:46.857152 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.857606 kubelet[2502]: E1101 00:17:46.857594 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.857730 kubelet[2502]: W1101 00:17:46.857706 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.857888 kubelet[2502]: E1101 00:17:46.857804 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.858218 kubelet[2502]: E1101 00:17:46.858206 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.858379 kubelet[2502]: W1101 00:17:46.858293 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.858379 kubelet[2502]: E1101 00:17:46.858309 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.859859 kubelet[2502]: E1101 00:17:46.859774 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.859859 kubelet[2502]: W1101 00:17:46.859790 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.859859 kubelet[2502]: E1101 00:17:46.859808 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.860290 kubelet[2502]: E1101 00:17:46.860188 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.860290 kubelet[2502]: W1101 00:17:46.860204 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.860290 kubelet[2502]: E1101 00:17:46.860219 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.860779 kubelet[2502]: E1101 00:17:46.860597 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.860779 kubelet[2502]: W1101 00:17:46.860608 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.860779 kubelet[2502]: E1101 00:17:46.860620 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.860911 kubelet[2502]: E1101 00:17:46.860902 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.861024 kubelet[2502]: W1101 00:17:46.860960 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.861024 kubelet[2502]: E1101 00:17:46.860975 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.861413 kubelet[2502]: E1101 00:17:46.861309 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.861413 kubelet[2502]: W1101 00:17:46.861320 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.861413 kubelet[2502]: E1101 00:17:46.861330 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.861551 kubelet[2502]: E1101 00:17:46.861543 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.861668 kubelet[2502]: W1101 00:17:46.861596 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.861668 kubelet[2502]: E1101 00:17:46.861609 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.861965 kubelet[2502]: E1101 00:17:46.861906 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.861965 kubelet[2502]: W1101 00:17:46.861917 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.861965 kubelet[2502]: E1101 00:17:46.861927 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.918075 kubelet[2502]: E1101 00:17:46.918031 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:46.919672 containerd[1471]: time="2025-11-01T00:17:46.919604266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tv8zb,Uid:afc38e5c-e562-4e75-8d64-790492a79426,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:46.958275 kubelet[2502]: E1101 00:17:46.958133 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.958275 kubelet[2502]: W1101 00:17:46.958158 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.958275 kubelet[2502]: E1101 00:17:46.958178 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.958902 kubelet[2502]: E1101 00:17:46.958666 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.958902 kubelet[2502]: W1101 00:17:46.958683 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.958902 kubelet[2502]: E1101 00:17:46.958695 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.959386 kubelet[2502]: E1101 00:17:46.959196 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.959386 kubelet[2502]: W1101 00:17:46.959211 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.959386 kubelet[2502]: E1101 00:17:46.959223 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.959868 kubelet[2502]: E1101 00:17:46.959777 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.959868 kubelet[2502]: W1101 00:17:46.959792 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.959868 kubelet[2502]: E1101 00:17:46.959805 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.960356 kubelet[2502]: E1101 00:17:46.960321 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.960356 kubelet[2502]: W1101 00:17:46.960336 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.961035 kubelet[2502]: E1101 00:17:46.960448 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.961035 kubelet[2502]: E1101 00:17:46.960949 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.961035 kubelet[2502]: W1101 00:17:46.960959 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.961035 kubelet[2502]: E1101 00:17:46.960970 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.961611 kubelet[2502]: E1101 00:17:46.961411 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.961611 kubelet[2502]: W1101 00:17:46.961427 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.961611 kubelet[2502]: E1101 00:17:46.961439 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.962715 kubelet[2502]: E1101 00:17:46.961946 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.962715 kubelet[2502]: W1101 00:17:46.961961 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.962715 kubelet[2502]: E1101 00:17:46.961974 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.962715 kubelet[2502]: E1101 00:17:46.962415 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.962715 kubelet[2502]: W1101 00:17:46.962425 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.962715 kubelet[2502]: E1101 00:17:46.962435 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.962911 kubelet[2502]: E1101 00:17:46.962789 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.962911 kubelet[2502]: W1101 00:17:46.962799 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.962911 kubelet[2502]: E1101 00:17:46.962810 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.963533 kubelet[2502]: E1101 00:17:46.963260 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.963533 kubelet[2502]: W1101 00:17:46.963275 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.963533 kubelet[2502]: E1101 00:17:46.963285 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.963762 kubelet[2502]: E1101 00:17:46.963738 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.963762 kubelet[2502]: W1101 00:17:46.963748 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.963762 kubelet[2502]: E1101 00:17:46.963758 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.964312 kubelet[2502]: E1101 00:17:46.964249 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.964312 kubelet[2502]: W1101 00:17:46.964266 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.964312 kubelet[2502]: E1101 00:17:46.964277 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.964744 kubelet[2502]: E1101 00:17:46.964725 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.964744 kubelet[2502]: W1101 00:17:46.964740 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.964826 kubelet[2502]: E1101 00:17:46.964752 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.965246 kubelet[2502]: E1101 00:17:46.965200 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.965246 kubelet[2502]: W1101 00:17:46.965216 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.965246 kubelet[2502]: E1101 00:17:46.965227 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.965723 kubelet[2502]: E1101 00:17:46.965706 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.965723 kubelet[2502]: W1101 00:17:46.965721 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.965915 kubelet[2502]: E1101 00:17:46.965734 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.966619 kubelet[2502]: E1101 00:17:46.966597 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.966619 kubelet[2502]: W1101 00:17:46.966613 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.966841 kubelet[2502]: E1101 00:17:46.966624 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.967005 kubelet[2502]: E1101 00:17:46.966987 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.967005 kubelet[2502]: W1101 00:17:46.967003 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.967671 kubelet[2502]: E1101 00:17:46.967101 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.968553 kubelet[2502]: E1101 00:17:46.967876 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.968553 kubelet[2502]: W1101 00:17:46.967892 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.968553 kubelet[2502]: E1101 00:17:46.967904 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:46.969537 kubelet[2502]: E1101 00:17:46.969512 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.969537 kubelet[2502]: W1101 00:17:46.969533 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.969779 kubelet[2502]: E1101 00:17:46.969549 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.971085 kubelet[2502]: E1101 00:17:46.970394 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.971085 kubelet[2502]: W1101 00:17:46.970414 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.971085 kubelet[2502]: E1101 00:17:46.970426 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.972205 kubelet[2502]: E1101 00:17:46.972035 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.972205 kubelet[2502]: W1101 00:17:46.972051 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.972205 kubelet[2502]: E1101 00:17:46.972066 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.973538 kubelet[2502]: E1101 00:17:46.972804 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.973538 kubelet[2502]: W1101 00:17:46.972820 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.973538 kubelet[2502]: E1101 00:17:46.972833 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.973751 containerd[1471]: time="2025-11-01T00:17:46.973287189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:46.973751 containerd[1471]: time="2025-11-01T00:17:46.973364403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:46.973751 containerd[1471]: time="2025-11-01T00:17:46.973382174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:46.973845 kubelet[2502]: E1101 00:17:46.973732 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.973845 kubelet[2502]: W1101 00:17:46.973752 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.973845 kubelet[2502]: E1101 00:17:46.973771 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.976885 kubelet[2502]: E1101 00:17:46.976801 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:46.976885 kubelet[2502]: W1101 00:17:46.976820 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:46.976885 kubelet[2502]: E1101 00:17:46.976834 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:46.986577 containerd[1471]: time="2025-11-01T00:17:46.983009586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:47.001408 kubelet[2502]: E1101 00:17:47.001366 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:47.001408 kubelet[2502]: W1101 00:17:47.001392 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:47.001408 kubelet[2502]: E1101 00:17:47.001414 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:47.016957 containerd[1471]: time="2025-11-01T00:17:47.016839437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c64cc5fbf-smnn6,Uid:9f8717e8-588b-47b6-afb7-d614addd891d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a0b7996e5ecd6955589d7ef8cd4c513174563b3fc4a141157e8cb81a2f07b59\"" Nov 1 00:17:47.022507 kubelet[2502]: E1101 00:17:47.022345 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:47.027337 containerd[1471]: time="2025-11-01T00:17:47.027113433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:17:47.044510 systemd[1]: Started cri-containerd-f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4.scope - libcontainer container f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4. 
Nov 1 00:17:47.087657 containerd[1471]: time="2025-11-01T00:17:47.087589830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tv8zb,Uid:afc38e5c-e562-4e75-8d64-790492a79426,Namespace:calico-system,Attempt:0,} returns sandbox id \"f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4\"" Nov 1 00:17:47.089980 kubelet[2502]: E1101 00:17:47.088827 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:48.320349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475263903.mount: Deactivated successfully. Nov 1 00:17:48.525214 kubelet[2502]: E1101 00:17:48.525143 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:17:49.451155 containerd[1471]: time="2025-11-01T00:17:49.451086634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:49.452446 containerd[1471]: time="2025-11-01T00:17:49.452398585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:17:49.453220 containerd[1471]: time="2025-11-01T00:17:49.453185318Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:49.455665 containerd[1471]: time="2025-11-01T00:17:49.455062746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:49.455890 containerd[1471]: time="2025-11-01T00:17:49.455863962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.4287109s" Nov 1 00:17:49.455959 containerd[1471]: time="2025-11-01T00:17:49.455947005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:17:49.458384 containerd[1471]: time="2025-11-01T00:17:49.458345806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:17:49.517609 containerd[1471]: time="2025-11-01T00:17:49.517559953Z" level=info msg="CreateContainer within sandbox \"6a0b7996e5ecd6955589d7ef8cd4c513174563b3fc4a141157e8cb81a2f07b59\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:17:49.541513 containerd[1471]: time="2025-11-01T00:17:49.541468895Z" level=info msg="CreateContainer within sandbox \"6a0b7996e5ecd6955589d7ef8cd4c513174563b3fc4a141157e8cb81a2f07b59\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ee88e0bc9532212701621a255571ae139330a5ca446d4bc21a1498db7f37bc22\"" Nov 1 00:17:49.542530 containerd[1471]: time="2025-11-01T00:17:49.542490519Z" level=info 
msg="StartContainer for \"ee88e0bc9532212701621a255571ae139330a5ca446d4bc21a1498db7f37bc22\"" Nov 1 00:17:49.597250 systemd[1]: Started cri-containerd-ee88e0bc9532212701621a255571ae139330a5ca446d4bc21a1498db7f37bc22.scope - libcontainer container ee88e0bc9532212701621a255571ae139330a5ca446d4bc21a1498db7f37bc22. Nov 1 00:17:49.683415 containerd[1471]: time="2025-11-01T00:17:49.682844456Z" level=info msg="StartContainer for \"ee88e0bc9532212701621a255571ae139330a5ca446d4bc21a1498db7f37bc22\" returns successfully" Nov 1 00:17:50.524311 kubelet[2502]: E1101 00:17:50.524208 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:17:50.646748 kubelet[2502]: E1101 00:17:50.643343 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:50.666153 kubelet[2502]: I1101 00:17:50.666087 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c64cc5fbf-smnn6" podStartSLOduration=2.234716518 podStartE2EDuration="4.666068137s" podCreationTimestamp="2025-11-01 00:17:46 +0000 UTC" firstStartedPulling="2025-11-01 00:17:47.025566919 +0000 UTC m=+21.689193922" lastFinishedPulling="2025-11-01 00:17:49.456918539 +0000 UTC m=+24.120545541" observedRunningTime="2025-11-01 00:17:50.665868753 +0000 UTC m=+25.329495778" watchObservedRunningTime="2025-11-01 00:17:50.666068137 +0000 UTC m=+25.329695164" Nov 1 00:17:50.687042 kubelet[2502]: E1101 00:17:50.686878 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.688518 kubelet[2502]: W1101 00:17:50.687095 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.688518 kubelet[2502]: E1101 00:17:50.687129 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.689005 kubelet[2502]: E1101 00:17:50.688802 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.689005 kubelet[2502]: W1101 00:17:50.688836 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.689005 kubelet[2502]: E1101 00:17:50.688859 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:50.689598 kubelet[2502]: E1101 00:17:50.689413 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.689598 kubelet[2502]: W1101 00:17:50.689427 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.689598 kubelet[2502]: E1101 00:17:50.689442 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.690205 kubelet[2502]: E1101 00:17:50.690087 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.690205 kubelet[2502]: W1101 00:17:50.690102 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.690205 kubelet[2502]: E1101 00:17:50.690131 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.691088 kubelet[2502]: E1101 00:17:50.690935 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.691088 kubelet[2502]: W1101 00:17:50.690948 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.691088 kubelet[2502]: E1101 00:17:50.690960 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.691944 kubelet[2502]: E1101 00:17:50.691828 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.691944 kubelet[2502]: W1101 00:17:50.691844 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.691944 kubelet[2502]: E1101 00:17:50.691856 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.692748 kubelet[2502]: E1101 00:17:50.692604 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.692748 kubelet[2502]: W1101 00:17:50.692617 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.692748 kubelet[2502]: E1101 00:17:50.692640 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:50.693151 kubelet[2502]: E1101 00:17:50.693065 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.693151 kubelet[2502]: W1101 00:17:50.693076 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.693151 kubelet[2502]: E1101 00:17:50.693086 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.693750 kubelet[2502]: E1101 00:17:50.693602 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.693750 kubelet[2502]: W1101 00:17:50.693614 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.693750 kubelet[2502]: E1101 00:17:50.693624 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.694079 kubelet[2502]: E1101 00:17:50.693961 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.694079 kubelet[2502]: W1101 00:17:50.693972 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.694079 kubelet[2502]: E1101 00:17:50.693982 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.694490 kubelet[2502]: E1101 00:17:50.694445 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.694662 kubelet[2502]: W1101 00:17:50.694552 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.694662 kubelet[2502]: E1101 00:17:50.694566 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.695426 kubelet[2502]: E1101 00:17:50.695244 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.695426 kubelet[2502]: W1101 00:17:50.695257 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.695426 kubelet[2502]: E1101 00:17:50.695268 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:50.695713 kubelet[2502]: E1101 00:17:50.695565 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.695713 kubelet[2502]: W1101 00:17:50.695574 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.695713 kubelet[2502]: E1101 00:17:50.695583 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.696168 kubelet[2502]: E1101 00:17:50.696004 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.696168 kubelet[2502]: W1101 00:17:50.696017 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.696168 kubelet[2502]: E1101 00:17:50.696123 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.696663 kubelet[2502]: E1101 00:17:50.696536 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.696663 kubelet[2502]: W1101 00:17:50.696547 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.696663 kubelet[2502]: E1101 00:17:50.696557 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.697496 kubelet[2502]: E1101 00:17:50.697337 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.697496 kubelet[2502]: W1101 00:17:50.697351 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.697496 kubelet[2502]: E1101 00:17:50.697362 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.698014 kubelet[2502]: E1101 00:17:50.697894 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.698014 kubelet[2502]: W1101 00:17:50.697924 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.698014 kubelet[2502]: E1101 00:17:50.697939 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:50.698821 kubelet[2502]: E1101 00:17:50.698439 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.698821 kubelet[2502]: W1101 00:17:50.698451 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.698821 kubelet[2502]: E1101 00:17:50.698462 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.699080 kubelet[2502]: E1101 00:17:50.699067 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.699327 kubelet[2502]: W1101 00:17:50.699214 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.699327 kubelet[2502]: E1101 00:17:50.699230 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.699804 kubelet[2502]: E1101 00:17:50.699714 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.699804 kubelet[2502]: W1101 00:17:50.699731 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.699804 kubelet[2502]: E1101 00:17:50.699744 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.700411 kubelet[2502]: E1101 00:17:50.700296 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.700411 kubelet[2502]: W1101 00:17:50.700309 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.700411 kubelet[2502]: E1101 00:17:50.700319 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.700753 kubelet[2502]: E1101 00:17:50.700724 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.701097 kubelet[2502]: W1101 00:17:50.700832 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.701097 kubelet[2502]: E1101 00:17:50.700848 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:50.701350 kubelet[2502]: E1101 00:17:50.701337 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.701457 kubelet[2502]: W1101 00:17:50.701399 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.701457 kubelet[2502]: E1101 00:17:50.701412 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.702064 kubelet[2502]: E1101 00:17:50.702052 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.702248 kubelet[2502]: W1101 00:17:50.702141 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.702248 kubelet[2502]: E1101 00:17:50.702156 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.702471 kubelet[2502]: E1101 00:17:50.702346 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.702471 kubelet[2502]: W1101 00:17:50.702353 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.702471 kubelet[2502]: E1101 00:17:50.702361 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.702741 kubelet[2502]: E1101 00:17:50.702692 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.702741 kubelet[2502]: W1101 00:17:50.702704 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.703109 kubelet[2502]: E1101 00:17:50.702831 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.703958 kubelet[2502]: E1101 00:17:50.703943 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.704156 kubelet[2502]: W1101 00:17:50.704025 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.704156 kubelet[2502]: E1101 00:17:50.704040 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:50.704364 kubelet[2502]: E1101 00:17:50.704294 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.704364 kubelet[2502]: W1101 00:17:50.704308 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.704364 kubelet[2502]: E1101 00:17:50.704318 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.705048 kubelet[2502]: E1101 00:17:50.704932 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.705048 kubelet[2502]: W1101 00:17:50.704943 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.705048 kubelet[2502]: E1101 00:17:50.704953 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.705848 kubelet[2502]: E1101 00:17:50.705660 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.705848 kubelet[2502]: W1101 00:17:50.705672 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.705848 kubelet[2502]: E1101 00:17:50.705682 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.706248 kubelet[2502]: E1101 00:17:50.706110 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.706248 kubelet[2502]: W1101 00:17:50.706120 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.706248 kubelet[2502]: E1101 00:17:50.706130 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.706690 kubelet[2502]: E1101 00:17:50.706435 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.706690 kubelet[2502]: W1101 00:17:50.706461 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.706690 kubelet[2502]: E1101 00:17:50.706477 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:50.707418 kubelet[2502]: E1101 00:17:50.707343 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:50.707418 kubelet[2502]: W1101 00:17:50.707358 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:50.707418 kubelet[2502]: E1101 00:17:50.707369 2502 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:50.767871 containerd[1471]: time="2025-11-01T00:17:50.767025731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:50.767871 containerd[1471]: time="2025-11-01T00:17:50.767809938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:17:50.768485 containerd[1471]: time="2025-11-01T00:17:50.768448111Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:50.770538 containerd[1471]: time="2025-11-01T00:17:50.770486818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:50.792606 containerd[1471]: time="2025-11-01T00:17:50.789303682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.33091704s" Nov 1 00:17:50.792606 containerd[1471]: time="2025-11-01T00:17:50.789357491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:17:50.798599 containerd[1471]: time="2025-11-01T00:17:50.798545073Z" level=info msg="CreateContainer within sandbox \"f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:17:50.814302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363911841.mount: Deactivated successfully. 
Nov 1 00:17:50.822450 containerd[1471]: time="2025-11-01T00:17:50.815052741Z" level=info msg="CreateContainer within sandbox \"f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2\"" Nov 1 00:17:50.826008 containerd[1471]: time="2025-11-01T00:17:50.823378896Z" level=info msg="StartContainer for \"010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2\"" Nov 1 00:17:50.891015 systemd[1]: Started cri-containerd-010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2.scope - libcontainer container 010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2. Nov 1 00:17:50.944685 containerd[1471]: time="2025-11-01T00:17:50.944167483Z" level=info msg="StartContainer for \"010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2\" returns successfully" Nov 1 00:17:50.958576 systemd[1]: cri-containerd-010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2.scope: Deactivated successfully. Nov 1 00:17:51.010937 containerd[1471]: time="2025-11-01T00:17:50.994866888Z" level=info msg="shim disconnected" id=010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2 namespace=k8s.io Nov 1 00:17:51.010937 containerd[1471]: time="2025-11-01T00:17:51.010925726Z" level=warning msg="cleaning up after shim disconnected" id=010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2 namespace=k8s.io Nov 1 00:17:51.010937 containerd[1471]: time="2025-11-01T00:17:51.010954922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:17:51.471528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-010b74615aebabb5fc8e73707da3dacf2e3b0b142b3fbdecfa499de6a5935cf2-rootfs.mount: Deactivated successfully. 
Nov 1 00:17:51.648109 kubelet[2502]: I1101 00:17:51.647921 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:17:51.650416 kubelet[2502]: E1101 00:17:51.648575 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:51.650416 kubelet[2502]: E1101 00:17:51.650302 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:51.651503 containerd[1471]: time="2025-11-01T00:17:51.650128272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:17:52.523459 kubelet[2502]: E1101 00:17:52.523326 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:17:53.308191 kubelet[2502]: I1101 00:17:53.307940 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:17:53.310755 kubelet[2502]: E1101 00:17:53.310209 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:53.653166 kubelet[2502]: E1101 00:17:53.653026 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:54.524325 kubelet[2502]: E1101 00:17:54.524120 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:17:55.548686 containerd[1471]: time="2025-11-01T00:17:55.548273930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:55.550274 containerd[1471]: time="2025-11-01T00:17:55.549978574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:17:55.551662 containerd[1471]: time="2025-11-01T00:17:55.550800928Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:55.553545 containerd[1471]: time="2025-11-01T00:17:55.553513903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:55.555050 containerd[1471]: time="2025-11-01T00:17:55.555021651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.904854165s" Nov 1 
00:17:55.555161 containerd[1471]: time="2025-11-01T00:17:55.555146751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:17:55.559974 containerd[1471]: time="2025-11-01T00:17:55.559936394Z" level=info msg="CreateContainer within sandbox \"f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:17:55.575476 containerd[1471]: time="2025-11-01T00:17:55.575430291Z" level=info msg="CreateContainer within sandbox \"f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504\"" Nov 1 00:17:55.576264 containerd[1471]: time="2025-11-01T00:17:55.576241293Z" level=info msg="StartContainer for \"0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504\"" Nov 1 00:17:55.633443 systemd[1]: run-containerd-runc-k8s.io-0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504-runc.gTMKJG.mount: Deactivated successfully. Nov 1 00:17:55.646950 systemd[1]: Started cri-containerd-0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504.scope - libcontainer container 0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504. Nov 1 00:17:55.721693 containerd[1471]: time="2025-11-01T00:17:55.721595342Z" level=info msg="StartContainer for \"0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504\" returns successfully" Nov 1 00:17:56.352426 systemd[1]: cri-containerd-0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504.scope: Deactivated successfully. Nov 1 00:17:56.388172 kubelet[2502]: I1101 00:17:56.388035 2502 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:17:56.390038 containerd[1471]: time="2025-11-01T00:17:56.389905689Z" level=info msg="shim disconnected" id=0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504 namespace=k8s.io Nov 1 00:17:56.390978 containerd[1471]: time="2025-11-01T00:17:56.390863088Z" level=warning msg="cleaning up after shim disconnected" id=0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504 namespace=k8s.io Nov 1 00:17:56.390978 containerd[1471]: time="2025-11-01T00:17:56.390892076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:17:56.425442 containerd[1471]: time="2025-11-01T00:17:56.425371753Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:17:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 1 00:17:56.459591 systemd[1]: Created slice kubepods-besteffort-pod385266d7_6e64_4f3b_97e7_b399fc11fb3c.slice - libcontainer container kubepods-besteffort-pod385266d7_6e64_4f3b_97e7_b399fc11fb3c.slice. Nov 1 00:17:56.478444 systemd[1]: Created slice kubepods-besteffort-pod88514d97_6a8a_4349_b2ae_0a411d3ab2a9.slice - libcontainer container kubepods-besteffort-pod88514d97_6a8a_4349_b2ae_0a411d3ab2a9.slice. Nov 1 00:17:56.494875 systemd[1]: Created slice kubepods-burstable-pod097de2b5_b860_413b_9296_b00cb2127d6e.slice - libcontainer container kubepods-burstable-pod097de2b5_b860_413b_9296_b00cb2127d6e.slice. 
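
[annotation] The recurring kubelet dns.go entries in this log record a real constraint: the glibc resolver honors at most three nameserver entries (MAXNS = 3), so when the merged nameserver list is longer, kubelet truncates it and logs "Nameserver limits exceeded". A minimal Go sketch of that cap, using illustrative values taken from the applied line in the log (note the duplicate 67.207.67.2 in the log's applied line, which shows the list is truncated rather than deduplicated); this is an approximation of the behavior the message describes, not kubelet's actual source:

    package main

    import "fmt"

    // maxNameservers mirrors the glibc resolver limit (MAXNS = 3) that the
    // "Nameserver limits exceeded" kubelet entries refer to; servers past
    // the limit are dropped from the pod's resolv.conf.
    const maxNameservers = 3

    // capNameservers keeps at most maxNameservers entries, preserving order.
    // As the applied line in the log shows, the list is cut off as-is, so a
    // duplicate within the first three entries survives the truncation.
    func capNameservers(servers []string) []string {
            if len(servers) <= maxNameservers {
                    return servers
            }
            return servers[:maxNameservers]
    }

    func main() {
            // Hypothetical input: the three servers seen in the log plus one more.
            applied := capNameservers([]string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "10.0.0.10"})
            fmt.Println(applied) // [67.207.67.2 67.207.67.3 67.207.67.2]
    }
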
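[annotation] Every sandbox failure in the entries below reduces to the same missing file: plugin type="calico" failed: stat /var/lib/calico/nodename: no such file or directory. The error text itself explains the dependency: the calico/node container writes that file once it is running and has mounted /var/lib/calico/, and until then every CNI add/delete fails, so no pod sandbox can be created or destroyed. A minimal Go sketch of the check the error message describes, assuming the plugin simply reads the file (illustrative only, not Calico's actual source):

    package main

    import (
            "fmt"
            "os"
            "strings"
    )

    // nodenameFile is the path named in the log errors below; the calico/node
    // container is expected to write the node's name here once it is running.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename approximates the check behind the repeated
    // "stat /var/lib/calico/nodename: no such file or directory" failures:
    // if the file is absent, the CNI operation errors out with the same hint
    // that appears throughout this log.
    func readNodename() (string, error) {
            data, err := os.ReadFile(nodenameFile)
            if err != nil {
                    return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
            }
            return strings.TrimSpace(string(data)), nil
    }

    func main() {
            name, err := readNodename()
            if err != nil {
                    fmt.Fprintln(os.Stderr, err)
                    os.Exit(1)
            }
            fmt.Println("nodename:", name)
    }
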
Nov 1 00:17:56.507732 systemd[1]: Created slice kubepods-besteffort-pod158163d5_4372_43a9_8b56_d89943f06f09.slice - libcontainer container kubepods-besteffort-pod158163d5_4372_43a9_8b56_d89943f06f09.slice. Nov 1 00:17:56.519526 systemd[1]: Created slice kubepods-besteffort-podb7c3b2ca_b43d_49de_8f54_e296a887af33.slice - libcontainer container kubepods-besteffort-podb7c3b2ca_b43d_49de_8f54_e296a887af33.slice. Nov 1 00:17:56.527057 systemd[1]: Created slice kubepods-besteffort-pod221e198b_355f_49c3_b36d_9c4176619bae.slice - libcontainer container kubepods-besteffort-pod221e198b_355f_49c3_b36d_9c4176619bae.slice. Nov 1 00:17:56.536150 systemd[1]: Created slice kubepods-burstable-podb8210a7f_2ccf_40c6_8962_23acfca85626.slice - libcontainer container kubepods-burstable-podb8210a7f_2ccf_40c6_8962_23acfca85626.slice. Nov 1 00:17:56.545133 systemd[1]: Created slice kubepods-besteffort-podfcdc505d_5cce_492c_9f5d_b001efaf66ff.slice - libcontainer container kubepods-besteffort-podfcdc505d_5cce_492c_9f5d_b001efaf66ff.slice. Nov 1 00:17:56.549590 kubelet[2502]: I1101 00:17:56.549535 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6672\" (UniqueName: \"kubernetes.io/projected/88514d97-6a8a-4349-b2ae-0a411d3ab2a9-kube-api-access-p6672\") pod \"calico-kube-controllers-fc444b969-zmtxl\" (UID: \"88514d97-6a8a-4349-b2ae-0a411d3ab2a9\") " pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" Nov 1 00:17:56.549590 kubelet[2502]: I1101 00:17:56.549579 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89vl4\" (UniqueName: \"kubernetes.io/projected/158163d5-4372-43a9-8b56-d89943f06f09-kube-api-access-89vl4\") pod \"calico-apiserver-849f5b77d5-72jqw\" (UID: \"158163d5-4372-43a9-8b56-d89943f06f09\") " pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" Nov 1 00:17:56.549590 kubelet[2502]: I1101 00:17:56.549599 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79z4l\" (UniqueName: \"kubernetes.io/projected/097de2b5-b860-413b-9296-b00cb2127d6e-kube-api-access-79z4l\") pod \"coredns-66bc5c9577-dvmpd\" (UID: \"097de2b5-b860-413b-9296-b00cb2127d6e\") " pod="kube-system/coredns-66bc5c9577-dvmpd" Nov 1 00:17:56.549893 kubelet[2502]: I1101 00:17:56.549617 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c3b2ca-b43d-49de-8f54-e296a887af33-config\") pod \"goldmane-7c778bb748-f2w9v\" (UID: \"b7c3b2ca-b43d-49de-8f54-e296a887af33\") " pod="calico-system/goldmane-7c778bb748-f2w9v" Nov 1 00:17:56.549893 kubelet[2502]: I1101 00:17:56.549670 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8210a7f-2ccf-40c6-8962-23acfca85626-config-volume\") pod \"coredns-66bc5c9577-xzrmj\" (UID: \"b8210a7f-2ccf-40c6-8962-23acfca85626\") " pod="kube-system/coredns-66bc5c9577-xzrmj" Nov 1 00:17:56.549893 kubelet[2502]: I1101 00:17:56.549686 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/385266d7-6e64-4f3b-97e7-b399fc11fb3c-calico-apiserver-certs\") pod \"calico-apiserver-849f5b77d5-xs24n\" (UID: \"385266d7-6e64-4f3b-97e7-b399fc11fb3c\") " pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" Nov 1 
00:17:56.549893 kubelet[2502]: I1101 00:17:56.549701 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/158163d5-4372-43a9-8b56-d89943f06f09-calico-apiserver-certs\") pod \"calico-apiserver-849f5b77d5-72jqw\" (UID: \"158163d5-4372-43a9-8b56-d89943f06f09\") " pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" Nov 1 00:17:56.549893 kubelet[2502]: I1101 00:17:56.549717 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj9mx\" (UniqueName: \"kubernetes.io/projected/b8210a7f-2ccf-40c6-8962-23acfca85626-kube-api-access-cj9mx\") pod \"coredns-66bc5c9577-xzrmj\" (UID: \"b8210a7f-2ccf-40c6-8962-23acfca85626\") " pod="kube-system/coredns-66bc5c9577-xzrmj" Nov 1 00:17:56.550059 kubelet[2502]: I1101 00:17:56.549734 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/221e198b-355f-49c3-b36d-9c4176619bae-whisker-backend-key-pair\") pod \"whisker-9d8cbc64d-vpv5g\" (UID: \"221e198b-355f-49c3-b36d-9c4176619bae\") " pod="calico-system/whisker-9d8cbc64d-vpv5g" Nov 1 00:17:56.550059 kubelet[2502]: I1101 00:17:56.549748 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/221e198b-355f-49c3-b36d-9c4176619bae-whisker-ca-bundle\") pod \"whisker-9d8cbc64d-vpv5g\" (UID: \"221e198b-355f-49c3-b36d-9c4176619bae\") " pod="calico-system/whisker-9d8cbc64d-vpv5g" Nov 1 00:17:56.550059 kubelet[2502]: I1101 00:17:56.549762 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/097de2b5-b860-413b-9296-b00cb2127d6e-config-volume\") pod \"coredns-66bc5c9577-dvmpd\" (UID: \"097de2b5-b860-413b-9296-b00cb2127d6e\") " pod="kube-system/coredns-66bc5c9577-dvmpd" Nov 1 00:17:56.550059 kubelet[2502]: I1101 00:17:56.549778 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88514d97-6a8a-4349-b2ae-0a411d3ab2a9-tigera-ca-bundle\") pod \"calico-kube-controllers-fc444b969-zmtxl\" (UID: \"88514d97-6a8a-4349-b2ae-0a411d3ab2a9\") " pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" Nov 1 00:17:56.550059 kubelet[2502]: I1101 00:17:56.549794 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7c3b2ca-b43d-49de-8f54-e296a887af33-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-f2w9v\" (UID: \"b7c3b2ca-b43d-49de-8f54-e296a887af33\") " pod="calico-system/goldmane-7c778bb748-f2w9v" Nov 1 00:17:56.550179 kubelet[2502]: I1101 00:17:56.549808 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b7c3b2ca-b43d-49de-8f54-e296a887af33-goldmane-key-pair\") pod \"goldmane-7c778bb748-f2w9v\" (UID: \"b7c3b2ca-b43d-49de-8f54-e296a887af33\") " pod="calico-system/goldmane-7c778bb748-f2w9v" Nov 1 00:17:56.550179 kubelet[2502]: I1101 00:17:56.549827 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcpnt\" (UniqueName: 
\"kubernetes.io/projected/385266d7-6e64-4f3b-97e7-b399fc11fb3c-kube-api-access-fcpnt\") pod \"calico-apiserver-849f5b77d5-xs24n\" (UID: \"385266d7-6e64-4f3b-97e7-b399fc11fb3c\") " pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" Nov 1 00:17:56.550179 kubelet[2502]: I1101 00:17:56.549840 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5sv\" (UniqueName: \"kubernetes.io/projected/221e198b-355f-49c3-b36d-9c4176619bae-kube-api-access-kd5sv\") pod \"whisker-9d8cbc64d-vpv5g\" (UID: \"221e198b-355f-49c3-b36d-9c4176619bae\") " pod="calico-system/whisker-9d8cbc64d-vpv5g" Nov 1 00:17:56.550179 kubelet[2502]: I1101 00:17:56.549859 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqcc\" (UniqueName: \"kubernetes.io/projected/b7c3b2ca-b43d-49de-8f54-e296a887af33-kube-api-access-4fqcc\") pod \"goldmane-7c778bb748-f2w9v\" (UID: \"b7c3b2ca-b43d-49de-8f54-e296a887af33\") " pod="calico-system/goldmane-7c778bb748-f2w9v" Nov 1 00:17:56.553148 containerd[1471]: time="2025-11-01T00:17:56.552730727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntlzm,Uid:fcdc505d-5cce-492c-9f5d-b001efaf66ff,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:56.574143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0be7926c41ac0127fbe84f1e90fc845490ba368d17f1ccd3d695cc33baf25504-rootfs.mount: Deactivated successfully. Nov 1 00:17:56.789023 containerd[1471]: time="2025-11-01T00:17:56.788957487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f5b77d5-xs24n,Uid:385266d7-6e64-4f3b-97e7-b399fc11fb3c,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:17:56.792324 kubelet[2502]: E1101 00:17:56.792293 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:56.796809 containerd[1471]: time="2025-11-01T00:17:56.795960063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:17:56.809533 kubelet[2502]: E1101 00:17:56.808713 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:56.809728 containerd[1471]: time="2025-11-01T00:17:56.809499247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dvmpd,Uid:097de2b5-b860-413b-9296-b00cb2127d6e,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:56.821737 containerd[1471]: time="2025-11-01T00:17:56.821686109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f5b77d5-72jqw,Uid:158163d5-4372-43a9-8b56-d89943f06f09,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:17:56.827933 containerd[1471]: time="2025-11-01T00:17:56.827887194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f2w9v,Uid:b7c3b2ca-b43d-49de-8f54-e296a887af33,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:56.837276 containerd[1471]: time="2025-11-01T00:17:56.837229836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9d8cbc64d-vpv5g,Uid:221e198b-355f-49c3-b36d-9c4176619bae,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:56.851061 kubelet[2502]: E1101 00:17:56.851003 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:17:56.878680 containerd[1471]: time="2025-11-01T00:17:56.877608920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzrmj,Uid:b8210a7f-2ccf-40c6-8962-23acfca85626,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:57.011022 containerd[1471]: time="2025-11-01T00:17:57.010483164Z" level=error msg="Failed to destroy network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.022678 containerd[1471]: time="2025-11-01T00:17:57.021665170Z" level=error msg="encountered an error cleaning up failed sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.022678 containerd[1471]: time="2025-11-01T00:17:57.021758390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntlzm,Uid:fcdc505d-5cce-492c-9f5d-b001efaf66ff,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.030304 kubelet[2502]: E1101 00:17:57.030242 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.030552 kubelet[2502]: E1101 00:17:57.030334 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ntlzm" Nov 1 00:17:57.030552 kubelet[2502]: E1101 00:17:57.030362 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ntlzm" Nov 1 00:17:57.033745 kubelet[2502]: E1101 00:17:57.033674 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ntlzm_calico-system(fcdc505d-5cce-492c-9f5d-b001efaf66ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ntlzm_calico-system(fcdc505d-5cce-492c-9f5d-b001efaf66ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:17:57.057830 containerd[1471]: time="2025-11-01T00:17:57.057238043Z" level=error msg="Failed to destroy network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.060361 containerd[1471]: time="2025-11-01T00:17:57.060316158Z" level=error msg="encountered an error cleaning up failed sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.061033 containerd[1471]: time="2025-11-01T00:17:57.060536495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f5b77d5-72jqw,Uid:158163d5-4372-43a9-8b56-d89943f06f09,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.062090 kubelet[2502]: E1101 00:17:57.061836 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.062090 kubelet[2502]: E1101 00:17:57.061893 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" Nov 1 00:17:57.062090 kubelet[2502]: E1101 00:17:57.061914 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" Nov 1 00:17:57.062233 kubelet[2502]: E1101 00:17:57.061981 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849f5b77d5-72jqw_calico-apiserver(158163d5-4372-43a9-8b56-d89943f06f09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-849f5b77d5-72jqw_calico-apiserver(158163d5-4372-43a9-8b56-d89943f06f09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09" Nov 1 00:17:57.082914 containerd[1471]: time="2025-11-01T00:17:57.082806071Z" level=error msg="Failed to destroy network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.083302 containerd[1471]: time="2025-11-01T00:17:57.083152322Z" level=error msg="encountered an error cleaning up failed sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.083302 containerd[1471]: time="2025-11-01T00:17:57.083207663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f5b77d5-xs24n,Uid:385266d7-6e64-4f3b-97e7-b399fc11fb3c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.083692 kubelet[2502]: E1101 00:17:57.083555 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.083692 kubelet[2502]: E1101 00:17:57.083671 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" Nov 1 00:17:57.083782 kubelet[2502]: E1101 00:17:57.083699 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" Nov 1 00:17:57.083818 kubelet[2502]: E1101 00:17:57.083779 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-849f5b77d5-xs24n_calico-apiserver(385266d7-6e64-4f3b-97e7-b399fc11fb3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849f5b77d5-xs24n_calico-apiserver(385266d7-6e64-4f3b-97e7-b399fc11fb3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c" Nov 1 00:17:57.092098 containerd[1471]: time="2025-11-01T00:17:57.091813699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fc444b969-zmtxl,Uid:88514d97-6a8a-4349-b2ae-0a411d3ab2a9,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:57.132926 containerd[1471]: time="2025-11-01T00:17:57.132809781Z" level=error msg="Failed to destroy network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.137077 containerd[1471]: time="2025-11-01T00:17:57.136852553Z" level=error msg="encountered an error cleaning up failed sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.138703 containerd[1471]: time="2025-11-01T00:17:57.136978299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dvmpd,Uid:097de2b5-b860-413b-9296-b00cb2127d6e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.138942 kubelet[2502]: E1101 00:17:57.138904 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.139173 kubelet[2502]: E1101 00:17:57.138973 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dvmpd" Nov 1 00:17:57.139173 kubelet[2502]: E1101 00:17:57.139025 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dvmpd" Nov 1 00:17:57.139173 kubelet[2502]: E1101 00:17:57.139143 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dvmpd_kube-system(097de2b5-b860-413b-9296-b00cb2127d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dvmpd_kube-system(097de2b5-b860-413b-9296-b00cb2127d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dvmpd" podUID="097de2b5-b860-413b-9296-b00cb2127d6e" Nov 1 00:17:57.153688 containerd[1471]: time="2025-11-01T00:17:57.152606344Z" level=error msg="Failed to destroy network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.154445 containerd[1471]: time="2025-11-01T00:17:57.154249514Z" level=error msg="encountered an error cleaning up failed sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.154585 containerd[1471]: time="2025-11-01T00:17:57.154374961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9d8cbc64d-vpv5g,Uid:221e198b-355f-49c3-b36d-9c4176619bae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.157702 kubelet[2502]: E1101 00:17:57.157489 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.157702 kubelet[2502]: E1101 00:17:57.157550 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9d8cbc64d-vpv5g" Nov 1 00:17:57.157702 kubelet[2502]: E1101 00:17:57.157571 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9d8cbc64d-vpv5g" Nov 1 00:17:57.158678 kubelet[2502]: E1101 00:17:57.157990 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9d8cbc64d-vpv5g_calico-system(221e198b-355f-49c3-b36d-9c4176619bae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9d8cbc64d-vpv5g_calico-system(221e198b-355f-49c3-b36d-9c4176619bae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9d8cbc64d-vpv5g" podUID="221e198b-355f-49c3-b36d-9c4176619bae" Nov 1 00:17:57.183725 containerd[1471]: time="2025-11-01T00:17:57.183661778Z" level=error msg="Failed to destroy network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.192188 containerd[1471]: time="2025-11-01T00:17:57.192121984Z" level=error msg="encountered an error cleaning up failed sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.192708 containerd[1471]: time="2025-11-01T00:17:57.192677398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f2w9v,Uid:b7c3b2ca-b43d-49de-8f54-e296a887af33,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.193324 kubelet[2502]: E1101 00:17:57.193279 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.193475 kubelet[2502]: E1101 00:17:57.193348 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-f2w9v" Nov 1 00:17:57.193475 kubelet[2502]: E1101 00:17:57.193374 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-f2w9v" Nov 1 00:17:57.193475 kubelet[2502]: E1101 00:17:57.193439 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-f2w9v_calico-system(b7c3b2ca-b43d-49de-8f54-e296a887af33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-f2w9v_calico-system(b7c3b2ca-b43d-49de-8f54-e296a887af33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33" Nov 1 00:17:57.215674 containerd[1471]: time="2025-11-01T00:17:57.215593073Z" level=error msg="Failed to destroy network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.216962 containerd[1471]: time="2025-11-01T00:17:57.216095091Z" level=error msg="encountered an error cleaning up failed sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.216962 containerd[1471]: time="2025-11-01T00:17:57.216174181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzrmj,Uid:b8210a7f-2ccf-40c6-8962-23acfca85626,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.217109 kubelet[2502]: E1101 00:17:57.216506 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.217109 kubelet[2502]: E1101 00:17:57.216568 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xzrmj" Nov 1 00:17:57.217109 kubelet[2502]: E1101 00:17:57.216592 2502 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xzrmj" Nov 1 00:17:57.217227 kubelet[2502]: E1101 00:17:57.216676 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xzrmj_kube-system(b8210a7f-2ccf-40c6-8962-23acfca85626)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xzrmj_kube-system(b8210a7f-2ccf-40c6-8962-23acfca85626)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xzrmj" podUID="b8210a7f-2ccf-40c6-8962-23acfca85626" Nov 1 00:17:57.241661 containerd[1471]: time="2025-11-01T00:17:57.240914808Z" level=error msg="Failed to destroy network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.241661 containerd[1471]: time="2025-11-01T00:17:57.241254787Z" level=error msg="encountered an error cleaning up failed sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.241661 containerd[1471]: time="2025-11-01T00:17:57.241321943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fc444b969-zmtxl,Uid:88514d97-6a8a-4349-b2ae-0a411d3ab2a9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.241957 kubelet[2502]: E1101 00:17:57.241552 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.241957 kubelet[2502]: E1101 00:17:57.241608 2502 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" Nov 1 00:17:57.241957 
kubelet[2502]: E1101 00:17:57.241646 2502 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" Nov 1 00:17:57.242056 kubelet[2502]: E1101 00:17:57.241715 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fc444b969-zmtxl_calico-system(88514d97-6a8a-4349-b2ae-0a411d3ab2a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fc444b969-zmtxl_calico-system(88514d97-6a8a-4349-b2ae-0a411d3ab2a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9" Nov 1 00:17:57.583871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa-shm.mount: Deactivated successfully. Nov 1 00:17:57.796118 kubelet[2502]: I1101 00:17:57.794808 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:17:57.798057 kubelet[2502]: I1101 00:17:57.797539 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:17:57.804039 containerd[1471]: time="2025-11-01T00:17:57.803429115Z" level=info msg="StopPodSandbox for \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\"" Nov 1 00:17:57.804511 containerd[1471]: time="2025-11-01T00:17:57.804259978Z" level=info msg="StopPodSandbox for \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\"" Nov 1 00:17:57.808745 containerd[1471]: time="2025-11-01T00:17:57.806001816Z" level=info msg="Ensure that sandbox 112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327 in task-service has been cleanup successfully" Nov 1 00:17:57.808745 containerd[1471]: time="2025-11-01T00:17:57.806272440Z" level=info msg="Ensure that sandbox 70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97 in task-service has been cleanup successfully" Nov 1 00:17:57.811291 kubelet[2502]: I1101 00:17:57.811259 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:17:57.813050 containerd[1471]: time="2025-11-01T00:17:57.813020507Z" level=info msg="StopPodSandbox for \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\"" Nov 1 00:17:57.815015 containerd[1471]: time="2025-11-01T00:17:57.814931664Z" level=info msg="Ensure that sandbox 781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa in task-service has been cleanup successfully" Nov 1 00:17:57.817219 kubelet[2502]: I1101 00:17:57.817190 2502 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:17:57.819671 containerd[1471]: time="2025-11-01T00:17:57.819609358Z" level=info msg="StopPodSandbox for \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\"" Nov 1 00:17:57.819898 containerd[1471]: time="2025-11-01T00:17:57.819872445Z" level=info msg="Ensure that sandbox 6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2 in task-service has been cleanup successfully" Nov 1 00:17:57.825945 kubelet[2502]: I1101 00:17:57.825910 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:17:57.827850 containerd[1471]: time="2025-11-01T00:17:57.827812883Z" level=info msg="StopPodSandbox for \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\"" Nov 1 00:17:57.828011 containerd[1471]: time="2025-11-01T00:17:57.827992002Z" level=info msg="Ensure that sandbox e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d in task-service has been cleanup successfully" Nov 1 00:17:57.835931 kubelet[2502]: I1101 00:17:57.835818 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:17:57.838830 containerd[1471]: time="2025-11-01T00:17:57.838493275Z" level=info msg="StopPodSandbox for \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\"" Nov 1 00:17:57.840197 kubelet[2502]: I1101 00:17:57.839713 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:17:57.841077 containerd[1471]: time="2025-11-01T00:17:57.840982818Z" level=info msg="StopPodSandbox for \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\"" Nov 1 00:17:57.841227 containerd[1471]: time="2025-11-01T00:17:57.841146708Z" level=info msg="Ensure that sandbox 19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64 in task-service has been cleanup successfully" Nov 1 00:17:57.842908 containerd[1471]: time="2025-11-01T00:17:57.842875076Z" level=info msg="Ensure that sandbox 8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32 in task-service has been cleanup successfully" Nov 1 00:17:57.851790 kubelet[2502]: I1101 00:17:57.851654 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:17:57.853412 containerd[1471]: time="2025-11-01T00:17:57.853283878Z" level=info msg="StopPodSandbox for \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\"" Nov 1 00:17:57.858661 containerd[1471]: time="2025-11-01T00:17:57.857382669Z" level=info msg="Ensure that sandbox 46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0 in task-service has been cleanup successfully" Nov 1 00:17:57.944741 containerd[1471]: time="2025-11-01T00:17:57.944679491Z" level=error msg="StopPodSandbox for \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\" failed" error="failed to destroy network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.944991 kubelet[2502]: E1101 00:17:57.944951 2502 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:17:57.945075 kubelet[2502]: E1101 00:17:57.945007 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97"} Nov 1 00:17:57.945075 kubelet[2502]: E1101 00:17:57.945068 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7c3b2ca-b43d-49de-8f54-e296a887af33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:57.945168 kubelet[2502]: E1101 00:17:57.945097 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7c3b2ca-b43d-49de-8f54-e296a887af33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33" Nov 1 00:17:57.970736 containerd[1471]: time="2025-11-01T00:17:57.970670852Z" level=error msg="StopPodSandbox for \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\" failed" error="failed to destroy network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.971172 kubelet[2502]: E1101 00:17:57.971133 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:17:57.971249 kubelet[2502]: E1101 00:17:57.971186 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa"} Nov 1 00:17:57.971278 kubelet[2502]: E1101 00:17:57.971259 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fcdc505d-5cce-492c-9f5d-b001efaf66ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:57.971347 kubelet[2502]: E1101 00:17:57.971290 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcdc505d-5cce-492c-9f5d-b001efaf66ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:17:57.980752 containerd[1471]: time="2025-11-01T00:17:57.980645381Z" level=error msg="StopPodSandbox for \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\" failed" error="failed to destroy network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:57.981054 kubelet[2502]: E1101 00:17:57.981012 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:17:57.981124 kubelet[2502]: E1101 00:17:57.981087 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2"} Nov 1 00:17:57.981152 kubelet[2502]: E1101 00:17:57.981123 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8210a7f-2ccf-40c6-8962-23acfca85626\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:57.981224 kubelet[2502]: E1101 00:17:57.981169 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8210a7f-2ccf-40c6-8962-23acfca85626\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xzrmj" podUID="b8210a7f-2ccf-40c6-8962-23acfca85626" Nov 1 00:17:58.002895 containerd[1471]: time="2025-11-01T00:17:58.002751410Z" level=error msg="StopPodSandbox for \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\" failed" error="failed to destroy network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:58.003703 kubelet[2502]: E1101 00:17:58.003616 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:17:58.003792 kubelet[2502]: E1101 00:17:58.003733 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327"} Nov 1 00:17:58.003792 kubelet[2502]: E1101 00:17:58.003766 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"097de2b5-b860-413b-9296-b00cb2127d6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:58.003891 kubelet[2502]: E1101 00:17:58.003823 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"097de2b5-b860-413b-9296-b00cb2127d6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dvmpd" podUID="097de2b5-b860-413b-9296-b00cb2127d6e" Nov 1 00:17:58.007714 containerd[1471]: time="2025-11-01T00:17:58.007240846Z" level=error msg="StopPodSandbox for \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\" failed" error="failed to destroy network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:58.007846 kubelet[2502]: E1101 00:17:58.007516 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:17:58.007846 kubelet[2502]: E1101 00:17:58.007563 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64"} Nov 1 00:17:58.007846 kubelet[2502]: E1101 00:17:58.007593 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88514d97-6a8a-4349-b2ae-0a411d3ab2a9\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:58.007846 kubelet[2502]: E1101 00:17:58.007622 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88514d97-6a8a-4349-b2ae-0a411d3ab2a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9" Nov 1 00:17:58.031271 containerd[1471]: time="2025-11-01T00:17:58.029713384Z" level=error msg="StopPodSandbox for \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\" failed" error="failed to destroy network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:58.031417 kubelet[2502]: E1101 00:17:58.031084 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:17:58.031417 kubelet[2502]: E1101 00:17:58.031151 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d"} Nov 1 00:17:58.031417 kubelet[2502]: E1101 00:17:58.031195 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"221e198b-355f-49c3-b36d-9c4176619bae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:58.031417 kubelet[2502]: E1101 00:17:58.031223 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"221e198b-355f-49c3-b36d-9c4176619bae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9d8cbc64d-vpv5g" podUID="221e198b-355f-49c3-b36d-9c4176619bae" Nov 1 00:17:58.033028 containerd[1471]: time="2025-11-01T00:17:58.032975534Z" level=error 
msg="StopPodSandbox for \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\" failed" error="failed to destroy network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:58.033744 kubelet[2502]: E1101 00:17:58.033424 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:17:58.033744 kubelet[2502]: E1101 00:17:58.033508 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32"} Nov 1 00:17:58.033744 kubelet[2502]: E1101 00:17:58.033571 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"385266d7-6e64-4f3b-97e7-b399fc11fb3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:58.033744 kubelet[2502]: E1101 00:17:58.033622 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"385266d7-6e64-4f3b-97e7-b399fc11fb3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c" Nov 1 00:17:58.035072 containerd[1471]: time="2025-11-01T00:17:58.035036866Z" level=error msg="StopPodSandbox for \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\" failed" error="failed to destroy network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:58.036041 kubelet[2502]: E1101 00:17:58.035886 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:17:58.036041 kubelet[2502]: E1101 00:17:58.035931 2502 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0"} Nov 1 00:17:58.036041 kubelet[2502]: E1101 00:17:58.035959 2502 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"158163d5-4372-43a9-8b56-d89943f06f09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:58.036041 kubelet[2502]: E1101 00:17:58.035988 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"158163d5-4372-43a9-8b56-d89943f06f09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09" Nov 1 00:18:01.904746 systemd[1]: Started sshd@8-146.190.126.63:22-134.199.207.61:56830.service - OpenSSH per-connection server daemon (134.199.207.61:56830). Nov 1 00:18:02.215225 sshd[3668]: Invalid user from 134.199.207.61 port 56830 Nov 1 00:18:03.208748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194593160.mount: Deactivated successfully. Nov 1 00:18:03.311184 containerd[1471]: time="2025-11-01T00:18:03.299896925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:18:03.323109 containerd[1471]: time="2025-11-01T00:18:03.322572552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:18:03.334670 containerd[1471]: time="2025-11-01T00:18:03.334587170Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:18:03.337025 containerd[1471]: time="2025-11-01T00:18:03.336923674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:18:03.338041 containerd[1471]: time="2025-11-01T00:18:03.337604862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.541590689s" Nov 1 00:18:03.338041 containerd[1471]: time="2025-11-01T00:18:03.337666909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:18:03.399228 containerd[1471]: time="2025-11-01T00:18:03.399169405Z" level=info msg="CreateContainer within sandbox \"f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:18:03.448159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501025619.mount: Deactivated successfully. Nov 1 00:18:03.464092 containerd[1471]: time="2025-11-01T00:18:03.463772831Z" level=info msg="CreateContainer within sandbox \"f92348f31de328783a90d58f6b92032e32ac3b94a703802a37c7d4b99a9ff2a4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"81a257d40748003c514abcdad1d29738b314437a0cde5739df3435f513df8ccf\"" Nov 1 00:18:03.467274 containerd[1471]: time="2025-11-01T00:18:03.467204938Z" level=info msg="StartContainer for \"81a257d40748003c514abcdad1d29738b314437a0cde5739df3435f513df8ccf\"" Nov 1 00:18:03.598114 systemd[1]: Started cri-containerd-81a257d40748003c514abcdad1d29738b314437a0cde5739df3435f513df8ccf.scope - libcontainer container 81a257d40748003c514abcdad1d29738b314437a0cde5739df3435f513df8ccf. Nov 1 00:18:03.688804 containerd[1471]: time="2025-11-01T00:18:03.687371871Z" level=info msg="StartContainer for \"81a257d40748003c514abcdad1d29738b314437a0cde5739df3435f513df8ccf\" returns successfully" Nov 1 00:18:03.866503 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:18:03.867412 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:18:03.955054 kubelet[2502]: E1101 00:18:03.955009 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:04.048287 kubelet[2502]: I1101 00:18:04.018402 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tv8zb" podStartSLOduration=1.75838362 podStartE2EDuration="18.012763508s" podCreationTimestamp="2025-11-01 00:17:46 +0000 UTC" firstStartedPulling="2025-11-01 00:17:47.090678519 +0000 UTC m=+21.754305521" lastFinishedPulling="2025-11-01 00:18:03.345058407 +0000 UTC m=+38.008685409" observedRunningTime="2025-11-01 00:18:03.997194063 +0000 UTC m=+38.660821096" watchObservedRunningTime="2025-11-01 00:18:04.012763508 +0000 UTC m=+38.676390525" Nov 1 00:18:04.121875 containerd[1471]: time="2025-11-01T00:18:04.121746237Z" level=info msg="StopPodSandbox for \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\"" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.242 [INFO][3733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.243 [INFO][3733] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" iface="eth0" netns="/var/run/netns/cni-e385e4a2-ad45-939c-244d-bc6b1c860ea2" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.244 [INFO][3733] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" iface="eth0" netns="/var/run/netns/cni-e385e4a2-ad45-939c-244d-bc6b1c860ea2" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.244 [INFO][3733] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" iface="eth0" netns="/var/run/netns/cni-e385e4a2-ad45-939c-244d-bc6b1c860ea2" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.244 [INFO][3733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.244 [INFO][3733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.348 [INFO][3742] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.349 [INFO][3742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.350 [INFO][3742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.364 [WARNING][3742] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.364 [INFO][3742] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.367 [INFO][3742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:04.372802 containerd[1471]: 2025-11-01 00:18:04.369 [INFO][3733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:04.377107 containerd[1471]: time="2025-11-01T00:18:04.376582866Z" level=info msg="TearDown network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\" successfully" Nov 1 00:18:04.377219 containerd[1471]: time="2025-11-01T00:18:04.377117025Z" level=info msg="StopPodSandbox for \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\" returns successfully" Nov 1 00:18:04.381567 systemd[1]: run-netns-cni\x2de385e4a2\x2dad45\x2d939c\x2d244d\x2dbc6b1c860ea2.mount: Deactivated successfully. 
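The burst of KillPodSandbox failures earlier in this window all share a single root cause: until the calico-node container (pulled and started at 00:18:03) is up and has written its node name, the Calico CNI plugin refuses every sandbox delete with "stat /var/lib/calico/nodename: no such file or directory". A minimal standalone sketch of that gate, assuming only the path quoted in the error text (the real check lives inside Calico's cni-plugin):

    // nodenamecheck.go - sketch of the readiness gate behind the errors above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename" // written by calico/node on startup

    func main() {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // The exact failure mode in the log: calico/node is not running yet,
            // so every StopPodSandbox fails before any network teardown happens.
            fmt.Fprintf(os.Stderr, "stat %s: %v: check that the calico/node container is running\n", nodenameFile, err)
            os.Exit(1)
        }
        fmt.Println("node name:", strings.TrimSpace(string(data)))
    }

Once the file exists, the same StopPodSandbox calls that failed at 00:17:57-58 start succeeding, which is exactly the transition visible at 00:18:04.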
Nov 1 00:18:04.522280 kubelet[2502]: I1101 00:18:04.522218 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/221e198b-355f-49c3-b36d-9c4176619bae-whisker-backend-key-pair\") pod \"221e198b-355f-49c3-b36d-9c4176619bae\" (UID: \"221e198b-355f-49c3-b36d-9c4176619bae\") " Nov 1 00:18:04.522893 kubelet[2502]: I1101 00:18:04.522549 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/221e198b-355f-49c3-b36d-9c4176619bae-whisker-ca-bundle\") pod \"221e198b-355f-49c3-b36d-9c4176619bae\" (UID: \"221e198b-355f-49c3-b36d-9c4176619bae\") " Nov 1 00:18:04.522893 kubelet[2502]: I1101 00:18:04.522596 2502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd5sv\" (UniqueName: \"kubernetes.io/projected/221e198b-355f-49c3-b36d-9c4176619bae-kube-api-access-kd5sv\") pod \"221e198b-355f-49c3-b36d-9c4176619bae\" (UID: \"221e198b-355f-49c3-b36d-9c4176619bae\") " Nov 1 00:18:04.530505 systemd[1]: var-lib-kubelet-pods-221e198b\x2d355f\x2d49c3\x2db36d\x2d9c4176619bae-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:18:04.537489 systemd[1]: var-lib-kubelet-pods-221e198b\x2d355f\x2d49c3\x2db36d\x2d9c4176619bae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkd5sv.mount: Deactivated successfully. Nov 1 00:18:04.537825 kubelet[2502]: I1101 00:18:04.534841 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/221e198b-355f-49c3-b36d-9c4176619bae-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "221e198b-355f-49c3-b36d-9c4176619bae" (UID: "221e198b-355f-49c3-b36d-9c4176619bae"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:18:04.538025 kubelet[2502]: I1101 00:18:04.537999 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221e198b-355f-49c3-b36d-9c4176619bae-kube-api-access-kd5sv" (OuterVolumeSpecName: "kube-api-access-kd5sv") pod "221e198b-355f-49c3-b36d-9c4176619bae" (UID: "221e198b-355f-49c3-b36d-9c4176619bae"). InnerVolumeSpecName "kube-api-access-kd5sv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:18:04.539733 kubelet[2502]: I1101 00:18:04.534749 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221e198b-355f-49c3-b36d-9c4176619bae-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "221e198b-355f-49c3-b36d-9c4176619bae" (UID: "221e198b-355f-49c3-b36d-9c4176619bae"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:18:04.624058 kubelet[2502]: I1101 00:18:04.623869 2502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kd5sv\" (UniqueName: \"kubernetes.io/projected/221e198b-355f-49c3-b36d-9c4176619bae-kube-api-access-kd5sv\") on node \"ci-4081.3.6-n-62dab69cc5\" DevicePath \"\"" Nov 1 00:18:04.624058 kubelet[2502]: I1101 00:18:04.623916 2502 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/221e198b-355f-49c3-b36d-9c4176619bae-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-62dab69cc5\" DevicePath \"\"" Nov 1 00:18:04.624058 kubelet[2502]: I1101 00:18:04.623930 2502 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/221e198b-355f-49c3-b36d-9c4176619bae-whisker-ca-bundle\") on node \"ci-4081.3.6-n-62dab69cc5\" DevicePath \"\"" Nov 1 00:18:04.946735 kubelet[2502]: I1101 00:18:04.945738 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:18:04.946735 kubelet[2502]: E1101 00:18:04.946193 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:04.950965 systemd[1]: Removed slice kubepods-besteffort-pod221e198b_355f_49c3_b36d_9c4176619bae.slice - libcontainer container kubepods-besteffort-pod221e198b_355f_49c3_b36d_9c4176619bae.slice. Nov 1 00:18:05.073238 systemd[1]: Created slice kubepods-besteffort-pod927c6d85_0eb1_4656_97ff_085d71b01e8e.slice - libcontainer container kubepods-besteffort-pod927c6d85_0eb1_4656_97ff_085d71b01e8e.slice. Nov 1 00:18:05.127038 kubelet[2502]: I1101 00:18:05.126881 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/927c6d85-0eb1-4656-97ff-085d71b01e8e-whisker-ca-bundle\") pod \"whisker-856547cfbb-zlkzz\" (UID: \"927c6d85-0eb1-4656-97ff-085d71b01e8e\") " pod="calico-system/whisker-856547cfbb-zlkzz" Nov 1 00:18:05.127038 kubelet[2502]: I1101 00:18:05.126942 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlzwh\" (UniqueName: \"kubernetes.io/projected/927c6d85-0eb1-4656-97ff-085d71b01e8e-kube-api-access-jlzwh\") pod \"whisker-856547cfbb-zlkzz\" (UID: \"927c6d85-0eb1-4656-97ff-085d71b01e8e\") " pod="calico-system/whisker-856547cfbb-zlkzz" Nov 1 00:18:05.127038 kubelet[2502]: I1101 00:18:05.126974 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/927c6d85-0eb1-4656-97ff-085d71b01e8e-whisker-backend-key-pair\") pod \"whisker-856547cfbb-zlkzz\" (UID: \"927c6d85-0eb1-4656-97ff-085d71b01e8e\") " pod="calico-system/whisker-856547cfbb-zlkzz" Nov 1 00:18:05.384251 containerd[1471]: time="2025-11-01T00:18:05.384201170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-856547cfbb-zlkzz,Uid:927c6d85-0eb1-4656-97ff-085d71b01e8e,Namespace:calico-system,Attempt:0,}" Nov 1 00:18:05.533502 kubelet[2502]: I1101 00:18:05.533214 2502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221e198b-355f-49c3-b36d-9c4176619bae" path="/var/lib/kubelet/pods/221e198b-355f-49c3-b36d-9c4176619bae/volumes" Nov 1 00:18:05.650177 systemd-networkd[1359]: cali22b1e4408a5: Link UP Nov 1 
00:18:05.651366 systemd-networkd[1359]: cali22b1e4408a5: Gained carrier Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.444 [INFO][3763] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.460 [INFO][3763] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0 whisker-856547cfbb- calico-system 927c6d85-0eb1-4656-97ff-085d71b01e8e 966 0 2025-11-01 00:18:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:856547cfbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-62dab69cc5 whisker-856547cfbb-zlkzz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali22b1e4408a5 [] [] }} ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Namespace="calico-system" Pod="whisker-856547cfbb-zlkzz" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.461 [INFO][3763] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Namespace="calico-system" Pod="whisker-856547cfbb-zlkzz" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.505 [INFO][3775] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" HandleID="k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.506 [INFO][3775] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" HandleID="k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-62dab69cc5", "pod":"whisker-856547cfbb-zlkzz", "timestamp":"2025-11-01 00:18:05.505454574 +0000 UTC"}, Hostname:"ci-4081.3.6-n-62dab69cc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.507 [INFO][3775] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.507 [INFO][3775] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.507 [INFO][3775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-62dab69cc5' Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.524 [INFO][3775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.535 [INFO][3775] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.566 [INFO][3775] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.569 [INFO][3775] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.580 [INFO][3775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.580 [INFO][3775] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.583 [INFO][3775] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.596 [INFO][3775] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.604 [INFO][3775] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.129/26] block=192.168.12.128/26 handle="k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.605 [INFO][3775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.129/26] handle="k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.605 [INFO][3775] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
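The IPAM transaction above follows a fixed shape: look up the host's block affinity, load 192.168.12.128/26, then claim the first free address (192.168.12.129) for the whisker pod. A standard-library sketch of the underlying CIDR arithmetic; the linear enumeration is purely illustrative, since Calico's allocator actually walks a per-block allocation bitmap:

    // blockmath.go - CIDR membership and candidate enumeration for the block above.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.12.128/26") // 64 addresses: .128-.191
        assigned := netip.MustParseAddr("192.168.12.129")

        fmt.Println("block contains assigned addr:", block.Contains(assigned)) // true

        // First few candidates after the network base, in the order a linear
        // scan of the block would consider them.
        for a, n := block.Addr().Next(), 0; block.Contains(a) && n < 3; a, n = a.Next(), n+1 {
            fmt.Println("candidate:", a)
        }
    }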
Nov 1 00:18:05.687160 containerd[1471]: 2025-11-01 00:18:05.605 [INFO][3775] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.129/26] IPv6=[] ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" HandleID="k8s-pod-network.39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" Nov 1 00:18:05.689070 containerd[1471]: 2025-11-01 00:18:05.612 [INFO][3763] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Namespace="calico-system" Pod="whisker-856547cfbb-zlkzz" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0", GenerateName:"whisker-856547cfbb-", Namespace:"calico-system", SelfLink:"", UID:"927c6d85-0eb1-4656-97ff-085d71b01e8e", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"856547cfbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"", Pod:"whisker-856547cfbb-zlkzz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22b1e4408a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:05.689070 containerd[1471]: 2025-11-01 00:18:05.613 [INFO][3763] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.129/32] ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Namespace="calico-system" Pod="whisker-856547cfbb-zlkzz" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" Nov 1 00:18:05.689070 containerd[1471]: 2025-11-01 00:18:05.613 [INFO][3763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22b1e4408a5 ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Namespace="calico-system" Pod="whisker-856547cfbb-zlkzz" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" Nov 1 00:18:05.689070 containerd[1471]: 2025-11-01 00:18:05.652 [INFO][3763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Namespace="calico-system" Pod="whisker-856547cfbb-zlkzz" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" Nov 1 00:18:05.689070 containerd[1471]: 2025-11-01 00:18:05.656 [INFO][3763] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Namespace="calico-system" 
Pod="whisker-856547cfbb-zlkzz" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0", GenerateName:"whisker-856547cfbb-", Namespace:"calico-system", SelfLink:"", UID:"927c6d85-0eb1-4656-97ff-085d71b01e8e", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"856547cfbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c", Pod:"whisker-856547cfbb-zlkzz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22b1e4408a5", MAC:"8a:96:da:c1:bb:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:05.689070 containerd[1471]: 2025-11-01 00:18:05.681 [INFO][3763] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c" Namespace="calico-system" Pod="whisker-856547cfbb-zlkzz" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--856547cfbb--zlkzz-eth0" Nov 1 00:18:05.742985 containerd[1471]: time="2025-11-01T00:18:05.737109012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:05.743124 containerd[1471]: time="2025-11-01T00:18:05.743014614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:05.743124 containerd[1471]: time="2025-11-01T00:18:05.743050742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:05.743661 containerd[1471]: time="2025-11-01T00:18:05.743280212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:05.806365 systemd[1]: Started cri-containerd-39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c.scope - libcontainer container 39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c. 
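The sandbox container has just started under its own cri-containerd-*.scope, and RunPodSandbox returns its ID just below. The kubelet drives this whole lifecycle over the CRI gRPC API on containerd's socket; a minimal client sketch using the published k8s.io/cri-api bindings. The socket path and the bare sandbox listing are illustrative assumptions, not something recorded in the log:

    // crilist.go - list pod sandboxes over CRI, the same API surface behind
    // the RunPodSandbox/StopPodSandbox messages in this log.
    // Requires the k8s.io/cri-api and google.golang.org/grpc modules.
    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, sb := range resp.Items {
            fmt.Printf("%s %s/%s state=%v\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
        }
    }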
Nov 1 00:18:05.926765 kubelet[2502]: E1101 00:18:05.925195 2502 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod927c6d85_0eb1_4656_97ff_085d71b01e8e.slice/cri-containerd-39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c.scope\": RecentStats: unable to find data in memory cache]" Nov 1 00:18:05.942443 containerd[1471]: time="2025-11-01T00:18:05.941961859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-856547cfbb-zlkzz,Uid:927c6d85-0eb1-4656-97ff-085d71b01e8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"39dd886c5073cd6cae391d27f67df4aa97049760cc9d4c1fc53d71720ca5676c\"" Nov 1 00:18:05.945679 containerd[1471]: time="2025-11-01T00:18:05.945307543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:18:06.193674 kernel: bpftool[3951]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:18:06.270076 containerd[1471]: time="2025-11-01T00:18:06.270016297Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:06.278157 containerd[1471]: time="2025-11-01T00:18:06.271150241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:18:06.278617 containerd[1471]: time="2025-11-01T00:18:06.271197478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:18:06.278716 kubelet[2502]: E1101 00:18:06.278508 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:06.283338 kubelet[2502]: E1101 00:18:06.283021 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:06.287732 kubelet[2502]: E1101 00:18:06.286447 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-856547cfbb-zlkzz_calico-system(927c6d85-0eb1-4656-97ff-085d71b01e8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:06.295403 containerd[1471]: time="2025-11-01T00:18:06.294038941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:18:06.483218 systemd-networkd[1359]: vxlan.calico: Link UP Nov 1 00:18:06.483227 systemd-networkd[1359]: vxlan.calico: Gained carrier Nov 1 00:18:06.616242 containerd[1471]: time="2025-11-01T00:18:06.615923026Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:06.618264 containerd[1471]: 
time="2025-11-01T00:18:06.617405544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:18:06.618264 containerd[1471]: time="2025-11-01T00:18:06.617490095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:06.618451 kubelet[2502]: E1101 00:18:06.617735 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:06.618451 kubelet[2502]: E1101 00:18:06.617803 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:06.618451 kubelet[2502]: E1101 00:18:06.617900 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-856547cfbb-zlkzz_calico-system(927c6d85-0eb1-4656-97ff-085d71b01e8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:06.618681 kubelet[2502]: E1101 00:18:06.617949 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-856547cfbb-zlkzz" podUID="927c6d85-0eb1-4656-97ff-085d71b01e8e" Nov 1 00:18:06.861663 kubelet[2502]: I1101 00:18:06.861076 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:18:06.861663 kubelet[2502]: E1101 00:18:06.861502 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:06.957311 kubelet[2502]: E1101 00:18:06.957116 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-856547cfbb-zlkzz" podUID="927c6d85-0eb1-4656-97ff-085d71b01e8e" Nov 1 00:18:07.390621 systemd-networkd[1359]: cali22b1e4408a5: Gained IPv6LL Nov 1 00:18:07.709819 systemd-networkd[1359]: vxlan.calico: Gained IPv6LL Nov 1 00:18:08.524857 containerd[1471]: time="2025-11-01T00:18:08.524806788Z" level=info msg="StopPodSandbox for \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\"" Nov 1 00:18:08.525373 containerd[1471]: time="2025-11-01T00:18:08.525185090Z" level=info msg="StopPodSandbox for \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\"" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.627 [INFO][4088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.628 [INFO][4088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" iface="eth0" netns="/var/run/netns/cni-1c365283-0629-0111-d5ab-4da64ccc792f" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.628 [INFO][4088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" iface="eth0" netns="/var/run/netns/cni-1c365283-0629-0111-d5ab-4da64ccc792f" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.628 [INFO][4088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" iface="eth0" netns="/var/run/netns/cni-1c365283-0629-0111-d5ab-4da64ccc792f" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.628 [INFO][4088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.628 [INFO][4088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.706 [INFO][4102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.707 [INFO][4102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.707 [INFO][4102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
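Both whisker image pulls above failed because the tags are absent from the registry (the HTTP 404 surfaces as gRPC NotFound), so the kubelet degrades from ErrImagePull on the first attempt to ImagePullBackOff with growing delays on retries. A sketch of that backoff shape; the base delay, factor, and cap here are illustrative stand-ins, not kubelet's actual tuning:

    // pullbackoff.go - the retry/backoff pattern behind ImagePullBackOff.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNotFound = errors.New("ghcr.io/flatcar/calico/whisker:v3.30.4: not found")

    // pullImage stands in for the CRI PullImage call; it always fails the way
    // the registry answered in the log.
    func pullImage(ref string) error { return errNotFound }

    func main() {
        const (
            base     = 10 * time.Second
            factor   = 2
            maxDelay = 5 * time.Minute
        )
        delay := base
        for attempt := 1; attempt <= 5; attempt++ {
            if err := pullImage("ghcr.io/flatcar/calico/whisker:v3.30.4"); err != nil {
                fmt.Printf("attempt %d: %v; backing off %v\n", attempt, err, delay)
                delay *= factor
                if delay > maxDelay {
                    delay = maxDelay
                }
                continue // a real loop would time.Sleep(delay) before retrying
            }
            return
        }
    }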
Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.714 [WARNING][4102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.715 [INFO][4102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.717 [INFO][4102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:08.728825 containerd[1471]: 2025-11-01 00:18:08.719 [INFO][4088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:08.728825 containerd[1471]: time="2025-11-01T00:18:08.728530679Z" level=info msg="TearDown network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\" successfully" Nov 1 00:18:08.728825 containerd[1471]: time="2025-11-01T00:18:08.728578741Z" level=info msg="StopPodSandbox for \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\" returns successfully" Nov 1 00:18:08.730764 systemd[1]: run-netns-cni\x2d1c365283\x2d0629\x2d0111\x2dd5ab\x2d4da64ccc792f.mount: Deactivated successfully. Nov 1 00:18:08.738308 kubelet[2502]: E1101 00:18:08.736852 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:08.738706 containerd[1471]: time="2025-11-01T00:18:08.737778941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzrmj,Uid:b8210a7f-2ccf-40c6-8962-23acfca85626,Namespace:kube-system,Attempt:1,}" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.631 [INFO][4089] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.632 [INFO][4089] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" iface="eth0" netns="/var/run/netns/cni-e92e2525-b379-bdfd-415d-1fc60ebfe9fe" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.633 [INFO][4089] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" iface="eth0" netns="/var/run/netns/cni-e92e2525-b379-bdfd-415d-1fc60ebfe9fe" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.634 [INFO][4089] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" iface="eth0" netns="/var/run/netns/cni-e92e2525-b379-bdfd-415d-1fc60ebfe9fe" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.634 [INFO][4089] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.634 [INFO][4089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.712 [INFO][4107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.713 [INFO][4107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.717 [INFO][4107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.732 [WARNING][4107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.732 [INFO][4107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.739 [INFO][4107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:08.745228 containerd[1471]: 2025-11-01 00:18:08.741 [INFO][4089] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:08.748032 containerd[1471]: time="2025-11-01T00:18:08.745387733Z" level=info msg="TearDown network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\" successfully" Nov 1 00:18:08.748032 containerd[1471]: time="2025-11-01T00:18:08.745456596Z" level=info msg="StopPodSandbox for \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\" returns successfully" Nov 1 00:18:08.749432 systemd[1]: run-netns-cni\x2de92e2525\x2db379\x2dbdfd\x2d415d\x2d1fc60ebfe9fe.mount: Deactivated successfully. 
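Unit names like run-netns-cni\x2de92e2525\x2db379\x2dbdfd\x2d415d\x2d1fc60ebfe9fe.mount are systemd's path escaping at work: "/" becomes "-" and every byte outside [a-zA-Z0-9:_.] is hex-escaped, which is why "-" shows up as \x2d and "~" (in kubernetes.io~secret earlier) as \x7e. A simplified re-implementation, as a sketch (it skips systemd's special-casing of a leading dot):

    // unitescape.go - reproduce the escaped mount-unit names seen in this log.
    package main

    import (
        "fmt"
        "strings"
    )

    func systemdEscapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Matches the unit name deactivated above (plus the ".mount" suffix).
        fmt.Println(systemdEscapePath("/run/netns/cni-e92e2525-b379-bdfd-415d-1fc60ebfe9fe") + ".mount")
    }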
Nov 1 00:18:08.751139 containerd[1471]: time="2025-11-01T00:18:08.751098433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ntlzm,Uid:fcdc505d-5cce-492c-9f5d-b001efaf66ff,Namespace:calico-system,Attempt:1,}" Nov 1 00:18:08.934212 systemd-networkd[1359]: califec287b46de: Link UP Nov 1 00:18:08.935852 systemd-networkd[1359]: califec287b46de: Gained carrier Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.828 [INFO][4117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0 csi-node-driver- calico-system fcdc505d-5cce-492c-9f5d-b001efaf66ff 995 0 2025-11-01 00:17:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-62dab69cc5 csi-node-driver-ntlzm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califec287b46de [] [] }} ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Namespace="calico-system" Pod="csi-node-driver-ntlzm" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.829 [INFO][4117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Namespace="calico-system" Pod="csi-node-driver-ntlzm" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.870 [INFO][4140] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" HandleID="k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.871 [INFO][4140] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" HandleID="k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-62dab69cc5", "pod":"csi-node-driver-ntlzm", "timestamp":"2025-11-01 00:18:08.870360275 +0000 UTC"}, Hostname:"ci-4081.3.6-n-62dab69cc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.871 [INFO][4140] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.871 [INFO][4140] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.872 [INFO][4140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-62dab69cc5' Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.883 [INFO][4140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.889 [INFO][4140] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.895 [INFO][4140] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.898 [INFO][4140] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.901 [INFO][4140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.901 [INFO][4140] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.903 [INFO][4140] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61 Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.909 [INFO][4140] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.917 [INFO][4140] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.130/26] block=192.168.12.128/26 handle="k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.918 [INFO][4140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.130/26] handle="k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.918 [INFO][4140] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
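Second allocation from the same block: csi-node-driver-ntlzm gets 192.168.12.130, the next ordinal after the whisker pod's .129. The offset arithmetic, checked with the standard library (the last-byte subtraction is valid here only because the block is a /26 contained in one octet):

    // ordinal.go - position of an assigned address inside its IPAM block.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func ordinal(block netip.Prefix, addr netip.Addr) int {
        base := block.Addr().As4()
        a := addr.As4()
        return int(a[3]) - int(base[3])
    }

    func main() {
        block := netip.MustParsePrefix("192.168.12.128/26")
        fmt.Println(ordinal(block, netip.MustParseAddr("192.168.12.129"))) // 1 (whisker)
        fmt.Println(ordinal(block, netip.MustParseAddr("192.168.12.130"))) // 2 (csi-node-driver)
    }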
Nov 1 00:18:08.962145 containerd[1471]: 2025-11-01 00:18:08.918 [INFO][4140] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.130/26] IPv6=[] ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" HandleID="k8s-pod-network.92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.963106 containerd[1471]: 2025-11-01 00:18:08.923 [INFO][4117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Namespace="calico-system" Pod="csi-node-driver-ntlzm" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcdc505d-5cce-492c-9f5d-b001efaf66ff", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"", Pod:"csi-node-driver-ntlzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califec287b46de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:08.963106 containerd[1471]: 2025-11-01 00:18:08.927 [INFO][4117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.130/32] ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Namespace="calico-system" Pod="csi-node-driver-ntlzm" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.963106 containerd[1471]: 2025-11-01 00:18:08.927 [INFO][4117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califec287b46de ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Namespace="calico-system" Pod="csi-node-driver-ntlzm" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.963106 containerd[1471]: 2025-11-01 00:18:08.936 [INFO][4117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Namespace="calico-system" Pod="csi-node-driver-ntlzm" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:08.963106 containerd[1471]: 2025-11-01 00:18:08.937 [INFO][4117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Namespace="calico-system" Pod="csi-node-driver-ntlzm" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcdc505d-5cce-492c-9f5d-b001efaf66ff", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61", Pod:"csi-node-driver-ntlzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califec287b46de", MAC:"3e:f6:3b:90:c0:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:08.963106 containerd[1471]: 2025-11-01 00:18:08.957 [INFO][4117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61" Namespace="calico-system" Pod="csi-node-driver-ntlzm" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:09.000119 containerd[1471]: time="2025-11-01T00:18:08.998284729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:09.000119 containerd[1471]: time="2025-11-01T00:18:08.998354108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:09.000119 containerd[1471]: time="2025-11-01T00:18:08.998370505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:09.000119 containerd[1471]: time="2025-11-01T00:18:08.998478827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:09.031865 systemd[1]: Started cri-containerd-92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61.scope - libcontainer container 92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61. 
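At the "Added Mac, interface name, and active container ID" step above, the endpoint gains MAC 3e:f6:3b:90:c0:2a: a locally administered unicast address, meaning the first octet has the local bit (0x02) set and the multicast bit (0x01) clear. The sketch below generates an address with those two properties; whether Calico derives its endpoint MACs exactly this way is an assumption.

```go
// Hypothetical sketch of producing a locally administered, unicast MAC
// like the 3e:f6:3b:90:c0:2a recorded for califec287b46de above.
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

func randomLocalMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] |= 0x02  // set the locally-administered bit
	buf[0] &^= 0x01 // clear the multicast bit: a unicast address
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // 0x3e = 0b00111110: local bit set, multicast bit clear
}
```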
Nov 1 00:18:09.065584 systemd-networkd[1359]: cali3aec64c5eb6: Link UP Nov 1 00:18:09.068271 systemd-networkd[1359]: cali3aec64c5eb6: Gained carrier Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.829 [INFO][4126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0 coredns-66bc5c9577- kube-system b8210a7f-2ccf-40c6-8962-23acfca85626 994 0 2025-11-01 00:17:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-62dab69cc5 coredns-66bc5c9577-xzrmj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3aec64c5eb6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Namespace="kube-system" Pod="coredns-66bc5c9577-xzrmj" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.829 [INFO][4126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Namespace="kube-system" Pod="coredns-66bc5c9577-xzrmj" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.877 [INFO][4145] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" HandleID="k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.878 [INFO][4145] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" HandleID="k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-62dab69cc5", "pod":"coredns-66bc5c9577-xzrmj", "timestamp":"2025-11-01 00:18:08.877883276 +0000 UTC"}, Hostname:"ci-4081.3.6-n-62dab69cc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.878 [INFO][4145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.918 [INFO][4145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.919 [INFO][4145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-62dab69cc5' Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.985 [INFO][4145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:08.997 [INFO][4145] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.019 [INFO][4145] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.024 [INFO][4145] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.030 [INFO][4145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.030 [INFO][4145] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.035 [INFO][4145] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49 Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.041 [INFO][4145] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.051 [INFO][4145] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.131/26] block=192.168.12.128/26 handle="k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.051 [INFO][4145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.131/26] handle="k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.051 [INFO][4145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:09.095835 containerd[1471]: 2025-11-01 00:18:09.051 [INFO][4145] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.131/26] IPv6=[] ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" HandleID="k8s-pod-network.ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:09.097251 containerd[1471]: 2025-11-01 00:18:09.056 [INFO][4126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Namespace="kube-system" Pod="coredns-66bc5c9577-xzrmj" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b8210a7f-2ccf-40c6-8962-23acfca85626", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"", Pod:"coredns-66bc5c9577-xzrmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3aec64c5eb6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:09.097251 containerd[1471]: 2025-11-01 00:18:09.056 [INFO][4126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.131/32] ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Namespace="kube-system" Pod="coredns-66bc5c9577-xzrmj" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:09.097251 containerd[1471]: 2025-11-01 00:18:09.057 [INFO][4126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3aec64c5eb6 ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Namespace="kube-system" Pod="coredns-66bc5c9577-xzrmj" 
WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:09.097251 containerd[1471]: 2025-11-01 00:18:09.070 [INFO][4126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Namespace="kube-system" Pod="coredns-66bc5c9577-xzrmj" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:09.097251 containerd[1471]: 2025-11-01 00:18:09.072 [INFO][4126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Namespace="kube-system" Pod="coredns-66bc5c9577-xzrmj" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b8210a7f-2ccf-40c6-8962-23acfca85626", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49", Pod:"coredns-66bc5c9577-xzrmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3aec64c5eb6", MAC:"3e:9f:6c:6b:11:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:09.097499 containerd[1471]: 2025-11-01 00:18:09.089 [INFO][4126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49" Namespace="kube-system" Pod="coredns-66bc5c9577-xzrmj" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:09.099178 containerd[1471]: time="2025-11-01T00:18:09.098908896Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-ntlzm,Uid:fcdc505d-5cce-492c-9f5d-b001efaf66ff,Namespace:calico-system,Attempt:1,} returns sandbox id \"92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61\"" Nov 1 00:18:09.103232 containerd[1471]: time="2025-11-01T00:18:09.103194638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:18:09.130164 containerd[1471]: time="2025-11-01T00:18:09.129787411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:09.130164 containerd[1471]: time="2025-11-01T00:18:09.129863031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:09.130164 containerd[1471]: time="2025-11-01T00:18:09.129875309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:09.130164 containerd[1471]: time="2025-11-01T00:18:09.129970816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:09.152819 systemd[1]: Started cri-containerd-ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49.scope - libcontainer container ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49. Nov 1 00:18:09.211684 containerd[1471]: time="2025-11-01T00:18:09.211527660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzrmj,Uid:b8210a7f-2ccf-40c6-8962-23acfca85626,Namespace:kube-system,Attempt:1,} returns sandbox id \"ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49\"" Nov 1 00:18:09.214788 kubelet[2502]: E1101 00:18:09.213116 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:09.222112 containerd[1471]: time="2025-11-01T00:18:09.222064517Z" level=info msg="CreateContainer within sandbox \"ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:18:09.239538 containerd[1471]: time="2025-11-01T00:18:09.239476888Z" level=info msg="CreateContainer within sandbox \"ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98dfe83bd2ce0f98eaff6b77c0e5c0db74cbbee9f2e8639c33de7fa9efdc493f\"" Nov 1 00:18:09.240704 containerd[1471]: time="2025-11-01T00:18:09.240495154Z" level=info msg="StartContainer for \"98dfe83bd2ce0f98eaff6b77c0e5c0db74cbbee9f2e8639c33de7fa9efdc493f\"" Nov 1 00:18:09.273445 systemd[1]: Started cri-containerd-98dfe83bd2ce0f98eaff6b77c0e5c0db74cbbee9f2e8639c33de7fa9efdc493f.scope - libcontainer container 98dfe83bd2ce0f98eaff6b77c0e5c0db74cbbee9f2e8639c33de7fa9efdc493f. 
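The kubelet "Nameserver limits exceeded" error above reflects the libc resolver's limit of three nameserver entries: when a pod's effective resolv.conf would carry more, the first three are applied and the rest are reported as omitted (note the applied line here even contains 67.207.67.2 twice). A rough sketch of that cap, with a hypothetical fourth entry standing in for whatever pushed the node's resolv.conf over the limit:

```go
// Rough sketch of the cap behind kubelet's "Nameserver limits exceeded"
// warning; not kubelet's actual code. The 203.0.113.53 entry is a
// placeholder (TEST-NET-3), invented for the demonstration.
package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS

func applyNameservers(found []string) (applied, omitted []string) {
	for _, ns := range found {
		if len(applied) < maxNameservers {
			applied = append(applied, ns)
		} else {
			omitted = append(omitted, ns)
		}
	}
	return applied, omitted
}

func main() {
	found := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "203.0.113.53"}
	applied, omitted := applyNameservers(found)
	if len(omitted) > 0 {
		fmt.Printf("Nameserver limits exceeded; applied: %v omitted: %v\n", applied, omitted)
	}
}
```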
Nov 1 00:18:09.311050 containerd[1471]: time="2025-11-01T00:18:09.310987525Z" level=info msg="StartContainer for \"98dfe83bd2ce0f98eaff6b77c0e5c0db74cbbee9f2e8639c33de7fa9efdc493f\" returns successfully" Nov 1 00:18:09.414647 containerd[1471]: time="2025-11-01T00:18:09.414578795Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:09.416474 containerd[1471]: time="2025-11-01T00:18:09.415598413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:18:09.416614 containerd[1471]: time="2025-11-01T00:18:09.415673664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:18:09.417542 kubelet[2502]: E1101 00:18:09.417498 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:09.418106 kubelet[2502]: E1101 00:18:09.417569 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:09.418106 kubelet[2502]: E1101 00:18:09.417684 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ntlzm_calico-system(fcdc505d-5cce-492c-9f5d-b001efaf66ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:09.419783 containerd[1471]: time="2025-11-01T00:18:09.419137069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:18:09.526414 containerd[1471]: time="2025-11-01T00:18:09.526040564Z" level=info msg="StopPodSandbox for \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\"" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.600 [INFO][4299] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.601 [INFO][4299] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" iface="eth0" netns="/var/run/netns/cni-bee5e7f4-6152-5473-76d5-8c610aa74a8b" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.602 [INFO][4299] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" iface="eth0" netns="/var/run/netns/cni-bee5e7f4-6152-5473-76d5-8c610aa74a8b" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.603 [INFO][4299] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" iface="eth0" netns="/var/run/netns/cni-bee5e7f4-6152-5473-76d5-8c610aa74a8b" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.603 [INFO][4299] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.603 [INFO][4299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.635 [INFO][4306] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.635 [INFO][4306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.635 [INFO][4306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.646 [WARNING][4306] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.646 [INFO][4306] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.649 [INFO][4306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:09.654298 containerd[1471]: 2025-11-01 00:18:09.651 [INFO][4299] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:09.654298 containerd[1471]: time="2025-11-01T00:18:09.654233789Z" level=info msg="TearDown network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\" successfully" Nov 1 00:18:09.654298 containerd[1471]: time="2025-11-01T00:18:09.654266037Z" level=info msg="StopPodSandbox for \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\" returns successfully" Nov 1 00:18:09.660547 containerd[1471]: time="2025-11-01T00:18:09.659953020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f5b77d5-xs24n,Uid:385266d7-6e64-4f3b-97e7-b399fc11fb3c,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:18:09.732831 systemd[1]: run-netns-cni\x2dbee5e7f4\x2d6152\x2d5473\x2d76d5\x2d8c610aa74a8b.mount: Deactivated successfully. 
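The StopPodSandbox teardown above releases the old sandbox's address twice over: first by handle ID, where a missing allocation is merely logged ("Asked to release address but it doesn't exist. Ignoring"), then by workload ID, all under the same host-wide IPAM lock. A toy rendering of that idempotent two-key release; the store, the seeded entries, and the 192.168.12.140 address are hypothetical.

```go
// Toy version of the idempotent IP release in the teardown above: free by
// handle ID first, treating a missing allocation as a no-op (the WARNING
// record), then free by workload ID. Not Calico's datastore code.
package main

import (
	"fmt"
	"sync"
)

var ipamLock sync.Mutex // the same host-wide IPAM lock as on assignment

// release frees whatever is recorded under key, ignoring absent entries.
func release(byKey map[string][]string, key string) {
	if _, ok := byKey[key]; !ok {
		fmt.Printf("Asked to release address but it doesn't exist. Ignoring (%s)\n", key)
		return
	}
	delete(byKey, key)
	fmt.Printf("released %s\n", key)
}

func main() {
	byHandle := map[string][]string{} // handle ID -> IPs (already empty here)
	byWorkload := map[string][]string{ // workload ID -> IPs
		"calico-apiserver/calico-apiserver-849f5b77d5-xs24n": {"192.168.12.140"},
	}

	ipamLock.Lock() // "Acquired host-wide IPAM lock."
	release(byHandle, "k8s-pod-network.8771351a2b46") // logs the WARNING
	release(byWorkload, "calico-apiserver/calico-apiserver-849f5b77d5-xs24n")
	ipamLock.Unlock() // "Released host-wide IPAM lock."
}
```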
Nov 1 00:18:09.751069 containerd[1471]: time="2025-11-01T00:18:09.750775347Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:09.751796 containerd[1471]: time="2025-11-01T00:18:09.751751488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:18:09.751995 containerd[1471]: time="2025-11-01T00:18:09.751864193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:18:09.752538 kubelet[2502]: E1101 00:18:09.752200 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:09.752538 kubelet[2502]: E1101 00:18:09.752276 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:09.752969 kubelet[2502]: E1101 00:18:09.752828 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ntlzm_calico-system(fcdc505d-5cce-492c-9f5d-b001efaf66ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:09.754730 kubelet[2502]: E1101 00:18:09.753475 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:18:09.816542 systemd-networkd[1359]: cali4ed7c19b9ca: Link UP Nov 1 00:18:09.817884 systemd-networkd[1359]: cali4ed7c19b9ca: Gained carrier Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.715 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0 calico-apiserver-849f5b77d5- calico-apiserver 385266d7-6e64-4f3b-97e7-b399fc11fb3c 1015 0 2025-11-01 00:17:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849f5b77d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-62dab69cc5 calico-apiserver-849f5b77d5-xs24n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4ed7c19b9ca [] [] }} ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-xs24n" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.716 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-xs24n" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.763 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" HandleID="k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.763 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" HandleID="k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-62dab69cc5", "pod":"calico-apiserver-849f5b77d5-xs24n", "timestamp":"2025-11-01 00:18:09.763139551 +0000 UTC"}, Hostname:"ci-4081.3.6-n-62dab69cc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.763 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.763 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.763 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-62dab69cc5' Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.773 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.781 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.787 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.790 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.792 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.792 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.795 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.799 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.810 [INFO][4325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.132/26] block=192.168.12.128/26 handle="k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.810 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.132/26] handle="k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.810 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:09.840743 containerd[1471]: 2025-11-01 00:18:09.810 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.132/26] IPv6=[] ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" HandleID="k8s-pod-network.8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.843340 containerd[1471]: 2025-11-01 00:18:09.812 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-xs24n" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0", GenerateName:"calico-apiserver-849f5b77d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"385266d7-6e64-4f3b-97e7-b399fc11fb3c", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f5b77d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"", Pod:"calico-apiserver-849f5b77d5-xs24n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ed7c19b9ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:09.843340 containerd[1471]: 2025-11-01 00:18:09.813 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.132/32] ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-xs24n" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.843340 containerd[1471]: 2025-11-01 00:18:09.813 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ed7c19b9ca ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-xs24n" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.843340 containerd[1471]: 2025-11-01 00:18:09.818 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-xs24n" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.843340 containerd[1471]: 2025-11-01 00:18:09.820 [INFO][4314] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-xs24n" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0", GenerateName:"calico-apiserver-849f5b77d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"385266d7-6e64-4f3b-97e7-b399fc11fb3c", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f5b77d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc", Pod:"calico-apiserver-849f5b77d5-xs24n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ed7c19b9ca", MAC:"ea:1b:94:c7:ae:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:09.843340 containerd[1471]: 2025-11-01 00:18:09.835 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-xs24n" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:09.883276 sshd[3668]: Connection closed by invalid user 134.199.207.61 port 56830 [preauth] Nov 1 00:18:09.884258 containerd[1471]: time="2025-11-01T00:18:09.883121293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:09.884258 containerd[1471]: time="2025-11-01T00:18:09.883811934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:09.884258 containerd[1471]: time="2025-11-01T00:18:09.883851057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:09.884258 containerd[1471]: time="2025-11-01T00:18:09.884080740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:09.893614 systemd[1]: sshd@8-146.190.126.63:22-134.199.207.61:56830.service: Deactivated successfully. 
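The pull failures above for ghcr.io/flatcar/calico/csi:v3.30.4 and node-driver-registrar:v3.30.4 all follow the same escalation: containerd's resolver gets an HTTP 404 from ghcr.io ("trying next host - response was http.StatusNotFound"), surfaces it as a gRPC NotFound, and kubelet records ErrImagePull for the container, after which retries are throttled (kubelet's image-pull back-off starts at 10s by default and doubles to a 5m cap). A compressed sketch of that loop, with the durations shrunk so the demo finishes quickly:

```go
// Compressed sketch of the escalation above: a registry 404 becomes a
// NotFound pull error (ErrImagePull), and repeated failures are throttled
// by an exponential back-off (ImagePullBackOff). Illustrative only; not
// kubelet's pod-worker code.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("not found")

// pullImage stands in for the CRI PullImage call; the v3.30.4 tag simply
// does not exist upstream, so every attempt fails the same way.
func pullImage(ref string) error {
	return fmt.Errorf("failed to resolve reference %q: %w", ref, errNotFound)
}

func main() {
	ref := "ghcr.io/flatcar/calico/csi:v3.30.4"
	backoff := 10 * time.Millisecond   // kubelet default: 10 * time.Second
	const maxBackoff = 5 * time.Second // kubelet default: 5 * time.Minute

	for attempt := 1; attempt <= 4; attempt++ {
		err := pullImage(ref)
		if !errors.Is(err, errNotFound) {
			break // a nil or different error would end this retry pattern
		}
		fmt.Printf("attempt %d: ErrImagePull: %v\n", attempt, err)
		fmt.Printf("Back-off pulling image %q for %s\n", ref, backoff)
		time.Sleep(backoff)
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```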
Nov 1 00:18:09.928965 systemd[1]: Started cri-containerd-8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc.scope - libcontainer container 8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc. Nov 1 00:18:09.980654 kubelet[2502]: E1101 00:18:09.980343 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:18:10.007645 containerd[1471]: time="2025-11-01T00:18:10.004524362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f5b77d5-xs24n,Uid:385266d7-6e64-4f3b-97e7-b399fc11fb3c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc\"" Nov 1 00:18:10.012108 containerd[1471]: time="2025-11-01T00:18:10.012054118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:10.013076 kubelet[2502]: E1101 00:18:10.012982 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:10.013844 systemd-networkd[1359]: califec287b46de: Gained IPv6LL Nov 1 00:18:10.056746 kubelet[2502]: I1101 00:18:10.055697 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xzrmj" podStartSLOduration=40.055674561 podStartE2EDuration="40.055674561s" podCreationTimestamp="2025-11-01 00:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:18:10.034900743 +0000 UTC m=+44.698527765" watchObservedRunningTime="2025-11-01 00:18:10.055674561 +0000 UTC m=+44.719301586" Nov 1 00:18:10.332799 containerd[1471]: time="2025-11-01T00:18:10.332745887Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:10.334390 systemd-networkd[1359]: cali3aec64c5eb6: Gained IPv6LL Nov 1 00:18:10.335100 containerd[1471]: time="2025-11-01T00:18:10.334343109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:10.336242 kubelet[2502]: E1101 00:18:10.334759 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:10.336242 kubelet[2502]: E1101 00:18:10.334815 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:10.336242 kubelet[2502]: E1101 00:18:10.334899 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849f5b77d5-xs24n_calico-apiserver(385266d7-6e64-4f3b-97e7-b399fc11fb3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:10.336242 kubelet[2502]: E1101 00:18:10.334932 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c" Nov 1 00:18:10.336480 containerd[1471]: time="2025-11-01T00:18:10.335231115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:10.525305 containerd[1471]: time="2025-11-01T00:18:10.524451117Z" level=info msg="StopPodSandbox for \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\"" Nov 1 00:18:10.525305 containerd[1471]: time="2025-11-01T00:18:10.524716650Z" level=info msg="StopPodSandbox for \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\"" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.608 [INFO][4401] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.608 [INFO][4401] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" iface="eth0" netns="/var/run/netns/cni-f7286ee2-6918-f067-fe2c-b48089546e3f" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.611 [INFO][4401] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" iface="eth0" netns="/var/run/netns/cni-f7286ee2-6918-f067-fe2c-b48089546e3f" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.612 [INFO][4401] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" iface="eth0" netns="/var/run/netns/cni-f7286ee2-6918-f067-fe2c-b48089546e3f" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.612 [INFO][4401] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.612 [INFO][4401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.652 [INFO][4415] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.652 [INFO][4415] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.652 [INFO][4415] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.669 [WARNING][4415] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.671 [INFO][4415] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.673 [INFO][4415] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:10.679724 containerd[1471]: 2025-11-01 00:18:10.675 [INFO][4401] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:10.684069 containerd[1471]: time="2025-11-01T00:18:10.680112129Z" level=info msg="TearDown network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\" successfully" Nov 1 00:18:10.684069 containerd[1471]: time="2025-11-01T00:18:10.680141490Z" level=info msg="StopPodSandbox for \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\" returns successfully" Nov 1 00:18:10.683811 systemd[1]: run-netns-cni\x2df7286ee2\x2d6918\x2df067\x2dfe2c\x2db48089546e3f.mount: Deactivated successfully. Nov 1 00:18:10.686732 containerd[1471]: time="2025-11-01T00:18:10.686212956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f2w9v,Uid:b7c3b2ca-b43d-49de-8f54-e296a887af33,Namespace:calico-system,Attempt:1,}" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.624 [INFO][4405] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.624 [INFO][4405] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" iface="eth0" netns="/var/run/netns/cni-a5a4946f-ee18-ddc6-d658-299dabdad939" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.625 [INFO][4405] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" iface="eth0" netns="/var/run/netns/cni-a5a4946f-ee18-ddc6-d658-299dabdad939" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.626 [INFO][4405] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" iface="eth0" netns="/var/run/netns/cni-a5a4946f-ee18-ddc6-d658-299dabdad939" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.627 [INFO][4405] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.627 [INFO][4405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.664 [INFO][4420] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.664 [INFO][4420] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.673 [INFO][4420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.687 [WARNING][4420] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.687 [INFO][4420] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.690 [INFO][4420] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:10.695219 containerd[1471]: 2025-11-01 00:18:10.692 [INFO][4405] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:10.695684 containerd[1471]: time="2025-11-01T00:18:10.695448781Z" level=info msg="TearDown network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\" successfully" Nov 1 00:18:10.695684 containerd[1471]: time="2025-11-01T00:18:10.695513591Z" level=info msg="StopPodSandbox for \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\" returns successfully" Nov 1 00:18:10.700371 containerd[1471]: time="2025-11-01T00:18:10.700140942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f5b77d5-72jqw,Uid:158163d5-4372-43a9-8b56-d89943f06f09,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:18:10.735590 systemd[1]: run-netns-cni\x2da5a4946f\x2dee18\x2dddc6\x2dd658\x2d299dabdad939.mount: Deactivated successfully. Nov 1 00:18:10.866445 systemd-networkd[1359]: cali0d9a0b4878f: Link UP Nov 1 00:18:10.868583 systemd-networkd[1359]: cali0d9a0b4878f: Gained carrier Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.759 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0 goldmane-7c778bb748- calico-system b7c3b2ca-b43d-49de-8f54-e296a887af33 1040 0 2025-11-01 00:17:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-62dab69cc5 goldmane-7c778bb748-f2w9v eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0d9a0b4878f [] [] }} ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Namespace="calico-system" Pod="goldmane-7c778bb748-f2w9v" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.761 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Namespace="calico-system" Pod="goldmane-7c778bb748-f2w9v" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.804 [INFO][4451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" HandleID="k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.805 [INFO][4451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" HandleID="k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-62dab69cc5", "pod":"goldmane-7c778bb748-f2w9v", "timestamp":"2025-11-01 00:18:10.804669289 +0000 UTC"}, Hostname:"ci-4081.3.6-n-62dab69cc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.805 [INFO][4451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.805 [INFO][4451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.805 [INFO][4451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-62dab69cc5' Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.814 [INFO][4451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.822 [INFO][4451] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.830 [INFO][4451] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.834 [INFO][4451] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.837 [INFO][4451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.837 [INFO][4451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.843 [INFO][4451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.848 [INFO][4451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.857 [INFO][4451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.133/26] block=192.168.12.128/26 handle="k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.857 [INFO][4451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.133/26] handle="k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.857 [INFO][4451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:10.885740 containerd[1471]: 2025-11-01 00:18:10.857 [INFO][4451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.133/26] IPv6=[] ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" HandleID="k8s-pod-network.b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.886399 containerd[1471]: 2025-11-01 00:18:10.860 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Namespace="calico-system" Pod="goldmane-7c778bb748-f2w9v" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b7c3b2ca-b43d-49de-8f54-e296a887af33", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"", Pod:"goldmane-7c778bb748-f2w9v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0d9a0b4878f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:10.886399 containerd[1471]: 2025-11-01 00:18:10.860 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.133/32] ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Namespace="calico-system" Pod="goldmane-7c778bb748-f2w9v" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.886399 containerd[1471]: 2025-11-01 00:18:10.860 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d9a0b4878f ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Namespace="calico-system" Pod="goldmane-7c778bb748-f2w9v" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.886399 containerd[1471]: 2025-11-01 00:18:10.868 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Namespace="calico-system" Pod="goldmane-7c778bb748-f2w9v" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.886399 containerd[1471]: 2025-11-01 00:18:10.869 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" 
Namespace="calico-system" Pod="goldmane-7c778bb748-f2w9v" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b7c3b2ca-b43d-49de-8f54-e296a887af33", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd", Pod:"goldmane-7c778bb748-f2w9v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0d9a0b4878f", MAC:"02:b8:23:e3:4e:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:10.886399 containerd[1471]: 2025-11-01 00:18:10.883 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd" Namespace="calico-system" Pod="goldmane-7c778bb748-f2w9v" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:10.922302 containerd[1471]: time="2025-11-01T00:18:10.920615102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:10.922302 containerd[1471]: time="2025-11-01T00:18:10.921491341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:10.922302 containerd[1471]: time="2025-11-01T00:18:10.921506968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:10.922302 containerd[1471]: time="2025-11-01T00:18:10.921618057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:10.962350 systemd[1]: Started cri-containerd-b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd.scope - libcontainer container b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd. 
Nov 1 00:18:10.988189 systemd-networkd[1359]: calif1ed77b45ba: Link UP Nov 1 00:18:10.990326 systemd-networkd[1359]: calif1ed77b45ba: Gained carrier Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.787 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0 calico-apiserver-849f5b77d5- calico-apiserver 158163d5-4372-43a9-8b56-d89943f06f09 1041 0 2025-11-01 00:17:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849f5b77d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-62dab69cc5 calico-apiserver-849f5b77d5-72jqw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1ed77b45ba [] [] }} ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-72jqw" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.787 [INFO][4440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-72jqw" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.842 [INFO][4459] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" HandleID="k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.842 [INFO][4459] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" HandleID="k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-62dab69cc5", "pod":"calico-apiserver-849f5b77d5-72jqw", "timestamp":"2025-11-01 00:18:10.842202406 +0000 UTC"}, Hostname:"ci-4081.3.6-n-62dab69cc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.842 [INFO][4459] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.857 [INFO][4459] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
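Note how every IPAM operation in this log brackets its work with "About to acquire host-wide IPAM lock" / "Acquired" / "Released": concurrent CNI invocations on the node serialize so that two sandboxes cannot claim the same ordinal. One standard way to get a lock with exactly that scope is an exclusive flock on a well-known file; the path and helper below are hypothetical, not Calico's actual mechanism:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostLock runs fn while holding an exclusive advisory lock on path.
// flock locks are per-host and drop automatically if the holder dies,
// which is the property a short-lived CNI invocation needs.
func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}

func main() {
	err := withHostLock("/tmp/example-ipam.lock", func() error { // hypothetical path
		fmt.Println("assigning addresses under the host-wide lock")
		return nil
	})
	if err != nil {
		fmt.Println("lock error:", err)
	}
}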
Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.857 [INFO][4459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-62dab69cc5' Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.915 [INFO][4459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.925 [INFO][4459] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.936 [INFO][4459] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.941 [INFO][4459] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.946 [INFO][4459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.946 [INFO][4459] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.953 [INFO][4459] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.961 [INFO][4459] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.978 [INFO][4459] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.134/26] block=192.168.12.128/26 handle="k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.978 [INFO][4459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.134/26] handle="k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.978 [INFO][4459] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
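"Writing block in order to claim IPs" followed by "Successfully claimed IPs" is the commit step of an optimistic read-modify-write: load the block at some revision, mark the ordinal, write back conditioned on the revision that was read, and reload and retry on conflict. A toy of that pattern over an in-memory store (the names and the dense-ordinal policy are ours, for illustration only):

package main

import (
	"errors"
	"fmt"
	"sync"
)

var errConflict = errors.New("revision conflict")

// store holds one "block" guarded by a revision number, mimicking a
// datastore's compare-and-swap update.
type store struct {
	mu  sync.Mutex
	rev int
	val []int // claimed ordinals
}

func (s *store) read() (int, []int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.rev, append([]int(nil), s.val...)
}

func (s *store) writeIfRev(rev int, val []int) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.rev != rev {
		return errConflict
	}
	s.rev++
	s.val = val
	return nil
}

// claimNext retries until it commits the next free ordinal.
func claimNext(s *store) int {
	for {
		rev, claimed := s.read()
		next := len(claimed) // toy policy: ordinals are handed out densely
		if err := s.writeIfRev(rev, append(claimed, next)); err == nil {
			return next
		}
	}
}

func main() {
	s := &store{}
	fmt.Println("claimed ordinal", claimNext(s))
	fmt.Println("claimed ordinal", claimNext(s))
}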
Nov 1 00:18:11.012899 containerd[1471]: 2025-11-01 00:18:10.978 [INFO][4459] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.134/26] IPv6=[] ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" HandleID="k8s-pod-network.6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:11.013565 containerd[1471]: 2025-11-01 00:18:10.983 [INFO][4440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-72jqw" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0", GenerateName:"calico-apiserver-849f5b77d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"158163d5-4372-43a9-8b56-d89943f06f09", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f5b77d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"", Pod:"calico-apiserver-849f5b77d5-72jqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1ed77b45ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:11.013565 containerd[1471]: 2025-11-01 00:18:10.984 [INFO][4440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.134/32] ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-72jqw" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:11.013565 containerd[1471]: 2025-11-01 00:18:10.984 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1ed77b45ba ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-72jqw" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:11.013565 containerd[1471]: 2025-11-01 00:18:10.990 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-72jqw" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:11.013565 containerd[1471]: 2025-11-01 00:18:10.992 [INFO][4440] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-72jqw" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0", GenerateName:"calico-apiserver-849f5b77d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"158163d5-4372-43a9-8b56-d89943f06f09", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f5b77d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b", Pod:"calico-apiserver-849f5b77d5-72jqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1ed77b45ba", MAC:"f2:33:b5:df:44:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:11.013565 containerd[1471]: 2025-11-01 00:18:11.009 [INFO][4440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b" Namespace="calico-apiserver" Pod="calico-apiserver-849f5b77d5-72jqw" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:11.037403 kubelet[2502]: E1101 00:18:11.037229 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c" Nov 1 00:18:11.066915 kubelet[2502]: E1101 00:18:11.065440 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:11.071918 kubelet[2502]: E1101 00:18:11.071860 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:18:11.104715 systemd-networkd[1359]: cali4ed7c19b9ca: Gained IPv6LL Nov 1 00:18:11.110585 containerd[1471]: time="2025-11-01T00:18:11.109890107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:11.110585 containerd[1471]: time="2025-11-01T00:18:11.109979470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:11.110585 containerd[1471]: time="2025-11-01T00:18:11.109992265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:11.113074 containerd[1471]: time="2025-11-01T00:18:11.110874917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:11.163141 systemd[1]: Started cri-containerd-6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b.scope - libcontainer container 6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b. 
Nov 1 00:18:11.195491 containerd[1471]: time="2025-11-01T00:18:11.195309245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f2w9v,Uid:b7c3b2ca-b43d-49de-8f54-e296a887af33,Namespace:calico-system,Attempt:1,} returns sandbox id \"b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd\"" Nov 1 00:18:11.198658 containerd[1471]: time="2025-11-01T00:18:11.198404105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:18:11.243245 containerd[1471]: time="2025-11-01T00:18:11.243035506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849f5b77d5-72jqw,Uid:158163d5-4372-43a9-8b56-d89943f06f09,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b\"" Nov 1 00:18:11.513588 containerd[1471]: time="2025-11-01T00:18:11.513326206Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:11.514536 containerd[1471]: time="2025-11-01T00:18:11.514381659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:18:11.514536 containerd[1471]: time="2025-11-01T00:18:11.514409399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:11.515000 kubelet[2502]: E1101 00:18:11.514865 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:11.515094 kubelet[2502]: E1101 00:18:11.515005 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:11.515205 kubelet[2502]: E1101 00:18:11.515182 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f2w9v_calico-system(b7c3b2ca-b43d-49de-8f54-e296a887af33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:11.515259 kubelet[2502]: E1101 00:18:11.515229 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33" Nov 1 00:18:11.518057 containerd[1471]: time="2025-11-01T00:18:11.517737635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" 
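The pull failure above is mechanical: ghcr.io answers the manifest request for the v3.30.4 tag with HTTP 404, containerd notes "trying next host - response was http.StatusNotFound", and the NotFound bubbles up through kubelet. Resolving a tag is one Registry-v2 request; this sketch omits the bearer-token handshake GHCR normally requires, so anonymous runs will typically see 401 rather than a clean 200/404:

package main

import (
	"fmt"
	"net/http"
)

// tagExists asks the registry for the manifest of name:tag. A 200 means
// the tag resolves; 404 is exactly the "not found" seen in the log.
func tagExists(registry, name, tag string) (int, error) {
	url := fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, name, tag)
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Accept", "application/vnd.oci.image.manifest.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	code, err := tagExists("ghcr.io", "flatcar/calico/goldmane", "v3.30.4")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println("manifest HEAD status:", code) // 401 without a token, 404 if the tag is gone
}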
Nov 1 00:18:11.528182 containerd[1471]: time="2025-11-01T00:18:11.528132596Z" level=info msg="StopPodSandbox for \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\"" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.602 [INFO][4578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.604 [INFO][4578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" iface="eth0" netns="/var/run/netns/cni-8d7c0362-3b99-f370-0234-0aaf7da089a9" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.605 [INFO][4578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" iface="eth0" netns="/var/run/netns/cni-8d7c0362-3b99-f370-0234-0aaf7da089a9" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.608 [INFO][4578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" iface="eth0" netns="/var/run/netns/cni-8d7c0362-3b99-f370-0234-0aaf7da089a9" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.610 [INFO][4578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.610 [INFO][4578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.647 [INFO][4585] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.647 [INFO][4585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.647 [INFO][4585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.656 [WARNING][4585] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.656 [INFO][4585] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.658 [INFO][4585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:11.663705 containerd[1471]: 2025-11-01 00:18:11.661 [INFO][4578] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:11.664730 containerd[1471]: time="2025-11-01T00:18:11.664429382Z" level=info msg="TearDown network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\" successfully" Nov 1 00:18:11.664730 containerd[1471]: time="2025-11-01T00:18:11.664474451Z" level=info msg="StopPodSandbox for \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\" returns successfully" Nov 1 00:18:11.668372 kubelet[2502]: E1101 00:18:11.668318 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:11.669844 containerd[1471]: time="2025-11-01T00:18:11.669029560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dvmpd,Uid:097de2b5-b860-413b-9296-b00cb2127d6e,Namespace:kube-system,Attempt:1,}" Nov 1 00:18:11.736111 systemd[1]: run-netns-cni\x2d8d7c0362\x2d3b99\x2df370\x2d0234\x2d0aaf7da089a9.mount: Deactivated successfully. Nov 1 00:18:11.829053 systemd-networkd[1359]: cali0686454bc7a: Link UP Nov 1 00:18:11.832844 containerd[1471]: time="2025-11-01T00:18:11.831895481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:11.829339 systemd-networkd[1359]: cali0686454bc7a: Gained carrier Nov 1 00:18:11.837025 containerd[1471]: time="2025-11-01T00:18:11.833414420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:11.837025 containerd[1471]: time="2025-11-01T00:18:11.833444673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:11.837184 kubelet[2502]: E1101 00:18:11.834532 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:11.837184 kubelet[2502]: E1101 00:18:11.836746 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:11.837576 kubelet[2502]: E1101 00:18:11.836917 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849f5b77d5-72jqw_calico-apiserver(158163d5-4372-43a9-8b56-d89943f06f09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:11.837576 kubelet[2502]: E1101 00:18:11.837421 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.723 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0 coredns-66bc5c9577- kube-system 097de2b5-b860-413b-9296-b00cb2127d6e 1067 0 2025-11-01 00:17:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-62dab69cc5 coredns-66bc5c9577-dvmpd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0686454bc7a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Namespace="kube-system" Pod="coredns-66bc5c9577-dvmpd" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.723 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Namespace="kube-system" Pod="coredns-66bc5c9577-dvmpd" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.766 [INFO][4605] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" HandleID="k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.766 [INFO][4605] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" HandleID="k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-62dab69cc5", "pod":"coredns-66bc5c9577-dvmpd", "timestamp":"2025-11-01 00:18:11.766436848 +0000 UTC"}, Hostname:"ci-4081.3.6-n-62dab69cc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.766 [INFO][4605] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.766 [INFO][4605] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
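The ErrImagePull entries above turn into ImagePullBackOff on the next sync: kubelet stops hammering the registry and spaces retries with a doubling backoff (commonly cited defaults are a 10s base capped at 5m; the constants below are illustrative, not read from this node's configuration):

package main

import (
	"fmt"
	"time"
)

// backoffSchedule reproduces a doubling backoff with a cap, the shape of
// kubelet's image-pull backoff ("Back-off pulling image ...").
func backoffSchedule(base, max time.Duration, attempts int) []time.Duration {
	out := make([]time.Duration, 0, attempts)
	d := base
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 7) {
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
}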
Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.766 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-62dab69cc5' Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.777 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.785 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.791 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.794 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.798 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.798 [INFO][4605] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.801 [INFO][4605] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.808 [INFO][4605] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.819 [INFO][4605] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.135/26] block=192.168.12.128/26 handle="k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.819 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.135/26] handle="k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.819 [INFO][4605] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
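The recurring "Nameserver limits exceeded" warning reflects glibc's hard cap of three resolv.conf nameservers (MAXNS); kubelet truncates the merged list and logs what it applied. The applied line here, 67.207.67.2 67.207.67.3 67.207.67.2, even carries a duplicate, so deduplicating before truncating would free a slot. A sketch of that cleanup (helper name ours):

package main

import "fmt"

const maxNS = 3 // glibc MAXNS: resolv.conf honors at most 3 nameservers

// applyLimit dedupes in order, then truncates to maxNS, returning the
// kept servers and whatever was dropped.
func applyLimit(servers []string) (kept, dropped []string) {
	seen := map[string]bool{}
	for _, s := range servers {
		if seen[s] {
			continue
		}
		seen[s] = true
		if len(kept) < maxNS {
			kept = append(kept, s)
		} else {
			dropped = append(dropped, s)
		}
	}
	return kept, dropped
}

func main() {
	kept, dropped := applyLimit([]string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "10.0.0.53"})
	fmt.Println("kept:", kept, "dropped:", dropped)
}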
Nov 1 00:18:11.852205 containerd[1471]: 2025-11-01 00:18:11.819 [INFO][4605] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.135/26] IPv6=[] ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" HandleID="k8s-pod-network.91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.852914 containerd[1471]: 2025-11-01 00:18:11.823 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Namespace="kube-system" Pod="coredns-66bc5c9577-dvmpd" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"097de2b5-b860-413b-9296-b00cb2127d6e", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"", Pod:"coredns-66bc5c9577-dvmpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0686454bc7a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:11.852914 containerd[1471]: 2025-11-01 00:18:11.823 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.135/32] ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Namespace="kube-system" Pod="coredns-66bc5c9577-dvmpd" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.852914 containerd[1471]: 2025-11-01 00:18:11.823 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0686454bc7a ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Namespace="kube-system" Pod="coredns-66bc5c9577-dvmpd" 
WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.852914 containerd[1471]: 2025-11-01 00:18:11.827 [INFO][4592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Namespace="kube-system" Pod="coredns-66bc5c9577-dvmpd" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.852914 containerd[1471]: 2025-11-01 00:18:11.827 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Namespace="kube-system" Pod="coredns-66bc5c9577-dvmpd" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"097de2b5-b860-413b-9296-b00cb2127d6e", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a", Pod:"coredns-66bc5c9577-dvmpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0686454bc7a", MAC:"32:24:b2:1a:69:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:11.853136 containerd[1471]: 2025-11-01 00:18:11.846 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a" Namespace="kube-system" Pod="coredns-66bc5c9577-dvmpd" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:11.891035 containerd[1471]: time="2025-11-01T00:18:11.890596189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:11.891035 containerd[1471]: time="2025-11-01T00:18:11.890725121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:11.891035 containerd[1471]: time="2025-11-01T00:18:11.890787810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:11.891302 containerd[1471]: time="2025-11-01T00:18:11.891142619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:11.928860 systemd[1]: Started cri-containerd-91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a.scope - libcontainer container 91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a. Nov 1 00:18:11.992379 containerd[1471]: time="2025-11-01T00:18:11.992319341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dvmpd,Uid:097de2b5-b860-413b-9296-b00cb2127d6e,Namespace:kube-system,Attempt:1,} returns sandbox id \"91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a\"" Nov 1 00:18:11.995662 kubelet[2502]: E1101 00:18:11.994391 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:11.998335 systemd-networkd[1359]: cali0d9a0b4878f: Gained IPv6LL Nov 1 00:18:12.006467 containerd[1471]: time="2025-11-01T00:18:12.006425291Z" level=info msg="CreateContainer within sandbox \"91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:18:12.023298 containerd[1471]: time="2025-11-01T00:18:12.023236301Z" level=info msg="CreateContainer within sandbox \"91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c46fcfbadeec6b3fa96e69e44a434e40a65d9f7ca466151f235743b034ee6ba\"" Nov 1 00:18:12.024459 containerd[1471]: time="2025-11-01T00:18:12.024330460Z" level=info msg="StartContainer for \"3c46fcfbadeec6b3fa96e69e44a434e40a65d9f7ca466151f235743b034ee6ba\"" Nov 1 00:18:12.042208 kubelet[2502]: E1101 00:18:12.042105 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33" Nov 1 00:18:12.059568 kubelet[2502]: E1101 00:18:12.058969 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:12.062269 kubelet[2502]: E1101 00:18:12.062018 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09" Nov 1 00:18:12.063252 kubelet[2502]: E1101 00:18:12.062952 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c" Nov 1 00:18:12.089889 systemd[1]: Started cri-containerd-3c46fcfbadeec6b3fa96e69e44a434e40a65d9f7ca466151f235743b034ee6ba.scope - libcontainer container 3c46fcfbadeec6b3fa96e69e44a434e40a65d9f7ca466151f235743b034ee6ba. Nov 1 00:18:12.164702 containerd[1471]: time="2025-11-01T00:18:12.164599167Z" level=info msg="StartContainer for \"3c46fcfbadeec6b3fa96e69e44a434e40a65d9f7ca466151f235743b034ee6ba\" returns successfully" Nov 1 00:18:12.527133 containerd[1471]: time="2025-11-01T00:18:12.526034224Z" level=info msg="StopPodSandbox for \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\"" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.617 [INFO][4718] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.618 [INFO][4718] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" iface="eth0" netns="/var/run/netns/cni-c50ad29c-a0bf-c01b-40da-5a0c7cf9da3d" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.620 [INFO][4718] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" iface="eth0" netns="/var/run/netns/cni-c50ad29c-a0bf-c01b-40da-5a0c7cf9da3d" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.622 [INFO][4718] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" iface="eth0" netns="/var/run/netns/cni-c50ad29c-a0bf-c01b-40da-5a0c7cf9da3d" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.623 [INFO][4718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.623 [INFO][4718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.677 [INFO][4725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.678 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.678 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.688 [WARNING][4725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.688 [INFO][4725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.690 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:12.697780 containerd[1471]: 2025-11-01 00:18:12.694 [INFO][4718] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:12.698275 containerd[1471]: time="2025-11-01T00:18:12.698053437Z" level=info msg="TearDown network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\" successfully" Nov 1 00:18:12.698275 containerd[1471]: time="2025-11-01T00:18:12.698180937Z" level=info msg="StopPodSandbox for \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\" returns successfully" Nov 1 00:18:12.702491 containerd[1471]: time="2025-11-01T00:18:12.702429678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fc444b969-zmtxl,Uid:88514d97-6a8a-4349-b2ae-0a411d3ab2a9,Namespace:calico-system,Attempt:1,}" Nov 1 00:18:12.733867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014032916.mount: Deactivated successfully. Nov 1 00:18:12.734143 systemd[1]: run-netns-cni\x2dc50ad29c\x2da0bf\x2dc01b\x2d40da\x2d5a0c7cf9da3d.mount: Deactivated successfully. 
Nov 1 00:18:12.767122 systemd-networkd[1359]: calif1ed77b45ba: Gained IPv6LL Nov 1 00:18:12.902584 systemd-networkd[1359]: cali1f2623f4b5e: Link UP Nov 1 00:18:12.907180 systemd-networkd[1359]: cali1f2623f4b5e: Gained carrier Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.787 [INFO][4733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0 calico-kube-controllers-fc444b969- calico-system 88514d97-6a8a-4349-b2ae-0a411d3ab2a9 1093 0 2025-11-01 00:17:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fc444b969 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-62dab69cc5 calico-kube-controllers-fc444b969-zmtxl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f2623f4b5e [] [] }} ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Namespace="calico-system" Pod="calico-kube-controllers-fc444b969-zmtxl" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.788 [INFO][4733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Namespace="calico-system" Pod="calico-kube-controllers-fc444b969-zmtxl" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.837 [INFO][4744] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" HandleID="k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.838 [INFO][4744] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" HandleID="k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-62dab69cc5", "pod":"calico-kube-controllers-fc444b969-zmtxl", "timestamp":"2025-11-01 00:18:12.837575308 +0000 UTC"}, Hostname:"ci-4081.3.6-n-62dab69cc5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.838 [INFO][4744] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.838 [INFO][4744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.838 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-62dab69cc5' Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.849 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.856 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.866 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.869 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.872 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.872 [INFO][4744] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.874 [INFO][4744] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5 Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.879 [INFO][4744] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.890 [INFO][4744] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.136/26] block=192.168.12.128/26 handle="k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.891 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.136/26] handle="k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" host="ci-4081.3.6-n-62dab69cc5" Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.891 [INFO][4744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:12.934958 containerd[1471]: 2025-11-01 00:18:12.891 [INFO][4744] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.136/26] IPv6=[] ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" HandleID="k8s-pod-network.dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.936992 containerd[1471]: 2025-11-01 00:18:12.893 [INFO][4733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Namespace="calico-system" Pod="calico-kube-controllers-fc444b969-zmtxl" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0", GenerateName:"calico-kube-controllers-fc444b969-", Namespace:"calico-system", SelfLink:"", UID:"88514d97-6a8a-4349-b2ae-0a411d3ab2a9", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fc444b969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"", Pod:"calico-kube-controllers-fc444b969-zmtxl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f2623f4b5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:12.936992 containerd[1471]: 2025-11-01 00:18:12.893 [INFO][4733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.136/32] ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Namespace="calico-system" Pod="calico-kube-controllers-fc444b969-zmtxl" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.936992 containerd[1471]: 2025-11-01 00:18:12.893 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f2623f4b5e ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Namespace="calico-system" Pod="calico-kube-controllers-fc444b969-zmtxl" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.936992 containerd[1471]: 2025-11-01 00:18:12.900 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Namespace="calico-system" Pod="calico-kube-controllers-fc444b969-zmtxl" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 
00:18:12.936992 containerd[1471]: 2025-11-01 00:18:12.906 [INFO][4733] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Namespace="calico-system" Pod="calico-kube-controllers-fc444b969-zmtxl" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0", GenerateName:"calico-kube-controllers-fc444b969-", Namespace:"calico-system", SelfLink:"", UID:"88514d97-6a8a-4349-b2ae-0a411d3ab2a9", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fc444b969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5", Pod:"calico-kube-controllers-fc444b969-zmtxl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f2623f4b5e", MAC:"9a:af:ef:4c:ec:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:12.936992 containerd[1471]: 2025-11-01 00:18:12.927 [INFO][4733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5" Namespace="calico-system" Pod="calico-kube-controllers-fc444b969-zmtxl" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:12.981113 containerd[1471]: time="2025-11-01T00:18:12.980906646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:12.981113 containerd[1471]: time="2025-11-01T00:18:12.981045758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:12.981113 containerd[1471]: time="2025-11-01T00:18:12.981072845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:12.983869 containerd[1471]: time="2025-11-01T00:18:12.983574433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:13.023111 systemd[1]: Started cri-containerd-dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5.scope - libcontainer container dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5. 
Nov 1 00:18:13.067271 kubelet[2502]: E1101 00:18:13.067222 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:13.069527 kubelet[2502]: E1101 00:18:13.068969 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33" Nov 1 00:18:13.069770 kubelet[2502]: E1101 00:18:13.069488 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09" Nov 1 00:18:13.136822 containerd[1471]: time="2025-11-01T00:18:13.136766185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fc444b969-zmtxl,Uid:88514d97-6a8a-4349-b2ae-0a411d3ab2a9,Namespace:calico-system,Attempt:1,} returns sandbox id \"dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5\"" Nov 1 00:18:13.141108 containerd[1471]: time="2025-11-01T00:18:13.140814380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:18:13.453701 containerd[1471]: time="2025-11-01T00:18:13.453564042Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:13.454504 containerd[1471]: time="2025-11-01T00:18:13.454435914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:18:13.454621 containerd[1471]: time="2025-11-01T00:18:13.454545695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:13.454864 kubelet[2502]: E1101 00:18:13.454812 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:13.454935 kubelet[2502]: E1101 00:18:13.454874 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:13.455120 kubelet[2502]: E1101 00:18:13.454975 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-fc444b969-zmtxl_calico-system(88514d97-6a8a-4349-b2ae-0a411d3ab2a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:13.455120 kubelet[2502]: E1101 00:18:13.455013 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9" Nov 1 00:18:13.662036 systemd-networkd[1359]: cali0686454bc7a: Gained IPv6LL Nov 1 00:18:14.075678 kubelet[2502]: E1101 00:18:14.073459 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:14.078305 kubelet[2502]: E1101 00:18:14.078070 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9" Nov 1 00:18:14.105749 kubelet[2502]: I1101 00:18:14.104981 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dvmpd" podStartSLOduration=44.104952277 podStartE2EDuration="44.104952277s" podCreationTimestamp="2025-11-01 00:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:18:13.154914519 +0000 UTC m=+47.818541550" watchObservedRunningTime="2025-11-01 00:18:14.104952277 +0000 UTC m=+48.768579311" Nov 1 00:18:14.558019 systemd-networkd[1359]: cali1f2623f4b5e: Gained IPv6LL Nov 1 00:18:15.075962 kubelet[2502]: E1101 00:18:15.075913 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:15.078308 kubelet[2502]: E1101 00:18:15.078267 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9" Nov 1 00:18:16.079525 kubelet[2502]: E1101 00:18:16.078983 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:18.527444 containerd[1471]: time="2025-11-01T00:18:18.527387705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:18:18.827748 systemd[1]: Started sshd@9-146.190.126.63:22-139.178.68.195:38566.service - OpenSSH per-connection server daemon (139.178.68.195:38566). Nov 1 00:18:18.844733 containerd[1471]: time="2025-11-01T00:18:18.844461778Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:18.848315 containerd[1471]: time="2025-11-01T00:18:18.847457943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:18:18.848507 containerd[1471]: time="2025-11-01T00:18:18.847592176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:18:18.848950 kubelet[2502]: E1101 00:18:18.848906 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:18.849284 kubelet[2502]: E1101 00:18:18.848959 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:18.849284 kubelet[2502]: E1101 00:18:18.849043 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-856547cfbb-zlkzz_calico-system(927c6d85-0eb1-4656-97ff-085d71b01e8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:18.851444 containerd[1471]: time="2025-11-01T00:18:18.851373969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:18:18.913437 sshd[4818]: Accepted publickey for core from 139.178.68.195 port 38566 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:18.915468 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:18.921280 systemd-logind[1445]: New session 8 of user core. Nov 1 00:18:18.928035 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 1 00:18:19.177363 containerd[1471]: time="2025-11-01T00:18:19.176929379Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:19.178579 containerd[1471]: time="2025-11-01T00:18:19.178288719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:18:19.178579 containerd[1471]: time="2025-11-01T00:18:19.178508285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:19.180179 kubelet[2502]: E1101 00:18:19.179251 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:19.180179 kubelet[2502]: E1101 00:18:19.179315 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:19.180179 kubelet[2502]: E1101 00:18:19.179421 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-856547cfbb-zlkzz_calico-system(927c6d85-0eb1-4656-97ff-085d71b01e8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:19.181394 kubelet[2502]: E1101 00:18:19.179493 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-856547cfbb-zlkzz" podUID="927c6d85-0eb1-4656-97ff-085d71b01e8e" Nov 1 00:18:19.539867 sshd[4818]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:19.545219 systemd[1]: sshd@9-146.190.126.63:22-139.178.68.195:38566.service: Deactivated successfully. Nov 1 00:18:19.549045 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:18:19.550305 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:18:19.551953 systemd-logind[1445]: Removed session 8. 
Nov 1 00:18:22.525810 containerd[1471]: time="2025-11-01T00:18:22.525682096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:18:22.828384 containerd[1471]: time="2025-11-01T00:18:22.828216999Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:22.829304 containerd[1471]: time="2025-11-01T00:18:22.829253718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:18:22.830311 containerd[1471]: time="2025-11-01T00:18:22.829392694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:18:22.830417 kubelet[2502]: E1101 00:18:22.829535 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:22.830417 kubelet[2502]: E1101 00:18:22.829599 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:22.830417 kubelet[2502]: E1101 00:18:22.829727 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ntlzm_calico-system(fcdc505d-5cce-492c-9f5d-b001efaf66ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:22.832191 containerd[1471]: time="2025-11-01T00:18:22.832091740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:18:23.125459 containerd[1471]: time="2025-11-01T00:18:23.125316917Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:23.127000 containerd[1471]: time="2025-11-01T00:18:23.126765132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:18:23.127000 containerd[1471]: time="2025-11-01T00:18:23.126853280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:18:23.127511 kubelet[2502]: E1101 00:18:23.127253 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 
1 00:18:23.127511 kubelet[2502]: E1101 00:18:23.127314 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:23.127511 kubelet[2502]: E1101 00:18:23.127401 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ntlzm_calico-system(fcdc505d-5cce-492c-9f5d-b001efaf66ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:23.127710 kubelet[2502]: E1101 00:18:23.127451 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:18:23.526686 containerd[1471]: time="2025-11-01T00:18:23.526328049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:18:23.831303 containerd[1471]: time="2025-11-01T00:18:23.831130231Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:23.832212 containerd[1471]: time="2025-11-01T00:18:23.832138502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:18:23.832324 containerd[1471]: time="2025-11-01T00:18:23.832236084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:23.833052 kubelet[2502]: E1101 00:18:23.832988 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:23.833432 kubelet[2502]: E1101 00:18:23.833054 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:23.833432 kubelet[2502]: E1101 00:18:23.833203 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f2w9v_calico-system(b7c3b2ca-b43d-49de-8f54-e296a887af33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:23.833432 kubelet[2502]: E1101 00:18:23.833237 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33" Nov 1 00:18:24.562061 systemd[1]: Started sshd@10-146.190.126.63:22-139.178.68.195:60842.service - OpenSSH per-connection server daemon (139.178.68.195:60842). Nov 1 00:18:24.627705 sshd[4836]: Accepted publickey for core from 139.178.68.195 port 60842 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:24.631037 sshd[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:24.642043 systemd-logind[1445]: New session 9 of user core. Nov 1 00:18:24.650163 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:18:24.810759 sshd[4836]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:24.819902 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:18:24.820398 systemd[1]: sshd@10-146.190.126.63:22-139.178.68.195:60842.service: Deactivated successfully. Nov 1 00:18:24.823203 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:18:24.826464 systemd-logind[1445]: Removed session 9. Nov 1 00:18:25.547356 containerd[1471]: time="2025-11-01T00:18:25.547296382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:18:25.553161 containerd[1471]: time="2025-11-01T00:18:25.553108294Z" level=info msg="StopPodSandbox for \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\"" Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.647 [WARNING][4859] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0", GenerateName:"calico-apiserver-849f5b77d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"158163d5-4372-43a9-8b56-d89943f06f09", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f5b77d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b", Pod:"calico-apiserver-849f5b77d5-72jqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1ed77b45ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.647 [INFO][4859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.647 [INFO][4859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" iface="eth0" netns="" Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.648 [INFO][4859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.648 [INFO][4859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.697 [INFO][4866] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.697 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.697 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.704 [WARNING][4866] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.704 [INFO][4866] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.706 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:25.711357 containerd[1471]: 2025-11-01 00:18:25.709 [INFO][4859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:25.711357 containerd[1471]: time="2025-11-01T00:18:25.711115166Z" level=info msg="TearDown network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\" successfully" Nov 1 00:18:25.711357 containerd[1471]: time="2025-11-01T00:18:25.711153734Z" level=info msg="StopPodSandbox for \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\" returns successfully" Nov 1 00:18:25.712408 containerd[1471]: time="2025-11-01T00:18:25.712274000Z" level=info msg="RemovePodSandbox for \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\"" Nov 1 00:18:25.712408 containerd[1471]: time="2025-11-01T00:18:25.712317801Z" level=info msg="Forcibly stopping sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\"" Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.752 [WARNING][4880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0", GenerateName:"calico-apiserver-849f5b77d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"158163d5-4372-43a9-8b56-d89943f06f09", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f5b77d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"6c0db789421bf23259a313b873bd6cb5aa2bec95169cdc51afad91c7f0b6738b", Pod:"calico-apiserver-849f5b77d5-72jqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1ed77b45ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.752 [INFO][4880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.752 [INFO][4880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" iface="eth0" netns="" Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.752 [INFO][4880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.752 [INFO][4880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.783 [INFO][4887] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.783 [INFO][4887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.783 [INFO][4887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.793 [WARNING][4887] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.793 [INFO][4887] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" HandleID="k8s-pod-network.46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--72jqw-eth0" Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.795 [INFO][4887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:25.799697 containerd[1471]: 2025-11-01 00:18:25.797 [INFO][4880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0" Nov 1 00:18:25.799697 containerd[1471]: time="2025-11-01T00:18:25.799659171Z" level=info msg="TearDown network for sandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\" successfully" Nov 1 00:18:25.813227 containerd[1471]: time="2025-11-01T00:18:25.813131192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:25.813371 containerd[1471]: time="2025-11-01T00:18:25.813249786Z" level=info msg="RemovePodSandbox \"46a2a846c6fc2753cb86ed156ad17e83d80e2874f00552ee55150ee5908d44c0\" returns successfully" Nov 1 00:18:25.814343 containerd[1471]: time="2025-11-01T00:18:25.814222051Z" level=info msg="StopPodSandbox for \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\"" Nov 1 00:18:25.858655 containerd[1471]: time="2025-11-01T00:18:25.858444859Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:25.860912 containerd[1471]: time="2025-11-01T00:18:25.860073661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:18:25.861267 containerd[1471]: time="2025-11-01T00:18:25.860443451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:25.861748 kubelet[2502]: E1101 00:18:25.861539 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:25.861748 kubelet[2502]: E1101 00:18:25.861621 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:25.863848 kubelet[2502]: E1101 00:18:25.861917 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-fc444b969-zmtxl_calico-system(88514d97-6a8a-4349-b2ae-0a411d3ab2a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:25.863848 kubelet[2502]: E1101 00:18:25.861964 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9" Nov 1 00:18:25.864058 containerd[1471]: time="2025-11-01T00:18:25.863307625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.865 [WARNING][4901] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcdc505d-5cce-492c-9f5d-b001efaf66ff", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61", Pod:"csi-node-driver-ntlzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califec287b46de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.866 [INFO][4901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.866 [INFO][4901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" iface="eth0" netns="" Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.866 [INFO][4901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.866 [INFO][4901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.897 [INFO][4908] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.897 [INFO][4908] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.897 [INFO][4908] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.905 [WARNING][4908] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.905 [INFO][4908] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.908 [INFO][4908] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:25.915448 containerd[1471]: 2025-11-01 00:18:25.912 [INFO][4901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:25.915985 containerd[1471]: time="2025-11-01T00:18:25.915538226Z" level=info msg="TearDown network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\" successfully" Nov 1 00:18:25.915985 containerd[1471]: time="2025-11-01T00:18:25.915579902Z" level=info msg="StopPodSandbox for \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\" returns successfully" Nov 1 00:18:25.916716 containerd[1471]: time="2025-11-01T00:18:25.916239268Z" level=info msg="RemovePodSandbox for \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\"" Nov 1 00:18:25.916716 containerd[1471]: time="2025-11-01T00:18:25.916289640Z" level=info msg="Forcibly stopping sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\"" Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:25.959 [WARNING][4922] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fcdc505d-5cce-492c-9f5d-b001efaf66ff", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"92cd0587fb4a09ed33f234e486d5fc2a0db0cd2de5b5ffb75e06396389b31a61", Pod:"csi-node-driver-ntlzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califec287b46de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:25.960 [INFO][4922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:25.960 [INFO][4922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" iface="eth0" netns="" Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:25.960 [INFO][4922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:25.960 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:25.996 [INFO][4929] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:25.996 [INFO][4929] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:25.996 [INFO][4929] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:26.008 [WARNING][4929] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:26.008 [INFO][4929] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" HandleID="k8s-pod-network.781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Workload="ci--4081.3.6--n--62dab69cc5-k8s-csi--node--driver--ntlzm-eth0" Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:26.010 [INFO][4929] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.013791 containerd[1471]: 2025-11-01 00:18:26.011 [INFO][4922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa" Nov 1 00:18:26.014528 containerd[1471]: time="2025-11-01T00:18:26.014153339Z" level=info msg="TearDown network for sandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\" successfully" Nov 1 00:18:26.018070 containerd[1471]: time="2025-11-01T00:18:26.017663002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:26.018070 containerd[1471]: time="2025-11-01T00:18:26.017797375Z" level=info msg="RemovePodSandbox \"781ab42c2989ef4e66ae71fb37562232996e104cd8defa4261d43b0aa4122bfa\" returns successfully" Nov 1 00:18:26.018654 containerd[1471]: time="2025-11-01T00:18:26.018579530Z" level=info msg="StopPodSandbox for \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\"" Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.060 [WARNING][4943] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"097de2b5-b860-413b-9296-b00cb2127d6e", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a", Pod:"coredns-66bc5c9577-dvmpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0686454bc7a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.060 [INFO][4943] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.060 [INFO][4943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" iface="eth0" netns="" Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.060 [INFO][4943] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.060 [INFO][4943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.086 [INFO][4950] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.086 [INFO][4950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.087 [INFO][4950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.094 [WARNING][4950] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.094 [INFO][4950] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.098 [INFO][4950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.102574 containerd[1471]: 2025-11-01 00:18:26.100 [INFO][4943] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:26.102574 containerd[1471]: time="2025-11-01T00:18:26.102459153Z" level=info msg="TearDown network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\" successfully" Nov 1 00:18:26.102574 containerd[1471]: time="2025-11-01T00:18:26.102486333Z" level=info msg="StopPodSandbox for \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\" returns successfully" Nov 1 00:18:26.104324 containerd[1471]: time="2025-11-01T00:18:26.102997651Z" level=info msg="RemovePodSandbox for \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\"" Nov 1 00:18:26.104324 containerd[1471]: time="2025-11-01T00:18:26.103036598Z" level=info msg="Forcibly stopping sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\"" Nov 1 00:18:26.174220 containerd[1471]: time="2025-11-01T00:18:26.173839431Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:26.175910 containerd[1471]: time="2025-11-01T00:18:26.175007140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:26.175910 containerd[1471]: time="2025-11-01T00:18:26.175098545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:26.176124 kubelet[2502]: E1101 00:18:26.175290 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:26.176124 kubelet[2502]: E1101 00:18:26.175346 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:26.176124 kubelet[2502]: E1101 00:18:26.175447 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849f5b77d5-72jqw_calico-apiserver(158163d5-4372-43a9-8b56-d89943f06f09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:26.176124 kubelet[2502]: E1101 00:18:26.175482 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09" Nov 1 00:18:26.192758 containerd[1471]: 
2025-11-01 00:18:26.150 [WARNING][4964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"097de2b5-b860-413b-9296-b00cb2127d6e", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"91e263b98fad040a659e6940c3835b586bc3b4d0eff11548f5df4bce0f64ac0a", Pod:"coredns-66bc5c9577-dvmpd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0686454bc7a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.151 [INFO][4964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.151 [INFO][4964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" iface="eth0" netns="" Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.151 [INFO][4964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.151 [INFO][4964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.178 [INFO][4971] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.178 [INFO][4971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.178 [INFO][4971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.186 [WARNING][4971] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.187 [INFO][4971] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" HandleID="k8s-pod-network.112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--dvmpd-eth0" Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.188 [INFO][4971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.192758 containerd[1471]: 2025-11-01 00:18:26.191 [INFO][4964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327" Nov 1 00:18:26.194680 containerd[1471]: time="2025-11-01T00:18:26.193334594Z" level=info msg="TearDown network for sandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\" successfully" Nov 1 00:18:26.197583 containerd[1471]: time="2025-11-01T00:18:26.197430158Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:26.197583 containerd[1471]: time="2025-11-01T00:18:26.197529286Z" level=info msg="RemovePodSandbox \"112d7353e776577560fa36f107a0b7128ef6d6c19a3aad3210f7a5a41f09f327\" returns successfully" Nov 1 00:18:26.198494 containerd[1471]: time="2025-11-01T00:18:26.198437300Z" level=info msg="StopPodSandbox for \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\"" Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.241 [WARNING][4985] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b8210a7f-2ccf-40c6-8962-23acfca85626", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49", Pod:"coredns-66bc5c9577-xzrmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3aec64c5eb6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.241 [INFO][4985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.241 [INFO][4985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" iface="eth0" netns="" Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.241 [INFO][4985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.241 [INFO][4985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.267 [INFO][4992] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.267 [INFO][4992] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.267 [INFO][4992] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.274 [WARNING][4992] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.274 [INFO][4992] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.277 [INFO][4992] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.281378 containerd[1471]: 2025-11-01 00:18:26.279 [INFO][4985] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:26.281378 containerd[1471]: time="2025-11-01T00:18:26.281321981Z" level=info msg="TearDown network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\" successfully" Nov 1 00:18:26.281378 containerd[1471]: time="2025-11-01T00:18:26.281364683Z" level=info msg="StopPodSandbox for \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\" returns successfully" Nov 1 00:18:26.283050 containerd[1471]: time="2025-11-01T00:18:26.282701307Z" level=info msg="RemovePodSandbox for \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\"" Nov 1 00:18:26.283050 containerd[1471]: time="2025-11-01T00:18:26.282736507Z" level=info msg="Forcibly stopping sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\"" Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.334 [WARNING][5006] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b8210a7f-2ccf-40c6-8962-23acfca85626", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"ca02b97de4afbf51f879a177507cf3dd5193417078012828dc7ce55fd2f1ec49", Pod:"coredns-66bc5c9577-xzrmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3aec64c5eb6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.334 [INFO][5006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.334 [INFO][5006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" iface="eth0" netns="" Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.334 [INFO][5006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.334 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.361 [INFO][5013] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.362 [INFO][5013] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.362 [INFO][5013] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.370 [WARNING][5013] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.370 [INFO][5013] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" HandleID="k8s-pod-network.6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Workload="ci--4081.3.6--n--62dab69cc5-k8s-coredns--66bc5c9577--xzrmj-eth0" Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.372 [INFO][5013] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.377025 containerd[1471]: 2025-11-01 00:18:26.374 [INFO][5006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2" Nov 1 00:18:26.377025 containerd[1471]: time="2025-11-01T00:18:26.376985931Z" level=info msg="TearDown network for sandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\" successfully" Nov 1 00:18:26.379842 containerd[1471]: time="2025-11-01T00:18:26.379797280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:26.380057 containerd[1471]: time="2025-11-01T00:18:26.379864016Z" level=info msg="RemovePodSandbox \"6b9ad9b9d742c5b0b1e56bbc03d0fe6cc560369f6897af8f30b43bb250049bc2\" returns successfully" Nov 1 00:18:26.380869 containerd[1471]: time="2025-11-01T00:18:26.380748548Z" level=info msg="StopPodSandbox for \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\"" Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.422 [WARNING][5027] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b7c3b2ca-b43d-49de-8f54-e296a887af33", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd", Pod:"goldmane-7c778bb748-f2w9v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0d9a0b4878f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.422 [INFO][5027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.422 [INFO][5027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" iface="eth0" netns="" Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.422 [INFO][5027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.422 [INFO][5027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.449 [INFO][5034] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.449 [INFO][5034] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.449 [INFO][5034] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.456 [WARNING][5034] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.456 [INFO][5034] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.458 [INFO][5034] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.463929 containerd[1471]: 2025-11-01 00:18:26.461 [INFO][5027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:26.465182 containerd[1471]: time="2025-11-01T00:18:26.464773041Z" level=info msg="TearDown network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\" successfully" Nov 1 00:18:26.465182 containerd[1471]: time="2025-11-01T00:18:26.464806765Z" level=info msg="StopPodSandbox for \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\" returns successfully" Nov 1 00:18:26.465508 containerd[1471]: time="2025-11-01T00:18:26.465466950Z" level=info msg="RemovePodSandbox for \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\"" Nov 1 00:18:26.465599 containerd[1471]: time="2025-11-01T00:18:26.465513915Z" level=info msg="Forcibly stopping sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\"" Nov 1 00:18:26.526404 containerd[1471]: time="2025-11-01T00:18:26.526361503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.511 [WARNING][5048] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b7c3b2ca-b43d-49de-8f54-e296a887af33", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"b9478aa4a41490900c0b11ab95ec8fdd90fae710cdc430fda66fadecf8ae13fd", Pod:"goldmane-7c778bb748-f2w9v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0d9a0b4878f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.511 [INFO][5048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.511 [INFO][5048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" iface="eth0" netns="" Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.511 [INFO][5048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.511 [INFO][5048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.545 [INFO][5055] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.545 [INFO][5055] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.545 [INFO][5055] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.563 [WARNING][5055] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.563 [INFO][5055] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" HandleID="k8s-pod-network.70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Workload="ci--4081.3.6--n--62dab69cc5-k8s-goldmane--7c778bb748--f2w9v-eth0" Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.567 [INFO][5055] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.571717 containerd[1471]: 2025-11-01 00:18:26.569 [INFO][5048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97" Nov 1 00:18:26.571717 containerd[1471]: time="2025-11-01T00:18:26.571163280Z" level=info msg="TearDown network for sandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\" successfully" Nov 1 00:18:26.575507 containerd[1471]: time="2025-11-01T00:18:26.575216012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:26.575507 containerd[1471]: time="2025-11-01T00:18:26.575281272Z" level=info msg="RemovePodSandbox \"70699cd606b894abf044e786532fa92064d51fb33ba23dd3d4d37a9815e85e97\" returns successfully" Nov 1 00:18:26.576036 containerd[1471]: time="2025-11-01T00:18:26.576007281Z" level=info msg="StopPodSandbox for \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\"" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.627 [WARNING][5069] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.627 [INFO][5069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.627 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" iface="eth0" netns="" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.627 [INFO][5069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.627 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.655 [INFO][5076] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.655 [INFO][5076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.655 [INFO][5076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.664 [WARNING][5076] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.664 [INFO][5076] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.666 [INFO][5076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.669962 containerd[1471]: 2025-11-01 00:18:26.667 [INFO][5069] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:26.671900 containerd[1471]: time="2025-11-01T00:18:26.669948526Z" level=info msg="TearDown network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\" successfully" Nov 1 00:18:26.671900 containerd[1471]: time="2025-11-01T00:18:26.669977363Z" level=info msg="StopPodSandbox for \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\" returns successfully" Nov 1 00:18:26.671900 containerd[1471]: time="2025-11-01T00:18:26.671059671Z" level=info msg="RemovePodSandbox for \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\"" Nov 1 00:18:26.671900 containerd[1471]: time="2025-11-01T00:18:26.671095377Z" level=info msg="Forcibly stopping sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\"" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.713 [WARNING][5090] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" WorkloadEndpoint="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.713 [INFO][5090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.713 [INFO][5090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" iface="eth0" netns="" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.713 [INFO][5090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.713 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.740 [INFO][5097] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.740 [INFO][5097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.740 [INFO][5097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.747 [WARNING][5097] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.747 [INFO][5097] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" HandleID="k8s-pod-network.e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Workload="ci--4081.3.6--n--62dab69cc5-k8s-whisker--9d8cbc64d--vpv5g-eth0" Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.749 [INFO][5097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.753337 containerd[1471]: 2025-11-01 00:18:26.751 [INFO][5090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d" Nov 1 00:18:26.753792 containerd[1471]: time="2025-11-01T00:18:26.753432813Z" level=info msg="TearDown network for sandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\" successfully" Nov 1 00:18:26.756915 containerd[1471]: time="2025-11-01T00:18:26.756844339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:26.757110 containerd[1471]: time="2025-11-01T00:18:26.756954576Z" level=info msg="RemovePodSandbox \"e30362b090515654c1fb869d13c5aba82a703f4af32e5128279c45ff6a28653d\" returns successfully" Nov 1 00:18:26.757719 containerd[1471]: time="2025-11-01T00:18:26.757680256Z" level=info msg="StopPodSandbox for \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\"" Nov 1 00:18:26.848805 containerd[1471]: time="2025-11-01T00:18:26.848754914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:26.849872 containerd[1471]: time="2025-11-01T00:18:26.849815607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:26.850970 containerd[1471]: time="2025-11-01T00:18:26.849901029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:26.851036 kubelet[2502]: E1101 00:18:26.850198 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:26.851036 kubelet[2502]: E1101 00:18:26.850250 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:26.851036 kubelet[2502]: E1101 00:18:26.850342 2502 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849f5b77d5-xs24n_calico-apiserver(385266d7-6e64-4f3b-97e7-b399fc11fb3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:26.851036 kubelet[2502]: E1101 00:18:26.850379 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c" Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.809 [WARNING][5111] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0", GenerateName:"calico-apiserver-849f5b77d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"385266d7-6e64-4f3b-97e7-b399fc11fb3c", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f5b77d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc", Pod:"calico-apiserver-849f5b77d5-xs24n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ed7c19b9ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.810 [INFO][5111] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.810 [INFO][5111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
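Both apiserver pull failures above end in NotFound after containerd logs "trying next host - response was http.StatusNotFound": the registry answered, but no manifest exists for the v3.30.4 tag under ghcr.io/flatcar/calico. The resolver's check can be approximated against the OCI distribution API; the sketch below fetches an anonymous pull token from ghcr.io and then issues a HEAD for the tag's manifest. This is an assumption-laden probe, not containerd's implementation: a real resolver negotiates more media types, hosts, and redirects.

```go
// Minimal probe of the OCI distribution API on ghcr.io: a 404 on the
// manifest HEAD means the tag itself is absent, matching the
// "failed to resolve reference ... not found" errors logged above.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func tagExists(repo, tag string) (bool, error) {
	// Anonymous token grant for public images on ghcr.io.
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// HEAD the manifest for the tag, as a resolver would before pulling.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("flatcar/calico/apiserver", "v3.30.4")
	// At the time of this log the registry returned 404 for this tag.
	fmt.Println("tag exists:", ok, "err:", err)
}
```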
ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" iface="eth0" netns="" Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.810 [INFO][5111] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.810 [INFO][5111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.837 [INFO][5118] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.838 [INFO][5118] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.838 [INFO][5118] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.849 [WARNING][5118] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.849 [INFO][5118] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.852 [INFO][5118] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.859617 containerd[1471]: 2025-11-01 00:18:26.856 [INFO][5111] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:26.860621 containerd[1471]: time="2025-11-01T00:18:26.859722392Z" level=info msg="TearDown network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\" successfully" Nov 1 00:18:26.860621 containerd[1471]: time="2025-11-01T00:18:26.859749292Z" level=info msg="StopPodSandbox for \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\" returns successfully" Nov 1 00:18:26.862955 containerd[1471]: time="2025-11-01T00:18:26.861973954Z" level=info msg="RemovePodSandbox for \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\"" Nov 1 00:18:26.862955 containerd[1471]: time="2025-11-01T00:18:26.862058070Z" level=info msg="Forcibly stopping sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\"" Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.919 [WARNING][5138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0", GenerateName:"calico-apiserver-849f5b77d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"385266d7-6e64-4f3b-97e7-b399fc11fb3c", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849f5b77d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"8c3244f313b6482002efc25562118b7440b879192fb2a9721a6fabfbd289c6bc", Pod:"calico-apiserver-849f5b77d5-xs24n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ed7c19b9ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.919 [INFO][5138] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.919 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" iface="eth0" netns="" Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.919 [INFO][5138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.919 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.950 [INFO][5145] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.951 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.951 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.958 [WARNING][5145] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.958 [INFO][5145] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" HandleID="k8s-pod-network.8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--apiserver--849f5b77d5--xs24n-eth0" Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.960 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:26.965142 containerd[1471]: 2025-11-01 00:18:26.962 [INFO][5138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32" Nov 1 00:18:26.965142 containerd[1471]: time="2025-11-01T00:18:26.965082678Z" level=info msg="TearDown network for sandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\" successfully" Nov 1 00:18:26.975883 containerd[1471]: time="2025-11-01T00:18:26.975811636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:26.975883 containerd[1471]: time="2025-11-01T00:18:26.975892576Z" level=info msg="RemovePodSandbox \"8771351a2b4670fd1f4bfd6dbf57da9ac1dfabbbdf01191ce27115d6e34f3d32\" returns successfully" Nov 1 00:18:26.976616 containerd[1471]: time="2025-11-01T00:18:26.976518046Z" level=info msg="StopPodSandbox for \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\"" Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.023 [WARNING][5160] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0", GenerateName:"calico-kube-controllers-fc444b969-", Namespace:"calico-system", SelfLink:"", UID:"88514d97-6a8a-4349-b2ae-0a411d3ab2a9", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fc444b969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5", Pod:"calico-kube-controllers-fc444b969-zmtxl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f2623f4b5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.023 [INFO][5160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.023 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" iface="eth0" netns="" Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.023 [INFO][5160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.024 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.050 [INFO][5167] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.050 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.050 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.057 [WARNING][5167] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.057 [INFO][5167] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.060 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:27.067727 containerd[1471]: 2025-11-01 00:18:27.065 [INFO][5160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:27.068277 containerd[1471]: time="2025-11-01T00:18:27.067792847Z" level=info msg="TearDown network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\" successfully" Nov 1 00:18:27.068277 containerd[1471]: time="2025-11-01T00:18:27.067834653Z" level=info msg="StopPodSandbox for \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\" returns successfully" Nov 1 00:18:27.069273 containerd[1471]: time="2025-11-01T00:18:27.069210522Z" level=info msg="RemovePodSandbox for \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\"" Nov 1 00:18:27.069273 containerd[1471]: time="2025-11-01T00:18:27.069269083Z" level=info msg="Forcibly stopping sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\"" Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.138 [WARNING][5182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0", GenerateName:"calico-kube-controllers-fc444b969-", Namespace:"calico-system", SelfLink:"", UID:"88514d97-6a8a-4349-b2ae-0a411d3ab2a9", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fc444b969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-62dab69cc5", ContainerID:"dce58137d19f73061b0fe23d29e49c25f05a19fbfbaaa0ffd3acec2ebfc576e5", Pod:"calico-kube-controllers-fc444b969-zmtxl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f2623f4b5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.138 [INFO][5182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.138 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" iface="eth0" netns="" Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.138 [INFO][5182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.138 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.167 [INFO][5189] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.167 [INFO][5189] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.167 [INFO][5189] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.177 [WARNING][5189] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.177 [INFO][5189] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" HandleID="k8s-pod-network.19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Workload="ci--4081.3.6--n--62dab69cc5-k8s-calico--kube--controllers--fc444b969--zmtxl-eth0" Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.180 [INFO][5189] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:27.186742 containerd[1471]: 2025-11-01 00:18:27.183 [INFO][5182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64" Nov 1 00:18:27.186742 containerd[1471]: time="2025-11-01T00:18:27.186440032Z" level=info msg="TearDown network for sandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\" successfully" Nov 1 00:18:27.191092 containerd[1471]: time="2025-11-01T00:18:27.190875549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:27.191092 containerd[1471]: time="2025-11-01T00:18:27.190965185Z" level=info msg="RemovePodSandbox \"19f0903cd35f6b40e3ead07255b3bc433830cf9187476ddc90803937439cbf64\" returns successfully" Nov 1 00:18:29.525952 kubelet[2502]: E1101 00:18:29.525871 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-856547cfbb-zlkzz" podUID="927c6d85-0eb1-4656-97ff-085d71b01e8e" Nov 1 00:18:29.830325 systemd[1]: Started sshd@11-146.190.126.63:22-139.178.68.195:60848.service - OpenSSH per-connection server daemon (139.178.68.195:60848). Nov 1 00:18:29.907675 sshd[5197]: Accepted publickey for core from 139.178.68.195 port 60848 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:29.909370 sshd[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:29.915185 systemd-logind[1445]: New session 10 of user core. Nov 1 00:18:29.919926 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 1 00:18:30.090532 sshd[5197]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:30.103350 systemd[1]: sshd@11-146.190.126.63:22-139.178.68.195:60848.service: Deactivated successfully. Nov 1 00:18:30.106565 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:18:30.110737 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:18:30.114500 systemd[1]: Started sshd@12-146.190.126.63:22-139.178.68.195:60856.service - OpenSSH per-connection server daemon (139.178.68.195:60856). Nov 1 00:18:30.116197 systemd-logind[1445]: Removed session 10. Nov 1 00:18:30.177419 sshd[5210]: Accepted publickey for core from 139.178.68.195 port 60856 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:30.179716 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:30.186821 systemd-logind[1445]: New session 11 of user core. Nov 1 00:18:30.197985 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:18:30.401555 sshd[5210]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:30.410285 systemd[1]: sshd@12-146.190.126.63:22-139.178.68.195:60856.service: Deactivated successfully. Nov 1 00:18:30.414102 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:18:30.418177 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:18:30.424034 systemd[1]: Started sshd@13-146.190.126.63:22-139.178.68.195:60858.service - OpenSSH per-connection server daemon (139.178.68.195:60858). Nov 1 00:18:30.436269 systemd-logind[1445]: Removed session 11. Nov 1 00:18:30.496987 sshd[5220]: Accepted publickey for core from 139.178.68.195 port 60858 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:30.499524 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:30.506719 systemd-logind[1445]: New session 12 of user core. Nov 1 00:18:30.512042 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:18:30.663385 sshd[5220]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:30.668273 systemd[1]: sshd@13-146.190.126.63:22-139.178.68.195:60858.service: Deactivated successfully. Nov 1 00:18:30.671948 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:18:30.673113 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:18:30.674313 systemd-logind[1445]: Removed session 12. Nov 1 00:18:34.525578 kubelet[2502]: E1101 00:18:34.524256 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:35.526876 kubelet[2502]: E1101 00:18:35.526778 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33" Nov 1 00:18:35.687201 systemd[1]: Started sshd@14-146.190.126.63:22-139.178.68.195:55524.service - OpenSSH per-connection server daemon (139.178.68.195:55524). 
Nov 1 00:18:35.741669 sshd[5240]: Accepted publickey for core from 139.178.68.195 port 55524 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:35.744144 sshd[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:35.753035 systemd-logind[1445]: New session 13 of user core. Nov 1 00:18:35.758912 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:18:35.909591 sshd[5240]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:35.916186 systemd[1]: sshd@14-146.190.126.63:22-139.178.68.195:55524.service: Deactivated successfully. Nov 1 00:18:35.918480 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:18:35.919390 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:18:35.920672 systemd-logind[1445]: Removed session 13. Nov 1 00:18:37.528355 kubelet[2502]: E1101 00:18:37.528121 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:18:38.527106 kubelet[2502]: E1101 00:18:38.526911 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09" Nov 1 00:18:39.525550 kubelet[2502]: E1101 00:18:39.524186 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:39.526434 kubelet[2502]: E1101 00:18:39.526067 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9" Nov 1 00:18:40.526179 kubelet[2502]: E1101 00:18:40.525733 
2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c" Nov 1 00:18:40.935008 systemd[1]: Started sshd@15-146.190.126.63:22-139.178.68.195:55534.service - OpenSSH per-connection server daemon (139.178.68.195:55534). Nov 1 00:18:41.020660 sshd[5281]: Accepted publickey for core from 139.178.68.195 port 55534 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:41.024089 sshd[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:41.030241 systemd-logind[1445]: New session 14 of user core. Nov 1 00:18:41.034890 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:18:41.194284 sshd[5281]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:41.200275 systemd[1]: sshd@15-146.190.126.63:22-139.178.68.195:55534.service: Deactivated successfully. Nov 1 00:18:41.202531 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:18:41.203737 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:18:41.205246 systemd-logind[1445]: Removed session 14. Nov 1 00:18:44.526283 containerd[1471]: time="2025-11-01T00:18:44.525961185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:18:44.865937 containerd[1471]: time="2025-11-01T00:18:44.865767866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:44.866984 containerd[1471]: time="2025-11-01T00:18:44.866910278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:18:44.867336 containerd[1471]: time="2025-11-01T00:18:44.867012603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:18:44.868270 kubelet[2502]: E1101 00:18:44.868069 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:44.868270 kubelet[2502]: E1101 00:18:44.868125 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:44.868270 kubelet[2502]: E1101 00:18:44.868237 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-856547cfbb-zlkzz_calico-system(927c6d85-0eb1-4656-97ff-085d71b01e8e): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:44.871958 containerd[1471]: time="2025-11-01T00:18:44.871663271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:18:45.172375 containerd[1471]: time="2025-11-01T00:18:45.172232658Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:45.173720 containerd[1471]: time="2025-11-01T00:18:45.173476915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:18:45.173720 containerd[1471]: time="2025-11-01T00:18:45.173526176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:45.174038 kubelet[2502]: E1101 00:18:45.173714 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:45.174038 kubelet[2502]: E1101 00:18:45.173758 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:45.174038 kubelet[2502]: E1101 00:18:45.173835 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-856547cfbb-zlkzz_calico-system(927c6d85-0eb1-4656-97ff-085d71b01e8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:45.174958 kubelet[2502]: E1101 00:18:45.173879 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-856547cfbb-zlkzz" podUID="927c6d85-0eb1-4656-97ff-085d71b01e8e" Nov 1 00:18:46.218029 systemd[1]: Started sshd@16-146.190.126.63:22-139.178.68.195:40764.service - OpenSSH 
per-connection server daemon (139.178.68.195:40764). Nov 1 00:18:46.254358 sshd[5297]: Accepted publickey for core from 139.178.68.195 port 40764 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:46.256304 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:46.261593 systemd-logind[1445]: New session 15 of user core. Nov 1 00:18:46.265939 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:18:46.418641 sshd[5297]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:46.423170 systemd[1]: sshd@16-146.190.126.63:22-139.178.68.195:40764.service: Deactivated successfully. Nov 1 00:18:46.425310 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:18:46.426431 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:18:46.428106 systemd-logind[1445]: Removed session 15. Nov 1 00:18:46.526837 containerd[1471]: time="2025-11-01T00:18:46.526337228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:18:46.830852 containerd[1471]: time="2025-11-01T00:18:46.830700850Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:46.831542 containerd[1471]: time="2025-11-01T00:18:46.831487234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:18:46.831800 containerd[1471]: time="2025-11-01T00:18:46.831570780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:46.831859 kubelet[2502]: E1101 00:18:46.831812 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:46.832169 kubelet[2502]: E1101 00:18:46.831866 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:46.833058 kubelet[2502]: E1101 00:18:46.832471 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f2w9v_calico-system(b7c3b2ca-b43d-49de-8f54-e296a887af33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:46.833058 kubelet[2502]: E1101 00:18:46.832522 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33" Nov 1 00:18:49.527512 containerd[1471]: time="2025-11-01T00:18:49.527393416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:49.846043 containerd[1471]: time="2025-11-01T00:18:49.845707964Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:49.846854 containerd[1471]: time="2025-11-01T00:18:49.846721861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:49.846854 containerd[1471]: time="2025-11-01T00:18:49.846745849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:49.847290 kubelet[2502]: E1101 00:18:49.847223 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:49.847684 kubelet[2502]: E1101 00:18:49.847299 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:49.847684 kubelet[2502]: E1101 00:18:49.847600 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849f5b77d5-72jqw_calico-apiserver(158163d5-4372-43a9-8b56-d89943f06f09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:49.847792 kubelet[2502]: E1101 00:18:49.847683 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09" Nov 1 00:18:49.848317 containerd[1471]: time="2025-11-01T00:18:49.848107522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:18:50.156428 containerd[1471]: time="2025-11-01T00:18:50.156182000Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:50.157520 containerd[1471]: time="2025-11-01T00:18:50.157348856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:18:50.157520 containerd[1471]: time="2025-11-01T00:18:50.157452042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:18:50.158801 kubelet[2502]: E1101 00:18:50.157865 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:50.158801 kubelet[2502]: E1101 00:18:50.157992 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:50.158801 kubelet[2502]: E1101 00:18:50.158100 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ntlzm_calico-system(fcdc505d-5cce-492c-9f5d-b001efaf66ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:50.160367 containerd[1471]: time="2025-11-01T00:18:50.160055906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:18:50.464828 containerd[1471]: time="2025-11-01T00:18:50.464652845Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:50.466489 containerd[1471]: time="2025-11-01T00:18:50.466201860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:18:50.466489 containerd[1471]: time="2025-11-01T00:18:50.466310995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:18:50.466806 kubelet[2502]: E1101 00:18:50.466540 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:50.466806 kubelet[2502]: E1101 00:18:50.466602 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:50.466806 kubelet[2502]: E1101 00:18:50.466707 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-ntlzm_calico-system(fcdc505d-5cce-492c-9f5d-b001efaf66ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:50.466940 kubelet[2502]: E1101 00:18:50.466750 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff" Nov 1 00:18:51.437026 systemd[1]: Started sshd@17-146.190.126.63:22-139.178.68.195:40770.service - OpenSSH per-connection server daemon (139.178.68.195:40770). Nov 1 00:18:51.506898 sshd[5316]: Accepted publickey for core from 139.178.68.195 port 40770 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:51.509066 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:51.521789 systemd-logind[1445]: New session 16 of user core. Nov 1 00:18:51.527148 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:18:51.528741 containerd[1471]: time="2025-11-01T00:18:51.528236615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:51.755619 sshd[5316]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:51.772537 systemd[1]: sshd@17-146.190.126.63:22-139.178.68.195:40770.service: Deactivated successfully. Nov 1 00:18:51.776352 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:18:51.780181 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:18:51.789432 systemd[1]: Started sshd@18-146.190.126.63:22-139.178.68.195:40780.service - OpenSSH per-connection server daemon (139.178.68.195:40780). Nov 1 00:18:51.792271 systemd-logind[1445]: Removed session 16. 
Nov 1 00:18:51.855995 sshd[5329]: Accepted publickey for core from 139.178.68.195 port 40780 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:51.859052 containerd[1471]: time="2025-11-01T00:18:51.858838449Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:51.859247 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:51.861622 containerd[1471]: time="2025-11-01T00:18:51.861458106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:51.861622 containerd[1471]: time="2025-11-01T00:18:51.861525695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:51.862825 kubelet[2502]: E1101 00:18:51.862683 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:51.863864 kubelet[2502]: E1101 00:18:51.862862 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:51.863912 kubelet[2502]: E1101 00:18:51.863870 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849f5b77d5-xs24n_calico-apiserver(385266d7-6e64-4f3b-97e7-b399fc11fb3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:51.864897 kubelet[2502]: E1101 00:18:51.863915 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c" Nov 1 00:18:51.870242 systemd-logind[1445]: New session 17 of user core. Nov 1 00:18:51.874848 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:18:52.180299 sshd[5329]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:52.191316 systemd[1]: sshd@18-146.190.126.63:22-139.178.68.195:40780.service: Deactivated successfully. Nov 1 00:18:52.195198 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:18:52.199830 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. 
Nov 1 00:18:52.208008 systemd[1]: Started sshd@19-146.190.126.63:22-139.178.68.195:40784.service - OpenSSH per-connection server daemon (139.178.68.195:40784). Nov 1 00:18:52.210547 systemd-logind[1445]: Removed session 17. Nov 1 00:18:52.255441 sshd[5340]: Accepted publickey for core from 139.178.68.195 port 40784 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:52.257058 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:52.263854 systemd-logind[1445]: New session 18 of user core. Nov 1 00:18:52.269920 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:18:52.525830 containerd[1471]: time="2025-11-01T00:18:52.525776078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:18:52.844187 containerd[1471]: time="2025-11-01T00:18:52.844031839Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:52.845765 containerd[1471]: time="2025-11-01T00:18:52.845709900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:18:52.845886 containerd[1471]: time="2025-11-01T00:18:52.845828153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:52.846825 kubelet[2502]: E1101 00:18:52.846077 2502 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:52.846825 kubelet[2502]: E1101 00:18:52.846746 2502 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:52.847658 kubelet[2502]: E1101 00:18:52.847049 2502 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-fc444b969-zmtxl_calico-system(88514d97-6a8a-4349-b2ae-0a411d3ab2a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:52.847658 kubelet[2502]: E1101 00:18:52.847107 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" 
podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9" Nov 1 00:18:53.114415 sshd[5340]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:53.126660 systemd[1]: sshd@19-146.190.126.63:22-139.178.68.195:40784.service: Deactivated successfully. Nov 1 00:18:53.133601 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:18:53.140497 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:18:53.157243 systemd[1]: Started sshd@20-146.190.126.63:22-139.178.68.195:53618.service - OpenSSH per-connection server daemon (139.178.68.195:53618). Nov 1 00:18:53.161462 systemd-logind[1445]: Removed session 18. Nov 1 00:18:53.225499 sshd[5358]: Accepted publickey for core from 139.178.68.195 port 53618 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:53.226167 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:53.233916 systemd-logind[1445]: New session 19 of user core. Nov 1 00:18:53.236848 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:18:53.615428 sshd[5358]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:53.630243 systemd[1]: sshd@20-146.190.126.63:22-139.178.68.195:53618.service: Deactivated successfully. Nov 1 00:18:53.633954 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:18:53.636711 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:18:53.647372 systemd[1]: Started sshd@21-146.190.126.63:22-139.178.68.195:53632.service - OpenSSH per-connection server daemon (139.178.68.195:53632). Nov 1 00:18:53.654077 systemd-logind[1445]: Removed session 19. Nov 1 00:18:53.704841 sshd[5369]: Accepted publickey for core from 139.178.68.195 port 53632 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:53.706724 sshd[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:53.711862 systemd-logind[1445]: New session 20 of user core. Nov 1 00:18:53.718894 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:18:53.861920 sshd[5369]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:53.867952 systemd[1]: sshd@21-146.190.126.63:22-139.178.68.195:53632.service: Deactivated successfully. Nov 1 00:18:53.871218 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:18:53.873120 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:18:53.874331 systemd-logind[1445]: Removed session 20. 
Nov 1 00:18:55.524771 kubelet[2502]: E1101 00:18:55.524690 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:18:58.528289 kubelet[2502]: E1101 00:18:58.528225 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-856547cfbb-zlkzz" podUID="927c6d85-0eb1-4656-97ff-085d71b01e8e" Nov 1 00:18:58.880985 systemd[1]: Started sshd@22-146.190.126.63:22-139.178.68.195:53642.service - OpenSSH per-connection server daemon (139.178.68.195:53642). Nov 1 00:18:58.920823 sshd[5386]: Accepted publickey for core from 139.178.68.195 port 53642 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE Nov 1 00:18:58.922012 sshd[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:58.927721 systemd-logind[1445]: New session 21 of user core. Nov 1 00:18:58.931802 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:18:59.068538 sshd[5386]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:59.072857 systemd[1]: sshd@22-146.190.126.63:22-139.178.68.195:53642.service: Deactivated successfully. Nov 1 00:18:59.076159 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:18:59.077175 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:18:59.078201 systemd-logind[1445]: Removed session 21. 
Nov 1 00:19:02.525377 kubelet[2502]: E1101 00:19:02.525325 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 1 00:19:02.527808 kubelet[2502]: E1101 00:19:02.526209 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 1 00:19:02.528339 kubelet[2502]: E1101 00:19:02.528278 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f2w9v" podUID="b7c3b2ca-b43d-49de-8f54-e296a887af33"
Nov 1 00:19:02.528924 kubelet[2502]: E1101 00:19:02.528765 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-72jqw" podUID="158163d5-4372-43a9-8b56-d89943f06f09"
Nov 1 00:19:03.528927 kubelet[2502]: E1101 00:19:03.528690 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-fc444b969-zmtxl" podUID="88514d97-6a8a-4349-b2ae-0a411d3ab2a9"
Nov 1 00:19:04.090997 systemd[1]: Started sshd@23-146.190.126.63:22-139.178.68.195:44244.service - OpenSSH per-connection server daemon (139.178.68.195:44244).
Nov 1 00:19:04.152125 sshd[5403]: Accepted publickey for core from 139.178.68.195 port 44244 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE
Nov 1 00:19:04.153439 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:19:04.158757 systemd-logind[1445]: New session 22 of user core.
Nov 1 00:19:04.162847 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 1 00:19:04.319515 sshd[5403]: pam_unix(sshd:session): session closed for user core
Nov 1 00:19:04.326888 systemd[1]: sshd@23-146.190.126.63:22-139.178.68.195:44244.service: Deactivated successfully.
Nov 1 00:19:04.330099 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 00:19:04.331481 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Nov 1 00:19:04.332869 systemd-logind[1445]: Removed session 22.
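
Note: the recurring dns.go:154 errors are the kubelet validating the node's resolv.conf: like glibc, it honors at most three nameserver entries, drops the rest, and logs the three it kept as "the applied nameserver line". That the kept line contains 67.207.67.2 twice suggests the file itself repeats that resolver among four or more entries. A quick check (a sketch; the exact file is whatever the kubelet's --resolv-conf points at, /etc/resolv.conf by default):

  grep '^nameserver' /etc/resolv.conf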
Nov 1 00:19:05.527010 kubelet[2502]: E1101 00:19:05.526762 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849f5b77d5-xs24n" podUID="385266d7-6e64-4f3b-97e7-b399fc11fb3c"
Nov 1 00:19:05.528800 kubelet[2502]: E1101 00:19:05.528714 2502 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ntlzm" podUID="fcdc505d-5cce-492c-9f5d-b001efaf66ff"
Nov 1 00:19:07.259517 kubelet[2502]: E1101 00:19:07.259459 2502 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 1 00:19:09.335775 systemd[1]: Started sshd@24-146.190.126.63:22-139.178.68.195:44254.service - OpenSSH per-connection server daemon (139.178.68.195:44254).
Nov 1 00:19:09.406042 sshd[5438]: Accepted publickey for core from 139.178.68.195 port 44254 ssh2: RSA SHA256:X3PH5M7wtT7ziXwlN9LJ4olAvRQtEF+vhmq03/uLNAE
Nov 1 00:19:09.409387 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:19:09.419808 systemd-logind[1445]: New session 23 of user core.
Nov 1 00:19:09.423909 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 00:19:09.713039 sshd[5438]: pam_unix(sshd:session): session closed for user core
Nov 1 00:19:09.722007 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:19:09.722879 systemd[1]: sshd@24-146.190.126.63:22-139.178.68.195:44254.service: Deactivated successfully.
Nov 1 00:19:09.725718 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:19:09.728356 systemd-logind[1445]: Removed session 23.
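
Note: every ImagePullBackOff in this log is the same underlying failure (ghcr.io/flatcar/calico/*:v3.30.4 tags that do not exist) hitting different pods. The same events are visible from the cluster side without reading the node journal; a hedged example using the pod names recorded above:

  kubectl -n calico-system describe pod csi-node-driver-ntlzm   # Events section shows the pull back-off
  kubectl -n calico-apiserver get pods -o wide                  # overall pod status per node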