Sep 13 00:23:25.899318 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:23:25.899354 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:23:25.899372 kernel: BIOS-provided physical RAM map:
Sep 13 00:23:25.899383 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:23:25.899390 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:23:25.899396 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:23:25.899404 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 13 00:23:25.899411 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 13 00:23:25.899418 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:23:25.899427 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:23:25.899454 kernel: NX (Execute Disable) protection: active
Sep 13 00:23:25.899461 kernel: APIC: Static calls initialized
Sep 13 00:23:25.899472 kernel: SMBIOS 2.8 present.
Sep 13 00:23:25.899479 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 13 00:23:25.899488 kernel: Hypervisor detected: KVM
Sep 13 00:23:25.899500 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:23:25.899511 kernel: kvm-clock: using sched offset of 3064890414 cycles
Sep 13 00:23:25.899520 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:23:25.899528 kernel: tsc: Detected 2494.140 MHz processor
Sep 13 00:23:25.899536 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:23:25.899544 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:23:25.899552 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 13 00:23:25.899560 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:23:25.899568 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:23:25.899579 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:23:25.899587 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 13 00:23:25.899595 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:25.899603 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:25.899611 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:25.899618 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 13 00:23:25.899626 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:25.899633 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:25.899641 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:25.899651 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:25.899659 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 13 00:23:25.899667 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 13 00:23:25.899675 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 13 00:23:25.899682 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 13 00:23:25.899690 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 13 00:23:25.899698 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 13 00:23:25.899710 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 13 00:23:25.899720 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:23:25.899729 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:23:25.899737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:23:25.899745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 00:23:25.899755 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 13 00:23:25.899764 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 13 00:23:25.899775 kernel: Zone ranges:
Sep 13 00:23:25.899783 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:23:25.899791 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 13 00:23:25.899800 kernel: Normal empty
Sep 13 00:23:25.899808 kernel: Movable zone start for each node
Sep 13 00:23:25.899816 kernel: Early memory node ranges
Sep 13 00:23:25.899824 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:23:25.899832 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 13 00:23:25.899840 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 13 00:23:25.899851 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:23:25.899859 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:23:25.899874 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 13 00:23:25.899887 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:23:25.899898 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:23:25.899910 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:23:25.899922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:23:25.899933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:23:25.899944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:23:25.899960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:23:25.900367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:23:25.900379 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:23:25.900387 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:23:25.900396 kernel: TSC deadline timer available
Sep 13 00:23:25.900404 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:23:25.900412 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:23:25.900420 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 13 00:23:25.900443 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:23:25.900452 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:23:25.900465 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:23:25.900473 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:23:25.900481 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:23:25.900490 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:23:25.900498 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:23:25.900508 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:23:25.900517 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:23:25.900525 kernel: random: crng init done
Sep 13 00:23:25.900536 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:23:25.900545 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:23:25.900553 kernel: Fallback order for Node 0: 0
Sep 13 00:23:25.900561 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 13 00:23:25.900569 kernel: Policy zone: DMA32
Sep 13 00:23:25.900577 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:23:25.900586 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 13 00:23:25.900594 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:23:25.900606 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:23:25.900615 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:23:25.900623 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:23:25.900631 kernel: Dynamic Preempt: voluntary
Sep 13 00:23:25.900639 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:23:25.900648 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:23:25.900656 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:23:25.900665 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:23:25.900673 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:23:25.900681 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:23:25.900692 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:23:25.900700 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:23:25.900708 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:23:25.900717 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
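The "usable" e820 ranges at the top of this boot account for the ~2 GiB the droplet was provisioned with, and they are what the "Memory: 1971204K/2096612K available" line above derives from. A minimal Python sketch to check the arithmetic (the two ranges are copied from the e820 map; everything else is illustrative):

```python
# A minimal sketch (not part of the log): sum the "usable" e820 ranges
# printed above to see where the ~2 GiB total comes from.
E820_USABLE = [
    (0x0000000000000000, 0x000000000009fbff),  # low memory below 640K
    (0x0000000000100000, 0x000000007ffdafff),  # main RAM above 1 MiB
]

usable = sum(end - start + 1 for start, end in E820_USABLE)
print(f"{usable // 1024}K usable (~{usable / 2**30:.2f} GiB)")
# -> 2096619K, in line with the "Memory: 1971204K/2096612K available"
#    figure above (the kernel has already reserved a few pages itself).
```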
Sep 13 00:23:25.900728 kernel: Console: colour VGA+ 80x25
Sep 13 00:23:25.900737 kernel: printk: console [tty0] enabled
Sep 13 00:23:25.900745 kernel: printk: console [ttyS0] enabled
Sep 13 00:23:25.900753 kernel: ACPI: Core revision 20230628
Sep 13 00:23:25.900761 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:23:25.900772 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:23:25.900780 kernel: x2apic enabled
Sep 13 00:23:25.900789 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:23:25.900797 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:23:25.900805 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 13 00:23:25.900813 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 13 00:23:25.900821 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:23:25.900830 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:23:25.900849 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:23:25.900858 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:23:25.900867 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:23:25.900878 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 00:23:25.900887 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:23:25.900899 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:23:25.900915 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:23:25.900926 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:23:25.900939 kernel: active return thunk: its_return_thunk
Sep 13 00:23:25.900959 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:23:25.900972 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:23:25.902453 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:23:25.902479 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:23:25.902489 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:23:25.902499 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:23:25.902508 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:23:25.902517 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:23:25.902532 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:23:25.902540 kernel: landlock: Up and running.
Sep 13 00:23:25.902549 kernel: SELinux: Initializing.
Sep 13 00:23:25.902558 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:23:25.902567 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:23:25.902577 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 13 00:23:25.902586 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:23:25.902595 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:23:25.902603 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
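The Spectre/MDS/MMIO/ITS lines above are also exposed at runtime, so the mitigation state logged at boot can be read back later from sysfs. A small sketch using the kernel's standard vulnerabilities interface (the printed strings mirror the boot log):

```python
# Sketch: read back the CPU mitigation state logged above from sysfs.
# /sys/devices/system/cpu/vulnerabilities is a standard kernel interface.
from pathlib import Path

VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULNS.iterdir()):
    # e.g. "spectre_v2: Mitigation: Retpolines", matching the boot log
    print(f"{entry.name}: {entry.read_text().strip()}")
```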
Sep 13 00:23:25.902615 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 13 00:23:25.902624 kernel: signal: max sigframe size: 1776
Sep 13 00:23:25.902633 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:23:25.902643 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:23:25.902652 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:23:25.902668 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:23:25.902680 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:23:25.902692 kernel: .... node #0, CPUs: #1
Sep 13 00:23:25.902711 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:23:25.902728 kernel: smpboot: Max logical packages: 1
Sep 13 00:23:25.902741 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 13 00:23:25.902753 kernel: devtmpfs: initialized
Sep 13 00:23:25.902768 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:23:25.902777 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:23:25.902786 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:23:25.902795 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:23:25.902804 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:23:25.902813 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:23:25.902825 kernel: audit: type=2000 audit(1757723004.486:1): state=initialized audit_enabled=0 res=1
Sep 13 00:23:25.902834 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:23:25.902842 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:23:25.902851 kernel: cpuidle: using governor menu
Sep 13 00:23:25.902860 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:23:25.902868 kernel: dca service started, version 1.12.1
Sep 13 00:23:25.902877 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:23:25.902886 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:23:25.902895 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:23:25.902907 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:23:25.902915 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:23:25.902924 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:23:25.902933 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:23:25.902942 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:23:25.902950 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:23:25.902959 kernel: ACPI: Interpreter enabled
Sep 13 00:23:25.902970 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:23:25.902986 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:23:25.903002 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:23:25.903014 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:23:25.903027 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:23:25.903040 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:23:25.903270 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:23:25.903382 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:23:25.904614 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:23:25.904641 kernel: acpiphp: Slot [3] registered
Sep 13 00:23:25.904651 kernel: acpiphp: Slot [4] registered
Sep 13 00:23:25.904660 kernel: acpiphp: Slot [5] registered
Sep 13 00:23:25.904669 kernel: acpiphp: Slot [6] registered
Sep 13 00:23:25.904678 kernel: acpiphp: Slot [7] registered
Sep 13 00:23:25.904687 kernel: acpiphp: Slot [8] registered
Sep 13 00:23:25.904695 kernel: acpiphp: Slot [9] registered
Sep 13 00:23:25.904704 kernel: acpiphp: Slot [10] registered
Sep 13 00:23:25.904713 kernel: acpiphp: Slot [11] registered
Sep 13 00:23:25.904722 kernel: acpiphp: Slot [12] registered
Sep 13 00:23:25.904734 kernel: acpiphp: Slot [13] registered
Sep 13 00:23:25.904742 kernel: acpiphp: Slot [14] registered
Sep 13 00:23:25.904751 kernel: acpiphp: Slot [15] registered
Sep 13 00:23:25.904760 kernel: acpiphp: Slot [16] registered
Sep 13 00:23:25.904768 kernel: acpiphp: Slot [17] registered
Sep 13 00:23:25.904777 kernel: acpiphp: Slot [18] registered
Sep 13 00:23:25.904786 kernel: acpiphp: Slot [19] registered
Sep 13 00:23:25.904794 kernel: acpiphp: Slot [20] registered
Sep 13 00:23:25.904803 kernel: acpiphp: Slot [21] registered
Sep 13 00:23:25.904815 kernel: acpiphp: Slot [22] registered
Sep 13 00:23:25.904823 kernel: acpiphp: Slot [23] registered
Sep 13 00:23:25.904832 kernel: acpiphp: Slot [24] registered
Sep 13 00:23:25.904840 kernel: acpiphp: Slot [25] registered
Sep 13 00:23:25.904849 kernel: acpiphp: Slot [26] registered
Sep 13 00:23:25.904858 kernel: acpiphp: Slot [27] registered
Sep 13 00:23:25.904866 kernel: acpiphp: Slot [28] registered
Sep 13 00:23:25.904875 kernel: acpiphp: Slot [29] registered
Sep 13 00:23:25.904883 kernel: acpiphp: Slot [30] registered
Sep 13 00:23:25.904892 kernel: acpiphp: Slot [31] registered
Sep 13 00:23:25.904903 kernel: PCI host bridge to bus 0000:00
Sep 13 00:23:25.905019 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:23:25.905110 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:23:25.905198 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:23:25.905284 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:23:25.905478 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 13 00:23:25.905570 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:23:25.905714 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:23:25.905898 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:23:25.906016 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 13 00:23:25.906135 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 13 00:23:25.906283 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 13 00:23:25.906383 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 13 00:23:25.908621 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 13 00:23:25.908739 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 13 00:23:25.908863 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 13 00:23:25.908998 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 13 00:23:25.909112 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:23:25.909210 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 13 00:23:25.909328 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 13 00:23:25.910520 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:23:25.910647 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 13 00:23:25.910754 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 13 00:23:25.910862 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 13 00:23:25.910968 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 13 00:23:25.911070 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:23:25.911195 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:23:25.911292 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 13 00:23:25.911387 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 13 00:23:25.912558 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 13 00:23:25.912690 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:23:25.912789 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 13 00:23:25.912883 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 13 00:23:25.912983 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 13 00:23:25.913093 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 13 00:23:25.913249 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 13 00:23:25.913398 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 13 00:23:25.913527 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 13 00:23:25.913658 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:23:25.913763 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:23:25.913870 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 13 00:23:25.913976 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 13 00:23:25.914090 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:23:25.914189 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 13 00:23:25.914284 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 13 00:23:25.914377 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 13 00:23:25.917020 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 13 00:23:25.917144 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 13 00:23:25.917244 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 13 00:23:25.917256 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:23:25.917266 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:23:25.917275 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:23:25.917284 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:23:25.917293 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:23:25.917307 kernel: iommu: Default domain type: Translated
Sep 13 00:23:25.917363 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:23:25.917377 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:23:25.917390 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:23:25.917400 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:23:25.917409 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 13 00:23:25.918638 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 13 00:23:25.918750 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 13 00:23:25.918854 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:23:25.918867 kernel: vgaarb: loaded
Sep 13 00:23:25.918877 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:23:25.918886 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:23:25.918895 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:23:25.918904 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:23:25.918914 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:23:25.918923 kernel: pnp: PnP ACPI init
Sep 13 00:23:25.918932 kernel: pnp: PnP ACPI: found 4 devices
Sep 13 00:23:25.918944 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:23:25.918953 kernel: NET: Registered PF_INET protocol family
Sep 13 00:23:25.918962 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:23:25.918971 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:23:25.918980 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:23:25.918990 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:23:25.918998 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:23:25.919007 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:23:25.919016 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:23:25.919028 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:23:25.919037 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:23:25.919046 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:23:25.919140 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:23:25.919225 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:23:25.919309 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:23:25.919392 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:23:25.920588 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 13 00:23:25.920715 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 13 00:23:25.920821 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:23:25.920835 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:23:25.920933 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 29349 usecs
Sep 13 00:23:25.920945 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:23:25.920955 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:23:25.920964 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 13 00:23:25.920973 kernel: Initialise system trusted keyrings
Sep 13 00:23:25.920982 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:23:25.920995 kernel: Key type asymmetric registered
Sep 13 00:23:25.921004 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:23:25.921012 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:23:25.921021 kernel: io scheduler mq-deadline registered
Sep 13 00:23:25.921030 kernel: io scheduler kyber registered
Sep 13 00:23:25.921039 kernel: io scheduler bfq registered
Sep 13 00:23:25.921048 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:23:25.921057 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 13 00:23:25.921065 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:23:25.921077 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:23:25.921086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:23:25.921094 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:23:25.921103 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:23:25.921112 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:23:25.921120 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:23:25.921237 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 13 00:23:25.921251 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:23:25.921366 kernel: rtc_cmos 00:03: registered as rtc0
Sep 13 00:23:25.922523 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:23:25 UTC (1757723005)
Sep 13 00:23:25.922628 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 13 00:23:25.922641 kernel: intel_pstate: CPU model not supported
Sep 13 00:23:25.922651 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:23:25.922660 kernel: Segment Routing with IPv6
Sep 13 00:23:25.922669 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:23:25.922678 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:23:25.922692 kernel: Key type dns_resolver registered
Sep 13 00:23:25.922701 kernel: IPI shorthand broadcast: enabled
Sep 13 00:23:25.922710 kernel: sched_clock: Marking stable (877004523, 90117523)->(1059465945, -92343899)
Sep 13 00:23:25.922719 kernel: registered taskstats version 1
Sep 13 00:23:25.922728 kernel: Loading compiled-in X.509 certificates
Sep 13 00:23:25.922737 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:23:25.922746 kernel: Key type .fscrypt registered
Sep 13 00:23:25.922755 kernel: Key type fscrypt-provisioning registered
Sep 13 00:23:25.922764 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:23:25.922776 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:23:25.922785 kernel: ima: No architecture policies found
Sep 13 00:23:25.922794 kernel: clk: Disabling unused clocks
Sep 13 00:23:25.922803 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:23:25.922812 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:23:25.922839 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:23:25.922852 kernel: Run /init as init process
Sep 13 00:23:25.922861 kernel: with arguments:
Sep 13 00:23:25.922871 kernel: /init
Sep 13 00:23:25.922882 kernel: with environment:
Sep 13 00:23:25.922892 kernel: HOME=/
Sep 13 00:23:25.922901 kernel: TERM=linux
Sep 13 00:23:25.922910 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:23:25.922923 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:23:25.922935 systemd[1]: Detected virtualization kvm.
Sep 13 00:23:25.922950 systemd[1]: Detected architecture x86-64.
Sep 13 00:23:25.922970 systemd[1]: Running in initrd.
Sep 13 00:23:25.922986 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:23:25.922999 systemd[1]: Hostname set to .
Sep 13 00:23:25.923012 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:23:25.923025 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:23:25.923039 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:25.923054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:25.923070 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:23:25.923084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:23:25.923098 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:23:25.923108 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:23:25.923119 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:23:25.923129 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:23:25.923139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:25.923149 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:25.923159 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:23:25.923172 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:23:25.923182 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:23:25.923195 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:23:25.923206 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:23:25.923216 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:23:25.923229 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:23:25.923239 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:23:25.923256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:23:25.923270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:23:25.923285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:23:25.923299 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:23:25.923316 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:23:25.923328 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:23:25.923338 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:23:25.923352 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:23:25.923361 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:23:25.923372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:23:25.923381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:25.923391 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:23:25.925854 systemd-journald[183]: Collecting audit messages is disabled.
Sep 13 00:23:25.925909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:23:25.925920 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:23:25.925931 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:23:25.925947 systemd-journald[183]: Journal started
Sep 13 00:23:25.925977 systemd-journald[183]: Runtime Journal (/run/log/journal/2537ae3d82984d889ab48acaa11c3515) is 4.9M, max 39.3M, 34.4M free.
Sep 13 00:23:25.927776 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:23:25.931243 systemd-modules-load[184]: Inserted module 'overlay'
Sep 13 00:23:25.961094 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:23:25.931640 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:23:25.963075 kernel: Bridge firewalling registered
Sep 13 00:23:25.962074 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 13 00:23:25.966792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:23:25.967604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:25.976771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:23:25.979660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:23:25.981624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:23:25.986737 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:23:26.015316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:23:26.018192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:23:26.021511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:23:26.029783 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:23:26.031404 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:23:26.036703 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:23:26.047542 dracut-cmdline[218]: dracut-dracut-053
Sep 13 00:23:26.051720 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:23:26.084153 systemd-resolved[224]: Positive Trust Anchors:
Sep 13 00:23:26.084858 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:23:26.084899 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:23:26.090731 systemd-resolved[224]: Defaulting to hostname 'linux'.
Sep 13 00:23:26.092576 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:23:26.093014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:23:26.153492 kernel: SCSI subsystem initialized
Sep 13 00:23:26.163487 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:23:26.175545 kernel: iscsi: registered transport (tcp)
Sep 13 00:23:26.197550 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:23:26.197634 kernel: QLogic iSCSI HBA Driver
Sep 13 00:23:26.252043 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:23:26.258715 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:23:26.294618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:23:26.294718 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:23:26.296035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:23:26.342506 kernel: raid6: avx2x4 gen() 17690 MB/s
Sep 13 00:23:26.359487 kernel: raid6: avx2x2 gen() 15635 MB/s
Sep 13 00:23:26.377011 kernel: raid6: avx2x1 gen() 11507 MB/s
Sep 13 00:23:26.377106 kernel: raid6: using algorithm avx2x4 gen() 17690 MB/s
Sep 13 00:23:26.394865 kernel: raid6: .... xor() 6249 MB/s, rmw enabled
Sep 13 00:23:26.394944 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:23:26.417472 kernel: xor: automatically using best checksumming function avx
Sep 13 00:23:26.581656 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:23:26.596689 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:23:26.604760 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:23:26.629880 systemd-udevd[403]: Using default interface naming scheme 'v255'.
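dracut echoes the same kernel command line the kernel logged earlier; parameters like root=LABEL=ROOT and verity.usrhash drive the rest of this boot. A small sketch of how such a command line can be split into key/value pairs (reading the standard /proc/cmdline; the code is illustrative, not part of the boot):

```python
# Sketch: split a kernel command line like the one above into a dict.
# Bare flags map to True; a repeated key (rootflags=rw appears twice
# on this command line) simply keeps its last value.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

with open("/proc/cmdline") as f:
    params = parse_cmdline(f.read())

print(params.get("root"))            # LABEL=ROOT
print(params.get("verity.usrhash"))  # dm-verity root hash for /usr
```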
Sep 13 00:23:26.636007 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:23:26.646382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:23:26.673025 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Sep 13 00:23:26.719551 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:23:26.725803 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:23:26.799861 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:23:26.807717 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:23:26.826510 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:23:26.827717 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:23:26.828804 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:23:26.829684 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:23:26.834620 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:23:26.863465 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:23:26.889461 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 13 00:23:26.894490 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:23:26.901412 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:23:26.903480 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 13 00:23:26.920116 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:23:26.920181 kernel: GPT:9289727 != 125829119
Sep 13 00:23:26.920194 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:23:26.920208 kernel: GPT:9289727 != 125829119
Sep 13 00:23:26.920231 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:23:26.920244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:23:26.920214 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:23:26.920339 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:23:26.923599 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:23:26.924186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:23:26.924474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:26.926288 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:26.935921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:26.947956 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 13 00:23:26.948216 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Sep 13 00:23:26.993465 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:23:26.993529 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:23:27.000937 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
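The GPT complaints above are simple arithmetic: the backup ("alternate") header recorded at LBA 9289727 belongs in the disk's last LBA, but the image was written for a smaller disk than the 125829120-sector volume it landed on; the disk-uuid step below rewrites the headers. A sketch of the check the kernel performs (numbers copied from the log):

```python
# Sketch of the check behind "GPT:9289727 != 125829119": the backup GPT
# header must sit in the disk's last LBA. Numbers are copied from the log.
disk_sectors = 125829120      # [vda] 125829120 512-byte logical blocks
alt_header_lba = 9289727      # where this image's backup header actually is

last_lba = disk_sectors - 1   # 125829119
if alt_header_lba != last_lba:
    print(f"GPT:{alt_header_lba} != {last_lba}")  # the kernel's complaint
```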
Sep 13 00:23:27.036058 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Sep 13 00:23:27.036086 kernel: ACPI: bus type USB registered
Sep 13 00:23:27.036099 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:23:27.036111 kernel: usbcore: registered new interface driver hub
Sep 13 00:23:27.036122 kernel: usbcore: registered new device driver usb
Sep 13 00:23:27.036133 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465)
Sep 13 00:23:27.040030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:27.048650 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:23:27.056838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:23:27.067974 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:23:27.078469 kernel: libata version 3.00 loaded.
Sep 13 00:23:27.083865 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 13 00:23:27.089457 kernel: scsi host1: ata_piix
Sep 13 00:23:27.091710 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:23:27.098210 kernel: scsi host2: ata_piix
Sep 13 00:23:27.098767 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 13 00:23:27.098791 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 13 00:23:27.092112 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:23:27.101616 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:23:27.106467 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 13 00:23:27.108492 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 13 00:23:27.108792 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 13 00:23:27.111720 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 13 00:23:27.111993 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:23:27.112196 kernel: hub 1-0:1.0: 2 ports detected
Sep 13 00:23:27.114334 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:23:27.115741 disk-uuid[550]: Primary Header is updated.
Sep 13 00:23:27.115741 disk-uuid[550]: Secondary Entries is updated.
Sep 13 00:23:27.115741 disk-uuid[550]: Secondary Header is updated.
Sep 13 00:23:27.120473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:23:27.125651 kernel: GPT:disk_guids don't match.
Sep 13 00:23:27.125713 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:23:27.125739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:23:27.131478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:23:28.130500 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:23:28.130978 disk-uuid[552]: The operation has completed successfully.
Sep 13 00:23:28.188310 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:23:28.188475 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:23:28.220695 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:23:28.227455 sh[566]: Success
Sep 13 00:23:28.245492 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:23:28.315691 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:23:28.319572 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:23:28.320509 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:23:28.345578 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:23:28.345665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:23:28.345688 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:23:28.346513 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:23:28.347474 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:23:28.356234 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:23:28.357347 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:23:28.366700 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:23:28.370651 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:23:28.382004 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:23:28.382068 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:23:28.382082 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:23:28.386483 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:23:28.400827 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:23:28.400535 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:23:28.408241 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:23:28.414743 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:23:28.504785 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:23:28.512712 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:23:28.539762 systemd-networkd[750]: lo: Link UP
Sep 13 00:23:28.540547 systemd-networkd[750]: lo: Gained carrier
Sep 13 00:23:28.544063 systemd-networkd[750]: Enumeration completed
Sep 13 00:23:28.544468 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:23:28.545390 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 13 00:23:28.545394 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 13 00:23:28.545616 systemd[1]: Reached target network.target - Network.
Sep 13 00:23:28.549576 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:23:28.549581 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:23:28.550383 systemd-networkd[750]: eth0: Link UP
Sep 13 00:23:28.550390 systemd-networkd[750]: eth0: Gained carrier
Sep 13 00:23:28.550403 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 13 00:23:28.556828 systemd-networkd[750]: eth1: Link UP
Sep 13 00:23:28.556836 systemd-networkd[750]: eth1: Gained carrier
Sep 13 00:23:28.556848 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:23:28.564629 ignition[651]: Ignition 2.19.0
Sep 13 00:23:28.564640 ignition[651]: Stage: fetch-offline
Sep 13 00:23:28.567166 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:23:28.564679 ignition[651]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:28.564688 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:23:28.564793 ignition[651]: parsed url from cmdline: ""
Sep 13 00:23:28.564796 ignition[651]: no config URL provided
Sep 13 00:23:28.564802 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:23:28.570562 systemd-networkd[750]: eth0: DHCPv4 address 143.198.134.88/20, gateway 143.198.128.1 acquired from 169.254.169.253
Sep 13 00:23:28.564810 ignition[651]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:23:28.564816 ignition[651]: failed to fetch config: resource requires networking
Sep 13 00:23:28.565079 ignition[651]: Ignition finished successfully
Sep 13 00:23:28.573565 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253
Sep 13 00:23:28.575677 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:23:28.595185 ignition[757]: Ignition 2.19.0
Sep 13 00:23:28.595197 ignition[757]: Stage: fetch
Sep 13 00:23:28.595710 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:28.595723 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:23:28.595853 ignition[757]: parsed url from cmdline: ""
Sep 13 00:23:28.595860 ignition[757]: no config URL provided
Sep 13 00:23:28.595869 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:23:28.595881 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:23:28.595910 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 13 00:23:28.626518 ignition[757]: GET result: OK
Sep 13 00:23:28.627064 ignition[757]: parsing config with SHA512: 340444ca5de76072cf1af577a7fed17b0911cb76a553d605d9d8533a6d7a30375a80be6d2e342bc54bd0b0a63075b7a99409e5e55dd67e45b4b3e9e9a802545d
Sep 13 00:23:28.632413 unknown[757]: fetched base config from "system"
Sep 13 00:23:28.632430 unknown[757]: fetched base config from "system"
Sep 13 00:23:28.632452 unknown[757]: fetched user config from "digitalocean"
Sep 13 00:23:28.633068 ignition[757]: fetch: fetch complete
Sep 13 00:23:28.633076 ignition[757]: fetch: fetch passed
Sep 13 00:23:28.633145 ignition[757]: Ignition finished successfully
Sep 13 00:23:28.636347 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:23:28.642658 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
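Ignition's fetch stage above does a plain HTTP GET against the droplet's link-local metadata service and identifies the payload it parsed by its SHA-512. A sketch of the equivalent fetch (endpoint copied from the log; this is illustrative and assumes the droplet actually has user-data, otherwise the GET fails):

```python
# Sketch of the fetch logged above: GET the DigitalOcean user-data endpoint
# and hash the payload with SHA-512, as in "parsing config with SHA512: ...".
import hashlib
import urllib.request

URL = "http://169.254.169.254/metadata/v1/user-data"

with urllib.request.urlopen(URL, timeout=5) as resp:
    payload = resp.read()

print(hashlib.sha512(payload).hexdigest())
```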
Sep 13 00:23:28.663970 ignition[765]: Ignition 2.19.0
Sep 13 00:23:28.663989 ignition[765]: Stage: kargs
Sep 13 00:23:28.664310 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:28.664330 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:23:28.666106 ignition[765]: kargs: kargs passed
Sep 13 00:23:28.666191 ignition[765]: Ignition finished successfully
Sep 13 00:23:28.667766 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:23:28.674643 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:23:28.689523 ignition[771]: Ignition 2.19.0
Sep 13 00:23:28.689543 ignition[771]: Stage: disks
Sep 13 00:23:28.689737 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:28.689748 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:23:28.695807 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:23:28.694364 ignition[771]: disks: disks passed
Sep 13 00:23:28.697280 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:23:28.694453 ignition[771]: Ignition finished successfully
Sep 13 00:23:28.697725 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:23:28.698321 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:23:28.698995 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:23:28.699567 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:23:28.714825 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:23:28.730072 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:23:28.733930 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:23:28.737586 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:23:28.848785 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:23:28.849517 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:23:28.850412 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:23:28.855594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:23:28.866734 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:23:28.871231 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep 13 00:23:28.873637 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:23:28.876540 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:23:28.885561 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (787)
Sep 13 00:23:28.885589 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:23:28.885602 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:23:28.885614 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:23:28.877587 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:23:28.884746 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:23:28.893639 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:23:28.900552 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:23:28.900966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:23:28.962625 coreos-metadata[789]: Sep 13 00:23:28.960 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:23:28.967552 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:23:28.975566 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:23:28.976821 coreos-metadata[790]: Sep 13 00:23:28.976 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:23:28.979145 coreos-metadata[789]: Sep 13 00:23:28.978 INFO Fetch successful
Sep 13 00:23:28.983384 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:23:28.987312 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 13 00:23:28.987500 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Sep 13 00:23:28.990076 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:23:28.992802 coreos-metadata[790]: Sep 13 00:23:28.992 INFO Fetch successful
Sep 13 00:23:28.997459 coreos-metadata[790]: Sep 13 00:23:28.997 INFO wrote hostname ci-4081.3.5-n-9b8e9ee716 to /sysroot/etc/hostname
Sep 13 00:23:28.998354 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:23:29.099503 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:23:29.104592 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:23:29.111801 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:23:29.123512 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:23:29.145226 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:23:29.159457 ignition[910]: INFO : Ignition 2.19.0
Sep 13 00:23:29.159457 ignition[910]: INFO : Stage: mount
Sep 13 00:23:29.159457 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:29.159457 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:23:29.161350 ignition[910]: INFO : mount: mount passed
Sep 13 00:23:29.161350 ignition[910]: INFO : Ignition finished successfully
Sep 13 00:23:29.162702 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:23:29.169786 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:23:29.344778 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:23:29.351719 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:23:29.362490 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (920)
Sep 13 00:23:29.364873 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:23:29.364943 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:23:29.364962 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:23:29.368469 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:23:29.370348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
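The two coreos-metadata processes above each fetch the DigitalOcean link-local metadata endpoint and one of them writes the droplet hostname into the new root. A sketch of that flow under stated assumptions (the URL and the written path are taken from the log; the "hostname" key is a documented field of DigitalOcean's metadata v1 JSON; this is not the coreos-metadata implementation):

```python
# Illustrative only: mimic "Fetching http://169.254.169.254/metadata/v1.json"
# followed by "wrote hostname ... to /sysroot/etc/hostname" from the log.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # from the log above

def write_hostname(sysroot: str = "/sysroot") -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        meta = json.load(resp)
    hostname = meta["hostname"]  # documented DigitalOcean metadata field
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname
```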
Sep 13 00:23:29.393378 ignition[937]: INFO : Ignition 2.19.0
Sep 13 00:23:29.393378 ignition[937]: INFO : Stage: files
Sep 13 00:23:29.395001 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:29.395001 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:23:29.395001 ignition[937]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:23:29.396965 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:23:29.396965 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:23:29.400189 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:23:29.400843 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:23:29.401761 unknown[937]: wrote ssh authorized keys file for user: core
Sep 13 00:23:29.402574 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:23:29.403336 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:23:29.404285 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 13 00:23:29.570728 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:23:29.959832 systemd-networkd[750]: eth0: Gained IPv6LL
Sep 13 00:23:30.338880 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 13 00:23:30.338880 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:23:30.340814 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 13 00:23:30.471683 systemd-networkd[750]: eth1: Gained IPv6LL
Sep 13 00:23:30.763111 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:23:31.290481 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 13 00:23:31.290481 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:23:31.292008 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:23:31.292645 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:23:31.292645 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:23:31.292645 ignition[937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:23:31.292645 ignition[937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:23:31.292645 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:23:31.295863 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:23:31.295863 ignition[937]: INFO : files: files passed
Sep 13 00:23:31.295863 ignition[937]: INFO : Ignition finished successfully
Sep 13 00:23:31.295427 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:23:31.300719 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:23:31.303326 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:23:31.307418 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:23:31.308043 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:23:31.334597 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:23:31.335666 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:23:31.337053 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:23:31.337474 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:23:31.338621 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:23:31.345664 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
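The op(1)..op(e) sequence above is driven by a user-supplied Ignition config. A hypothetical, heavily trimmed reconstruction of the kind of config that would produce these operations (field names follow the public Ignition v3 spec; the actual config this droplet booted with is not shown in the log, and the SSH key and unit contents below are placeholders):

```python
# Hypothetical reconstruction, emitted as JSON; not the droplet's real config.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [
        {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
    ]},
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
        }],
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
        }],
    },
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True, "contents": "...placeholder unit..."},
    ]},
}
print(json.dumps(config, indent=2))
```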
Sep 13 00:23:31.378779 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:23:31.378894 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:23:31.379861 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:23:31.380408 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:23:31.381162 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:23:31.393769 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:23:31.409697 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:23:31.416692 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:23:31.431926 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:23:31.432699 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:23:31.433823 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:23:31.434759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:23:31.434958 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:23:31.435958 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:23:31.436573 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:23:31.437483 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:23:31.438286 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:23:31.439138 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:23:31.440220 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:23:31.441300 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:23:31.442387 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:23:31.443324 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:23:31.444281 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:23:31.445198 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:23:31.445392 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:23:31.446578 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:31.447318 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:31.447991 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:23:31.448149 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:31.448877 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:23:31.449124 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:23:31.450196 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:23:31.450451 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:23:31.451488 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:23:31.451720 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:23:31.452679 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:23:31.452838 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:23:31.460732 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:23:31.462620 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:23:31.463647 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:23:31.464307 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:23:31.465491 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:23:31.466090 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:23:31.473083 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:23:31.473913 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:23:31.494129 ignition[990]: INFO : Ignition 2.19.0
Sep 13 00:23:31.496426 ignition[990]: INFO : Stage: umount
Sep 13 00:23:31.496426 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:31.496426 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:23:31.497989 ignition[990]: INFO : umount: umount passed
Sep 13 00:23:31.497989 ignition[990]: INFO : Ignition finished successfully
Sep 13 00:23:31.499642 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:23:31.499842 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:23:31.505892 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:23:31.506009 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:23:31.506501 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:23:31.506550 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:23:31.506893 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:23:31.506930 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:23:31.507346 systemd[1]: Stopped target network.target - Network.
Sep 13 00:23:31.507812 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:23:31.507887 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:23:31.508590 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:23:31.509263 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:23:31.512632 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:31.513251 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:23:31.516583 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:23:31.517251 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:23:31.517341 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:23:31.517960 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:23:31.518017 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:23:31.519916 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:23:31.519998 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:23:31.520497 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:23:31.520565 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:23:31.522532 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:23:31.523262 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:23:31.526736 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:23:31.527919 systemd-networkd[750]: eth0: DHCPv6 lease lost
Sep 13 00:23:31.534220 systemd-networkd[750]: eth1: DHCPv6 lease lost
Sep 13 00:23:31.535041 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:23:31.535903 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:23:31.539512 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:23:31.540295 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:23:31.542697 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:23:31.543449 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:23:31.548672 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:23:31.549072 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:23:31.549206 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:23:31.549898 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:23:31.549963 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:23:31.550384 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:23:31.555094 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:23:31.555604 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:23:31.555698 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:23:31.556772 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:23:31.561301 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:23:31.562373 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:23:31.564396 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:23:31.564537 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:23:31.576062 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:23:31.576249 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:23:31.579262 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:23:31.579817 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:23:31.581885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:23:31.581951 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:23:31.583174 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:23:31.583215 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:23:31.584249 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:23:31.584307 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:23:31.585181 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:23:31.585235 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:23:31.585915 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:23:31.585963 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:23:31.597811 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:23:31.598285 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:23:31.598378 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:23:31.598846 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:23:31.598895 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:23:31.599278 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:23:31.599333 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:23:31.601659 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:23:31.601716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:31.604872 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:23:31.604995 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:23:31.606377 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:23:31.611770 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:23:31.626686 systemd[1]: Switching root.
Sep 13 00:23:31.668850 systemd-journald[183]: Journal stopped
Sep 13 00:23:32.690054 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:23:32.690162 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:23:32.690185 kernel: SELinux: policy capability open_perms=1
Sep 13 00:23:32.690202 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:23:32.690218 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:23:32.690240 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:23:32.690257 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:23:32.690281 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:23:32.690305 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:23:32.690327 kernel: audit: type=1403 audit(1757723011.810:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:23:32.690348 systemd[1]: Successfully loaded SELinux policy in 39.801ms.
Sep 13 00:23:32.690380 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.293ms.
Sep 13 00:23:32.690402 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:23:32.690420 systemd[1]: Detected virtualization kvm.
Sep 13 00:23:32.690453 systemd[1]: Detected architecture x86-64.
Sep 13 00:23:32.690471 systemd[1]: Detected first boot.
Sep 13 00:23:32.690489 systemd[1]: Hostname set to <ci-4081.3.5-n-9b8e9ee716>.
Sep 13 00:23:32.690508 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:23:32.690534 zram_generator::config[1033]: No configuration found.
Sep 13 00:23:32.690554 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:23:32.690572 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:23:32.690591 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:23:32.690610 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:23:32.690624 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:23:32.690638 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:23:32.690652 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:23:32.690668 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:23:32.690681 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:23:32.690694 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:23:32.690706 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:23:32.690719 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:23:32.690731 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:32.690743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:32.690760 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:23:32.690772 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:23:32.690788 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:23:32.690802 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:23:32.690815 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:23:32.690827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:32.690842 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:23:32.690861 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:23:32.690885 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:23:32.690903 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:23:32.690920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:23:32.690937 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:23:32.690956 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:23:32.690975 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:23:32.690994 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:23:32.691010 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:23:32.691022 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:23:32.691039 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:23:32.691057 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:23:32.691070 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:23:32.691083 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:23:32.691097 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:23:32.691111 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:23:32.691124 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:32.691138 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:23:32.691151 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:23:32.691167 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:23:32.691180 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:23:32.691192 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:23:32.691205 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:23:32.691217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:23:32.691229 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:23:32.691242 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:23:32.691254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:23:32.691270 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:23:32.691283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:23:32.691295 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:23:32.691312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:23:32.691325 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:23:32.691338 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:23:32.691357 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:23:32.691375 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:23:32.691398 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:23:32.691411 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:23:32.691424 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:23:32.695503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:23:32.695570 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:23:32.695584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:23:32.695598 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:23:32.695611 systemd[1]: Stopped verity-setup.service.
Sep 13 00:23:32.695624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:32.695690 systemd-journald[1102]: Collecting audit messages is disabled.
Sep 13 00:23:32.695722 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:23:32.695736 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:23:32.695752 systemd-journald[1102]: Journal started
Sep 13 00:23:32.695777 systemd-journald[1102]: Runtime Journal (/run/log/journal/2537ae3d82984d889ab48acaa11c3515) is 4.9M, max 39.3M, 34.4M free.
Sep 13 00:23:32.468711 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:23:32.487253 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:23:32.487751 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:23:32.698452 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:23:32.700305 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:23:32.701933 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:23:32.702378 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:23:32.702900 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:23:32.705031 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:23:32.705744 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:23:32.705894 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:23:32.707696 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:23:32.724629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:23:32.724897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:23:32.725844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:23:32.725998 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:23:32.732577 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:23:32.742682 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:23:32.743126 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:23:32.743166 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:23:32.745757 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:23:32.750504 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:23:32.760811 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:23:32.761391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:32.765670 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:23:32.785834 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:23:32.786260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:23:32.790033 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:23:32.798686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:23:32.819351 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:23:32.826573 systemd-journald[1102]: Time spent on flushing to /var/log/journal/2537ae3d82984d889ab48acaa11c3515 is 95.967ms for 975 entries.
Sep 13 00:23:32.826573 systemd-journald[1102]: System Journal (/var/log/journal/2537ae3d82984d889ab48acaa11c3515) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:23:32.973680 systemd-journald[1102]: Received client request to flush runtime journal.
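A quick worked check of the journald flush figures reported just above (both numbers are taken directly from the log):

```python
# Per-entry cost of flushing the runtime journal to /var/log/journal,
# from "95.967ms for 975 entries" in the journald message above.
flush_ms, entries = 95.967, 975
per_entry_us = flush_ms / entries * 1000
print(f"{per_entry_us:.1f} us per entry")  # ~98.4 microseconds per flushed entry
```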
Sep 13 00:23:32.973828 kernel: ACPI: bus type drm_connector registered
Sep 13 00:23:32.973869 kernel: fuse: init (API version 7.39)
Sep 13 00:23:32.973900 kernel: loop: module loaded
Sep 13 00:23:32.973940 kernel: loop0: detected capacity change from 0 to 8
Sep 13 00:23:32.973963 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:23:32.825030 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:23:32.830688 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:23:32.834329 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:23:32.841936 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:23:32.844520 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:23:32.881583 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:23:32.881801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:23:32.899945 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:23:32.900231 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:23:32.911569 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:23:32.912769 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:23:32.914525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:23:32.919950 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:23:32.926077 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:23:32.927937 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:23:32.942718 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:23:32.946289 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:23:32.978807 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:23:33.005619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:23:33.015468 kernel: loop1: detected capacity change from 0 to 140768
Sep 13 00:23:33.013530 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:23:33.025014 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:23:33.038245 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:23:33.041514 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:23:33.060475 kernel: loop2: detected capacity change from 0 to 142488
Sep 13 00:23:33.066971 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:23:33.102716 systemd-tmpfiles[1134]: ACLs are not supported, ignoring.
Sep 13 00:23:33.104517 systemd-tmpfiles[1134]: ACLs are not supported, ignoring.
Sep 13 00:23:33.108726 kernel: loop3: detected capacity change from 0 to 224512
Sep 13 00:23:33.138502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:23:33.144467 kernel: loop4: detected capacity change from 0 to 8
Sep 13 00:23:33.146690 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:23:33.151476 kernel: loop5: detected capacity change from 0 to 140768
Sep 13 00:23:33.150334 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:23:33.175456 kernel: loop6: detected capacity change from 0 to 142488
Sep 13 00:23:33.192469 kernel: loop7: detected capacity change from 0 to 224512
Sep 13 00:23:33.206722 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 13 00:23:33.209800 (sd-merge)[1176]: Merged extensions into '/usr'.
Sep 13 00:23:33.215171 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:23:33.215186 systemd[1]: Reloading...
Sep 13 00:23:33.384456 zram_generator::config[1207]: No configuration found.
Sep 13 00:23:33.502297 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:23:33.610018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:23:33.670202 systemd[1]: Reloading finished in 454 ms.
Sep 13 00:23:33.696059 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:23:33.697167 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:23:33.700396 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:23:33.711839 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:23:33.716727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:23:33.731823 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:23:33.752693 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:23:33.752726 systemd[1]: Reloading...
Sep 13 00:23:33.842703 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 13 00:23:33.842723 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 13 00:23:33.858034 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:23:33.860648 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:23:33.865680 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:23:33.866098 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Sep 13 00:23:33.866205 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Sep 13 00:23:33.880258 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:23:33.880278 systemd-tmpfiles[1250]: Skipping /boot
Sep 13 00:23:33.920562 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:23:33.920578 systemd-tmpfiles[1250]: Skipping /boot
Sep 13 00:23:33.961483 zram_generator::config[1282]: No configuration found.
Sep 13 00:23:34.127717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:23:34.183902 systemd[1]: Reloading finished in 430 ms.
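The (sd-merge) lines above show systemd-sysext attaching the extension images (hence the preceding loopN capacity changes) and overlaying their /usr trees onto the host. A minimal sketch of the discovery half of that, assuming only the /etc/extensions directory the earlier Ignition "files" stage populated (the real merge is done by systemd-sysext itself, which also searches other extension directories):

```python
# Illustrative only: enumerate sysext images the way this host would see them.
# systemd-sysext loop-mounts each *.raw and merges its /usr via an overlay.
from pathlib import Path

def list_extensions(root: str = "/etc/extensions") -> list[str]:
    return sorted(p.stem for p in Path(root).glob("*.raw"))

print(list_extensions())  # e.g. ['kubernetes'] from the symlink written earlier
```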
Sep 13 00:23:34.210153 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:23:34.217153 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:23:34.217931 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:23:34.243837 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:23:34.248679 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:23:34.251796 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:23:34.258711 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:23:34.262205 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:23:34.266745 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:23:34.279415 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:34.279658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:23:34.289852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:23:34.293185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:23:34.302848 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:23:34.303422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:34.303573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:34.307050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:34.307240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:23:34.307406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:34.308101 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:34.311897 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:34.312137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:23:34.320309 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:23:34.321350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:34.321530 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:34.322227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:23:34.322916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:23:34.329572 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:23:34.338736 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:23:34.342561 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:23:34.343226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:23:34.343387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:23:34.344840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:23:34.366842 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:23:34.381986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:23:34.382265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:23:34.383270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:23:34.391290 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:23:34.399752 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:23:34.400861 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:23:34.401064 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:23:34.414891 augenrules[1361]: No rules
Sep 13 00:23:34.418552 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:23:34.421532 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:23:34.425717 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:23:34.428315 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Sep 13 00:23:34.444813 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:23:34.452178 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:23:34.459620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:23:34.469781 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:23:34.548621 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:23:34.549761 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:23:34.654352 systemd-resolved[1330]: Positive Trust Anchors:
Sep 13 00:23:34.656869 systemd-networkd[1378]: lo: Link UP
Sep 13 00:23:34.657371 systemd-networkd[1378]: lo: Gained carrier
Sep 13 00:23:34.658836 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:23:34.659065 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:23:34.661380 systemd-networkd[1378]: Enumeration completed
Sep 13 00:23:34.661782 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:23:34.668779 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:23:34.671216 systemd-resolved[1330]: Using system hostname 'ci-4081.3.5-n-9b8e9ee716'.
Sep 13 00:23:34.674217 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:23:34.675071 systemd[1]: Reached target network.target - Network.
Sep 13 00:23:34.676675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:23:34.687472 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383)
Sep 13 00:23:34.687976 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:23:34.739656 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 13 00:23:34.740223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:34.740485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:23:34.748150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:23:34.756765 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:23:34.759706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:23:34.761637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:34.761728 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:23:34.761755 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:34.791177 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 13 00:23:34.795097 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 13 00:23:34.797054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:23:34.797521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:23:34.799304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:23:34.799795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:23:34.812720 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
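The positive trust anchor printed by systemd-resolved above is the standard DNSSEC root anchor. A DS record has four fields (key tag, algorithm, digest type, digest), which a short check makes explicit; everything here comes from the record as logged:

```python
# Break the logged root trust anchor into its DS record fields.
record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

owner, _cls, rtype, key_tag, algorithm, digest_type, digest = record.split()
assert (rtype, key_tag) == ("DS", "20326")  # 20326 is the root KSK-2017 key tag
print(f"algorithm {algorithm} (8 = RSA/SHA-256), digest type {digest_type} (2 = SHA-256)")
```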
Sep 13 00:23:34.820178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:23:34.824119 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:23:34.825411 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:23:34.838736 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:23:34.839382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:23:34.876538 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:23:34.899713 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:23:34.906912 systemd-networkd[1378]: eth1: Configuring with /run/systemd/network/10-12:8c:57:95:68:2a.network.
Sep 13 00:23:34.909198 systemd-networkd[1378]: eth1: Link UP
Sep 13 00:23:34.909212 systemd-networkd[1378]: eth1: Gained carrier
Sep 13 00:23:34.918288 systemd-networkd[1378]: eth0: Configuring with /run/systemd/network/10-66:d0:f9:ac:99:8c.network.
Sep 13 00:23:34.921683 systemd-networkd[1378]: eth0: Link UP
Sep 13 00:23:34.921697 systemd-networkd[1378]: eth0: Gained carrier
Sep 13 00:23:34.924547 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:23:34.926712 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection.
Sep 13 00:23:34.928563 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection.
Sep 13 00:23:34.938460 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 13 00:23:35.000690 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 00:23:35.039480 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:23:35.054677 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 13 00:23:35.054790 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 13 00:23:35.059653 kernel: Console: switching to colour dummy device 80x25
Sep 13 00:23:35.062550 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 13 00:23:35.062652 kernel: [drm] features: -context_init
Sep 13 00:23:35.063729 kernel: [drm] number of scanouts: 1
Sep 13 00:23:35.064052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:35.066472 kernel: [drm] number of cap sets: 0
Sep 13 00:23:35.069609 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 13 00:23:35.079474 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 13 00:23:35.081763 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:23:35.093781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:23:35.094448 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 13 00:23:35.094519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:35.108745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:23:35.122354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:23:35.122690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:35.132808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
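The two "Configuring with /run/systemd/network/10-<mac>.network" lines show networkd picking up one generated unit per NIC, named after the interface MAC address. A sketch of that naming pattern and a plausible unit body, assuming standard systemd.network keys (MatchMACAddress via [Match] MACAddress=, DHCP=); the real generated contents on this droplet are not shown in the log:

```python
# Illustrative only: construct the per-NIC unit path seen in the log and a
# minimal plausible body; not the actual generated configuration.
def network_unit(mac: str) -> tuple[str, str]:
    path = f"/run/systemd/network/10-{mac}.network"
    body = f"[Match]\nMACAddress={mac}\n\n[Network]\nDHCP=yes\n"
    return path, body

print(network_unit("12:8c:57:95:68:2a")[0])  # matches the eth1 unit above
```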
Sep 13 00:23:35.259295 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:23:35.291019 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:23:35.298845 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:23:35.299552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:35.323400 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:23:35.357597 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:23:35.358944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:35.359078 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:23:35.359299 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:23:35.359428 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:23:35.359816 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:23:35.359984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:23:35.360069 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:23:35.360144 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:23:35.360171 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:23:35.360223 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:23:35.362797 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:23:35.364891 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:23:35.372313 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:23:35.380886 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:23:35.384833 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:23:35.387069 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:23:35.388978 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:23:35.390624 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:23:35.391396 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:23:35.391457 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:23:35.398770 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:23:35.412025 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:23:35.418758 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:23:35.425765 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:23:35.442889 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:23:35.445126 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:23:35.453173 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:23:35.456120 jq[1441]: false Sep 13 00:23:35.462720 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:23:35.490657 coreos-metadata[1439]: Sep 13 00:23:35.486 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:23:35.472834 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:23:35.477693 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:23:35.494750 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:23:35.497109 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:23:35.497746 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:23:35.502108 coreos-metadata[1439]: Sep 13 00:23:35.499 INFO Fetch successful Sep 13 00:23:35.500698 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:23:35.505682 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:23:35.510393 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:23:35.523887 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:23:35.524544 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:23:35.526749 dbus-daemon[1440]: [system] SELinux support is enabled Sep 13 00:23:35.526966 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:23:35.537194 jq[1451]: true Sep 13 00:23:35.565492 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:23:35.565546 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:23:35.569545 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:23:35.569662 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 13 00:23:35.569691 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:23:35.589263 jq[1457]: true Sep 13 00:23:35.607875 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:23:35.608122 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
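[Editor's note] coreos-metadata fetched DigitalOcean's link-local metadata endpoint on the first attempt. An equivalent manual check from the droplet itself (a sketch; the jq filter is illustrative, and jq is present per the jq[1441] entries above):

  # Same endpoint the agent hits; link-local, reachable only from the droplet
  curl -s http://169.254.169.254/metadata/v1.json | jq '.hostname, .public_keys'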
Sep 13 00:23:35.623463 extend-filesystems[1444]: Found loop4 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found loop5 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found loop6 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found loop7 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found vda Sep 13 00:23:35.623463 extend-filesystems[1444]: Found vda1 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found vda2 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found vda3 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found usr Sep 13 00:23:35.623463 extend-filesystems[1444]: Found vda4 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found vda6 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found vda7 Sep 13 00:23:35.623463 extend-filesystems[1444]: Found vda9 Sep 13 00:23:35.623463 extend-filesystems[1444]: Checking size of /dev/vda9 Sep 13 00:23:35.650021 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:23:35.736071 update_engine[1450]: I20250913 00:23:35.670255 1450 main.cc:92] Flatcar Update Engine starting Sep 13 00:23:35.736071 update_engine[1450]: I20250913 00:23:35.696698 1450 update_check_scheduler.cc:74] Next update check in 3m45s Sep 13 00:23:35.740658 tar[1454]: linux-amd64/LICENSE Sep 13 00:23:35.740658 tar[1454]: linux-amd64/helm Sep 13 00:23:35.696624 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:23:35.758651 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 13 00:23:35.758737 extend-filesystems[1444]: Resized partition /dev/vda9 Sep 13 00:23:35.709954 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:23:35.763934 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:23:35.726905 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:23:35.729685 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:23:35.752563 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:23:35.758056 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:23:35.830507 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375) Sep 13 00:23:35.875474 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 13 00:23:35.900476 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:23:35.900476 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 13 00:23:35.900476 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 13 00:23:35.910549 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Sep 13 00:23:35.910549 extend-filesystems[1444]: Found vdb Sep 13 00:23:35.920869 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:23:35.921271 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:23:35.932369 systemd-logind[1449]: New seat seat0. Sep 13 00:23:35.934870 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:23:35.934898 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:23:35.935308 systemd[1]: Started systemd-logind.service - User Login Management. 
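[Editor's note] extend-filesystems grew /dev/vda9 online from 553472 to 15121403 4 KiB blocks, i.e. roughly 2.1 GiB to 57.7 GiB, without unmounting /. A manual equivalent, assuming the underlying partition has already been enlarged:

  # Online ext4 grow to fill the (already enlarged) partition
  resize2fs /dev/vda9
  # Verify the new size
  df -h /
  lsblk /dev/vda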
Sep 13 00:23:35.936620 bash[1502]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:23:35.939689 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:23:35.953863 systemd[1]: Starting sshkeys.service... Sep 13 00:23:36.012338 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:23:36.031099 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:23:36.039821 systemd-networkd[1378]: eth1: Gained IPv6LL Sep 13 00:23:36.040481 systemd-networkd[1378]: eth0: Gained IPv6LL Sep 13 00:23:36.041177 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 13 00:23:36.052812 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:23:36.058300 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:23:36.072196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:36.089319 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:23:36.246837 coreos-metadata[1507]: Sep 13 00:23:36.245 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:23:36.273207 coreos-metadata[1507]: Sep 13 00:23:36.265 INFO Fetch successful Sep 13 00:23:36.277579 unknown[1507]: wrote ssh authorized keys file for user: core Sep 13 00:23:36.297052 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:23:36.334071 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:23:36.341002 update-ssh-keys[1530]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:23:36.342519 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:23:36.345267 systemd[1]: Finished sshkeys.service. Sep 13 00:23:36.564050 containerd[1465]: time="2025-09-13T00:23:36.563741363Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:23:36.577465 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:23:36.675323 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:23:36.677849 containerd[1465]: time="2025-09-13T00:23:36.677796617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:23:36.689901 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:23:36.696470 containerd[1465]: time="2025-09-13T00:23:36.696395485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:23:36.696664 containerd[1465]: time="2025-09-13T00:23:36.696645238Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:23:36.696740 containerd[1465]: time="2025-09-13T00:23:36.696729563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:23:36.698776 containerd[1465]: time="2025-09-13T00:23:36.698735120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 13 00:23:36.700386 containerd[1465]: time="2025-09-13T00:23:36.700351372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:23:36.700631 containerd[1465]: time="2025-09-13T00:23:36.700610388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:23:36.701914 containerd[1465]: time="2025-09-13T00:23:36.701880451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:23:36.702389 containerd[1465]: time="2025-09-13T00:23:36.702364284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:23:36.703566 containerd[1465]: time="2025-09-13T00:23:36.703543700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:23:36.703654 containerd[1465]: time="2025-09-13T00:23:36.703638882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:23:36.703714 containerd[1465]: time="2025-09-13T00:23:36.703704236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:23:36.703976 containerd[1465]: time="2025-09-13T00:23:36.703957901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:23:36.704395 containerd[1465]: time="2025-09-13T00:23:36.704352759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:23:36.704765 containerd[1465]: time="2025-09-13T00:23:36.704743756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:23:36.704845 containerd[1465]: time="2025-09-13T00:23:36.704832207Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:23:36.706668 containerd[1465]: time="2025-09-13T00:23:36.706640920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:23:36.706829 containerd[1465]: time="2025-09-13T00:23:36.706813434Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:23:36.717771 containerd[1465]: time="2025-09-13T00:23:36.717713505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:23:36.718055 containerd[1465]: time="2025-09-13T00:23:36.718023324Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:23:36.718228 containerd[1465]: time="2025-09-13T00:23:36.718207203Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:23:36.718318 containerd[1465]: time="2025-09-13T00:23:36.718300438Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 13 00:23:36.718445 containerd[1465]: time="2025-09-13T00:23:36.718413956Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:23:36.718817 containerd[1465]: time="2025-09-13T00:23:36.718788929Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:23:36.720141 containerd[1465]: time="2025-09-13T00:23:36.720103236Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:23:36.720478 containerd[1465]: time="2025-09-13T00:23:36.720456679Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:23:36.720552 containerd[1465]: time="2025-09-13T00:23:36.720540782Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:23:36.721055 containerd[1465]: time="2025-09-13T00:23:36.721032359Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:23:36.721185 containerd[1465]: time="2025-09-13T00:23:36.721169763Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:23:36.721268 containerd[1465]: time="2025-09-13T00:23:36.721255112Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:23:36.721319 containerd[1465]: time="2025-09-13T00:23:36.721309687Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:23:36.721367 containerd[1465]: time="2025-09-13T00:23:36.721358002Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:23:36.721413 containerd[1465]: time="2025-09-13T00:23:36.721404045Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:23:36.721479 containerd[1465]: time="2025-09-13T00:23:36.721469063Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:23:36.721698 containerd[1465]: time="2025-09-13T00:23:36.721683231Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:23:36.721776 containerd[1465]: time="2025-09-13T00:23:36.721764558Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:23:36.721852 containerd[1465]: time="2025-09-13T00:23:36.721823791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.721902 containerd[1465]: time="2025-09-13T00:23:36.721893300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.721956 containerd[1465]: time="2025-09-13T00:23:36.721946832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.722003 containerd[1465]: time="2025-09-13T00:23:36.721994970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.722055 containerd[1465]: time="2025-09-13T00:23:36.722045502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 13 00:23:36.722106 containerd[1465]: time="2025-09-13T00:23:36.722097178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.722321 containerd[1465]: time="2025-09-13T00:23:36.722152288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.722402 containerd[1465]: time="2025-09-13T00:23:36.722390596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.722475 containerd[1465]: time="2025-09-13T00:23:36.722456217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.722783 containerd[1465]: time="2025-09-13T00:23:36.722765773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.722855 containerd[1465]: time="2025-09-13T00:23:36.722841291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724373939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724403750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724443731Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724483270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724502845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724515262Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724576475Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724597770Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724610197Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724622403Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724632228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724643801Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724655416Z" level=info msg="NRI interface is disabled by configuration." 
Sep 13 00:23:36.725455 containerd[1465]: time="2025-09-13T00:23:36.724665947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:23:36.725813 containerd[1465]: time="2025-09-13T00:23:36.725010722Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:23:36.725813 containerd[1465]: time="2025-09-13T00:23:36.725115741Z" level=info msg="Connect containerd service" Sep 13 00:23:36.725813 containerd[1465]: time="2025-09-13T00:23:36.725159202Z" level=info msg="using legacy CRI server" Sep 13 00:23:36.725813 containerd[1465]: time="2025-09-13T00:23:36.725166836Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:23:36.725813 containerd[1465]: time="2025-09-13T00:23:36.725262145Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.728330804Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.728876250Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.728941804Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.728985290Z" level=info msg="Start subscribing containerd event" Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.729070108Z" level=info msg="Start recovering state" Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.729150836Z" level=info msg="Start event monitor" Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.729173278Z" level=info msg="Start snapshots syncer" Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.729188093Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:23:36.729454 containerd[1465]: time="2025-09-13T00:23:36.729197781Z" level=info msg="Start streaming server" Sep 13 00:23:36.729979 containerd[1465]: time="2025-09-13T00:23:36.729947137Z" level=info msg="containerd successfully booted in 0.175777s" Sep 13 00:23:36.730370 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:23:36.733718 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:23:36.735754 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:23:36.752265 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:23:36.805460 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:23:36.815952 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:23:36.827061 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:23:36.829217 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:23:36.980147 tar[1454]: linux-amd64/README.md Sep 13 00:23:36.998669 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:23:37.744233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:23:37.747112 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:23:37.751057 systemd[1]: Startup finished in 1.013s (kernel) + 6.112s (initrd) + 5.979s (userspace) = 13.105s. Sep 13 00:23:37.756704 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:23:38.421924 kubelet[1562]: E0913 00:23:38.421825 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:23:38.424569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:23:38.424735 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:23:38.425300 systemd[1]: kubelet.service: Consumed 1.375s CPU time. Sep 13 00:23:39.278124 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:23:39.286845 systemd[1]: Started sshd@0-143.198.134.88:22-139.178.68.195:50824.service - OpenSSH per-connection server daemon (139.178.68.195:50824). 
Sep 13 00:23:39.351409 sshd[1574]: Accepted publickey for core from 139.178.68.195 port 50824 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:23:39.353410 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:39.363169 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:23:39.378104 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:23:39.384610 systemd-logind[1449]: New session 1 of user core. Sep 13 00:23:39.398566 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:23:39.417965 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:23:39.421723 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:23:39.549377 systemd[1578]: Queued start job for default target default.target. Sep 13 00:23:39.555655 systemd[1578]: Created slice app.slice - User Application Slice. Sep 13 00:23:39.555693 systemd[1578]: Reached target paths.target - Paths. Sep 13 00:23:39.555708 systemd[1578]: Reached target timers.target - Timers. Sep 13 00:23:39.557205 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:23:39.577721 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:23:39.577861 systemd[1578]: Reached target sockets.target - Sockets. Sep 13 00:23:39.577881 systemd[1578]: Reached target basic.target - Basic System. Sep 13 00:23:39.577943 systemd[1578]: Reached target default.target - Main User Target. Sep 13 00:23:39.577986 systemd[1578]: Startup finished in 147ms. Sep 13 00:23:39.578719 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:23:39.586711 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:23:39.660069 systemd[1]: Started sshd@1-143.198.134.88:22-139.178.68.195:50834.service - OpenSSH per-connection server daemon (139.178.68.195:50834). Sep 13 00:23:39.700268 sshd[1589]: Accepted publickey for core from 139.178.68.195 port 50834 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:23:39.701942 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:39.707557 systemd-logind[1449]: New session 2 of user core. Sep 13 00:23:39.715671 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:23:39.780225 sshd[1589]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:39.792499 systemd[1]: sshd@1-143.198.134.88:22-139.178.68.195:50834.service: Deactivated successfully. Sep 13 00:23:39.794866 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:23:39.796600 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:23:39.802925 systemd[1]: Started sshd@2-143.198.134.88:22-139.178.68.195:50836.service - OpenSSH per-connection server daemon (139.178.68.195:50836). Sep 13 00:23:39.805018 systemd-logind[1449]: Removed session 2. Sep 13 00:23:39.841180 sshd[1596]: Accepted publickey for core from 139.178.68.195 port 50836 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:23:39.843646 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:39.849605 systemd-logind[1449]: New session 3 of user core. Sep 13 00:23:39.857731 systemd[1]: Started session-3.scope - Session 3 of User core. 
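[Editor's note] Each SSH login above creates a session scope (session-1.scope, session-2.scope, ...), and the first login for uid 500 also starts a per-user service manager, user@500.service. To inspect the same structure (sketch):

  loginctl list-sessions
  loginctl user-status core
  systemctl status user@500.service --no-pager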
Sep 13 00:23:39.914597 sshd[1596]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:39.930901 systemd[1]: sshd@2-143.198.134.88:22-139.178.68.195:50836.service: Deactivated successfully. Sep 13 00:23:39.933135 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:23:39.935647 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:23:39.939910 systemd[1]: Started sshd@3-143.198.134.88:22-139.178.68.195:50848.service - OpenSSH per-connection server daemon (139.178.68.195:50848). Sep 13 00:23:39.941949 systemd-logind[1449]: Removed session 3. Sep 13 00:23:39.982093 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 50848 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:23:39.983851 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:39.989265 systemd-logind[1449]: New session 4 of user core. Sep 13 00:23:39.999086 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:23:40.064214 sshd[1603]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:40.081735 systemd[1]: sshd@3-143.198.134.88:22-139.178.68.195:50848.service: Deactivated successfully. Sep 13 00:23:40.083849 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:23:40.085645 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:23:40.097028 systemd[1]: Started sshd@4-143.198.134.88:22-139.178.68.195:57240.service - OpenSSH per-connection server daemon (139.178.68.195:57240). Sep 13 00:23:40.099048 systemd-logind[1449]: Removed session 4. Sep 13 00:23:40.136084 sshd[1610]: Accepted publickey for core from 139.178.68.195 port 57240 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:23:40.138207 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:40.143186 systemd-logind[1449]: New session 5 of user core. Sep 13 00:23:40.150691 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:23:40.221085 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:23:40.221423 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:40.235681 sudo[1613]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:40.240030 sshd[1610]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:40.251836 systemd[1]: sshd@4-143.198.134.88:22-139.178.68.195:57240.service: Deactivated successfully. Sep 13 00:23:40.255148 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:23:40.258669 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:23:40.265946 systemd[1]: Started sshd@5-143.198.134.88:22-139.178.68.195:57244.service - OpenSSH per-connection server daemon (139.178.68.195:57244). Sep 13 00:23:40.267812 systemd-logind[1449]: Removed session 5. Sep 13 00:23:40.307066 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 57244 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:23:40.308211 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:40.313551 systemd-logind[1449]: New session 6 of user core. Sep 13 00:23:40.316638 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 00:23:40.377972 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:23:40.378317 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:40.382748 sudo[1622]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:40.390003 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:23:40.390394 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:40.409218 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:23:40.411131 auditctl[1625]: No rules Sep 13 00:23:40.413541 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:23:40.413960 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:23:40.421269 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:23:40.463985 augenrules[1643]: No rules Sep 13 00:23:40.465528 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:23:40.467613 sudo[1621]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:40.471603 sshd[1618]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:40.492589 systemd[1]: sshd@5-143.198.134.88:22-139.178.68.195:57244.service: Deactivated successfully. Sep 13 00:23:40.494576 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:23:40.496605 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:23:40.502610 systemd[1]: Started sshd@6-143.198.134.88:22-139.178.68.195:57260.service - OpenSSH per-connection server daemon (139.178.68.195:57260). Sep 13 00:23:40.504687 systemd-logind[1449]: Removed session 6. Sep 13 00:23:40.541358 sshd[1651]: Accepted publickey for core from 139.178.68.195 port 57260 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:23:40.543058 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:40.549510 systemd-logind[1449]: New session 7 of user core. Sep 13 00:23:40.559715 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:23:40.619575 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:23:40.619950 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:41.050683 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:23:41.061291 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:23:41.534137 dockerd[1671]: time="2025-09-13T00:23:41.534047548Z" level=info msg="Starting up" Sep 13 00:23:41.692806 dockerd[1671]: time="2025-09-13T00:23:41.692745589Z" level=info msg="Loading containers: start." Sep 13 00:23:41.824619 kernel: Initializing XFRM netlink socket Sep 13 00:23:41.854752 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 13 00:23:41.855514 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 13 00:23:41.872412 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. 
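[Editor's note] The sudo commands above deleted the shipped audit rule files and restarted audit-rules, leaving an empty rule set (auditctl and augenrules both report "No rules"). The same reset done by hand, destructive and sketched here only for illustration:

  # Flush all loaded audit rules, then reload from /etc/audit/rules.d
  auditctl -D
  systemctl restart audit-rules.service
  auditctl -l   # prints "No rules" when the set is empty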
Sep 13 00:23:41.918222 systemd-networkd[1378]: docker0: Link UP Sep 13 00:23:41.919130 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Sep 13 00:23:41.937827 dockerd[1671]: time="2025-09-13T00:23:41.937774545Z" level=info msg="Loading containers: done." Sep 13 00:23:41.958495 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3563257805-merged.mount: Deactivated successfully. Sep 13 00:23:41.959456 dockerd[1671]: time="2025-09-13T00:23:41.959187270Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:23:41.959456 dockerd[1671]: time="2025-09-13T00:23:41.959369693Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:23:41.960752 dockerd[1671]: time="2025-09-13T00:23:41.960640344Z" level=info msg="Daemon has completed initialization" Sep 13 00:23:42.002358 dockerd[1671]: time="2025-09-13T00:23:42.002258142Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:23:42.002859 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:23:42.894073 containerd[1465]: time="2025-09-13T00:23:42.893524752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:23:43.738001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341327836.mount: Deactivated successfully. Sep 13 00:23:44.869069 containerd[1465]: time="2025-09-13T00:23:44.868991705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:44.870222 containerd[1465]: time="2025-09-13T00:23:44.870141888Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 13 00:23:44.871506 containerd[1465]: time="2025-09-13T00:23:44.870598568Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:44.874559 containerd[1465]: time="2025-09-13T00:23:44.874501545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:44.876483 containerd[1465]: time="2025-09-13T00:23:44.876299140Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.982714859s" Sep 13 00:23:44.876483 containerd[1465]: time="2025-09-13T00:23:44.876368913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:23:44.877578 containerd[1465]: time="2025-09-13T00:23:44.877540852Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:23:46.632389 containerd[1465]: time="2025-09-13T00:23:46.631645569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:46.633705 containerd[1465]: time="2025-09-13T00:23:46.633584693Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 13 00:23:46.635277 containerd[1465]: time="2025-09-13T00:23:46.635231953Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:46.640100 containerd[1465]: time="2025-09-13T00:23:46.640045770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:46.641024 containerd[1465]: time="2025-09-13T00:23:46.640981588Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.763395664s" Sep 13 00:23:46.641151 containerd[1465]: time="2025-09-13T00:23:46.641055065Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:23:46.642496 containerd[1465]: time="2025-09-13T00:23:46.642041064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:23:48.039470 containerd[1465]: time="2025-09-13T00:23:48.038813780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:48.040046 containerd[1465]: time="2025-09-13T00:23:48.039996185Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 13 00:23:48.040881 containerd[1465]: time="2025-09-13T00:23:48.040256996Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:48.043996 containerd[1465]: time="2025-09-13T00:23:48.043957953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:48.045287 containerd[1465]: time="2025-09-13T00:23:48.045241665Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.403167831s" Sep 13 00:23:48.045287 containerd[1465]: time="2025-09-13T00:23:48.045284932Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:23:48.046646 containerd[1465]: time="2025-09-13T00:23:48.046605313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:23:48.621678 systemd[1]: kubelet.service: Scheduled 
restart job, restart counter is at 1. Sep 13 00:23:48.630655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:48.895660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:23:48.898634 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:23:48.971831 kubelet[1895]: E0913 00:23:48.971778 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:23:48.977475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:23:48.977664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:23:49.282950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614869779.mount: Deactivated successfully. Sep 13 00:23:49.780130 containerd[1465]: time="2025-09-13T00:23:49.780075969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:49.782054 containerd[1465]: time="2025-09-13T00:23:49.781993080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 13 00:23:49.782708 containerd[1465]: time="2025-09-13T00:23:49.782671731Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:49.785283 containerd[1465]: time="2025-09-13T00:23:49.785235401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:49.786159 containerd[1465]: time="2025-09-13T00:23:49.786112889Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.739476738s" Sep 13 00:23:49.786353 containerd[1465]: time="2025-09-13T00:23:49.786327743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:23:49.787464 containerd[1465]: time="2025-09-13T00:23:49.787418515Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:23:49.788990 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Sep 13 00:23:50.564519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546135458.mount: Deactivated successfully. 
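[Editor's note] The PullImage/ImageCreate entries in this stretch are CRI-level pulls going through containerd, not docker. An equivalent manual pull, assuming crictl is installed and pointed at the socket shown in the containerd config above:

  # Pull one of the control-plane images over CRI
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
    pull registry.k8s.io/kube-apiserver:v1.32.9
  crictl images | grep kube-apiserver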
Sep 13 00:23:51.409750 containerd[1465]: time="2025-09-13T00:23:51.409695796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:51.411154 containerd[1465]: time="2025-09-13T00:23:51.410823339Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:23:51.411955 containerd[1465]: time="2025-09-13T00:23:51.411909649Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:51.417846 containerd[1465]: time="2025-09-13T00:23:51.417783855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:51.418986 containerd[1465]: time="2025-09-13T00:23:51.418930975Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.631284153s" Sep 13 00:23:51.418986 containerd[1465]: time="2025-09-13T00:23:51.418979838Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:23:51.420461 containerd[1465]: time="2025-09-13T00:23:51.420420850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:23:52.081876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671696762.mount: Deactivated successfully. 
Sep 13 00:23:52.084578 containerd[1465]: time="2025-09-13T00:23:52.084294630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:52.084916 containerd[1465]: time="2025-09-13T00:23:52.084871758Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:23:52.086368 containerd[1465]: time="2025-09-13T00:23:52.085087913Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:52.087457 containerd[1465]: time="2025-09-13T00:23:52.087177410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:52.088258 containerd[1465]: time="2025-09-13T00:23:52.088226658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 667.66299ms" Sep 13 00:23:52.088258 containerd[1465]: time="2025-09-13T00:23:52.088258963Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:23:52.089173 containerd[1465]: time="2025-09-13T00:23:52.089111426Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:23:52.871644 systemd-resolved[1330]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Sep 13 00:23:52.886994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926134151.mount: Deactivated successfully. 
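[Editor's note] systemd-resolved fell back from UDP+EDNS0 to plain UDP for the 67.207.67.x resolvers after EDNS0 probes failed; pulls continue to work, just without the larger EDNS payloads. Per-link DNS state can be checked with (sketch):

  resolvectl status
  resolvectl query registry.k8s.io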
Sep 13 00:23:54.608904 containerd[1465]: time="2025-09-13T00:23:54.608826505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:54.610736 containerd[1465]: time="2025-09-13T00:23:54.610668227Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 13 00:23:54.611832 containerd[1465]: time="2025-09-13T00:23:54.611783443Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:54.616468 containerd[1465]: time="2025-09-13T00:23:54.615006662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:54.617510 containerd[1465]: time="2025-09-13T00:23:54.617457845Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.52828792s" Sep 13 00:23:54.617714 containerd[1465]: time="2025-09-13T00:23:54.617690367Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:23:57.389558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:23:57.397866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:57.436658 systemd[1]: Reloading requested from client PID 2043 ('systemctl') (unit session-7.scope)... Sep 13 00:23:57.436683 systemd[1]: Reloading... Sep 13 00:23:57.583459 zram_generator::config[2085]: No configuration found. Sep 13 00:23:57.701736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:23:57.785095 systemd[1]: Reloading finished in 347 ms. Sep 13 00:23:57.834170 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:23:57.834276 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:23:57.834582 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:23:57.840900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:57.993920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:23:58.003288 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:23:58.058287 kubelet[2135]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:23:58.059041 kubelet[2135]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
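[Editor's note] The Flag ... has been deprecated warnings here (and the --volume-plugin-dir one that follows) say these options should move into the KubeletConfiguration file instead of the unit's command line. An illustrative config-file form, values assumed rather than taken from this host, noting that kubeadm normally manages /var/lib/kubelet/config.yaml itself:

  # Equivalent config-file form of the deprecated flags (illustrative values)
  cat >>/var/lib/kubelet/config.yaml <<'EOF'
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /var/lib/kubelet/volumeplugins
  EOF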
Sep 13 00:23:58.059041 kubelet[2135]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:23:58.059041 kubelet[2135]: I0913 00:23:58.058968 2135 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:23:58.435388 kubelet[2135]: I0913 00:23:58.435313 2135 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:23:58.435388 kubelet[2135]: I0913 00:23:58.435379 2135 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:23:58.435867 kubelet[2135]: I0913 00:23:58.435836 2135 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:23:58.465340 kubelet[2135]: I0913 00:23:58.464768 2135 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:23:58.467193 kubelet[2135]: E0913 00:23:58.467147 2135 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.134.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:58.479205 kubelet[2135]: E0913 00:23:58.479152 2135 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:23:58.479205 kubelet[2135]: I0913 00:23:58.479205 2135 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:23:58.484227 kubelet[2135]: I0913 00:23:58.483764 2135 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:23:58.487820 kubelet[2135]: I0913 00:23:58.487714 2135 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:23:58.488420 kubelet[2135]: I0913 00:23:58.488098 2135 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-9b8e9ee716","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:23:58.489091 kubelet[2135]: I0913 00:23:58.488681 2135 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:23:58.489091 kubelet[2135]: I0913 00:23:58.488703 2135 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:23:58.489091 kubelet[2135]: I0913 00:23:58.488863 2135 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:23:58.492568 kubelet[2135]: I0913 00:23:58.492529 2135 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:23:58.492732 kubelet[2135]: I0913 00:23:58.492722 2135 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:23:58.492823 kubelet[2135]: I0913 00:23:58.492814 2135 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:23:58.492881 kubelet[2135]: I0913 00:23:58.492873 2135 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:23:58.503838 kubelet[2135]: W0913 00:23:58.503107 2135 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.134.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-9b8e9ee716&limit=500&resourceVersion=0": dial tcp 143.198.134.88:6443: connect: connection refused Sep 13 00:23:58.503838 kubelet[2135]: E0913 00:23:58.503223 2135 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.134.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-9b8e9ee716&limit=500&resourceVersion=0\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:58.504249 
kubelet[2135]: W0913 00:23:58.503833 2135 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.134.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.134.88:6443: connect: connection refused Sep 13 00:23:58.504249 kubelet[2135]: E0913 00:23:58.503896 2135 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.134.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:58.506589 kubelet[2135]: I0913 00:23:58.506538 2135 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:23:58.511268 kubelet[2135]: I0913 00:23:58.510914 2135 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:23:58.511739 kubelet[2135]: W0913 00:23:58.511707 2135 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:23:58.512783 kubelet[2135]: I0913 00:23:58.512755 2135 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:23:58.512882 kubelet[2135]: I0913 00:23:58.512806 2135 server.go:1287] "Started kubelet" Sep 13 00:23:58.513464 kubelet[2135]: I0913 00:23:58.513068 2135 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:23:58.513464 kubelet[2135]: I0913 00:23:58.513155 2135 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:23:58.513667 kubelet[2135]: I0913 00:23:58.513646 2135 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:23:58.514360 kubelet[2135]: I0913 00:23:58.514342 2135 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:23:58.517367 kubelet[2135]: I0913 00:23:58.517344 2135 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:23:58.521755 kubelet[2135]: E0913 00:23:58.518731 2135 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.134.88:6443/api/v1/namespaces/default/events\": dial tcp 143.198.134.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-9b8e9ee716.1864afca6c1302e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-9b8e9ee716,UID:ci-4081.3.5-n-9b8e9ee716,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-9b8e9ee716,},FirstTimestamp:2025-09-13 00:23:58.512775905 +0000 UTC m=+0.504004724,LastTimestamp:2025-09-13 00:23:58.512775905 +0000 UTC m=+0.504004724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-9b8e9ee716,}" Sep 13 00:23:58.522460 kubelet[2135]: I0913 00:23:58.522079 2135 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:23:58.524411 kubelet[2135]: E0913 00:23:58.523957 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4081.3.5-n-9b8e9ee716\" not found" Sep 13 00:23:58.524411 kubelet[2135]: I0913 00:23:58.524002 2135 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:23:58.524411 kubelet[2135]: I0913 00:23:58.524233 2135 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:23:58.524411 kubelet[2135]: I0913 00:23:58.524290 2135 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:23:58.525463 kubelet[2135]: W0913 00:23:58.524772 2135 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.134.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.134.88:6443: connect: connection refused Sep 13 00:23:58.525463 kubelet[2135]: E0913 00:23:58.524833 2135 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.134.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:58.525463 kubelet[2135]: E0913 00:23:58.525067 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.134.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-9b8e9ee716?timeout=10s\": dial tcp 143.198.134.88:6443: connect: connection refused" interval="200ms" Sep 13 00:23:58.528721 kubelet[2135]: I0913 00:23:58.528677 2135 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:23:58.529178 kubelet[2135]: I0913 00:23:58.529155 2135 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:23:58.531549 kubelet[2135]: E0913 00:23:58.531504 2135 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:23:58.532369 kubelet[2135]: I0913 00:23:58.532351 2135 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:23:58.556726 kubelet[2135]: I0913 00:23:58.556626 2135 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:23:58.559867 kubelet[2135]: I0913 00:23:58.559827 2135 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:23:58.560169 kubelet[2135]: I0913 00:23:58.560149 2135 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:23:58.560305 kubelet[2135]: I0913 00:23:58.560289 2135 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:23:58.560389 kubelet[2135]: I0913 00:23:58.560379 2135 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:23:58.560585 kubelet[2135]: E0913 00:23:58.560541 2135 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:23:58.567546 kubelet[2135]: W0913 00:23:58.566982 2135 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.134.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.134.88:6443: connect: connection refused Sep 13 00:23:58.567546 kubelet[2135]: E0913 00:23:58.567143 2135 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.134.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:58.570464 kubelet[2135]: I0913 00:23:58.570418 2135 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:23:58.570464 kubelet[2135]: I0913 00:23:58.570449 2135 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:23:58.570464 kubelet[2135]: I0913 00:23:58.570467 2135 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:23:58.572342 kubelet[2135]: I0913 00:23:58.572293 2135 policy_none.go:49] "None policy: Start" Sep 13 00:23:58.572342 kubelet[2135]: I0913 00:23:58.572337 2135 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:23:58.572575 kubelet[2135]: I0913 00:23:58.572358 2135 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:23:58.579209 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:23:58.593068 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:23:58.597236 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:23:58.610269 kubelet[2135]: I0913 00:23:58.609614 2135 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:23:58.610269 kubelet[2135]: I0913 00:23:58.609953 2135 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:23:58.610269 kubelet[2135]: I0913 00:23:58.609975 2135 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:23:58.610612 kubelet[2135]: I0913 00:23:58.610295 2135 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:23:58.613141 kubelet[2135]: E0913 00:23:58.613113 2135 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:23:58.613514 kubelet[2135]: E0913 00:23:58.613348 2135 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-9b8e9ee716\" not found" Sep 13 00:23:58.670187 systemd[1]: Created slice kubepods-burstable-pod183f4ed0315d4889c6428fde441ec5fb.slice - libcontainer container kubepods-burstable-pod183f4ed0315d4889c6428fde441ec5fb.slice. 
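With CgroupDriver "systemd" and CgroupVersion 2 in the nodeConfig above, the kubepods slices systemd just created sit under the unified hierarchy. A sketch that lists them, assuming the default /sys/fs/cgroup mount point:

// Sketch only: enumerate the QoS slices under kubepods.slice. Burstable and
// BestEffort pods get child slices (as in the "Created slice" lines above);
// Guaranteed pods land directly in kubepods.slice.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/sys/fs/cgroup/kubepods.slice")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if e.IsDir() && strings.HasSuffix(e.Name(), ".slice") {
			fmt.Println(e.Name()) // e.g. kubepods-burstable.slice, kubepods-besteffort.slice
		}
	}
}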
Sep 13 00:23:58.678393 kubelet[2135]: E0913 00:23:58.678339 2135 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.682354 systemd[1]: Created slice kubepods-burstable-pod699d746037ca31db9a67c638e21ca7c7.slice - libcontainer container kubepods-burstable-pod699d746037ca31db9a67c638e21ca7c7.slice. Sep 13 00:23:58.692170 kubelet[2135]: E0913 00:23:58.692003 2135 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.696336 systemd[1]: Created slice kubepods-burstable-pod08acef6c8bb86972a9f7965fb363233a.slice - libcontainer container kubepods-burstable-pod08acef6c8bb86972a9f7965fb363233a.slice. Sep 13 00:23:58.698710 kubelet[2135]: E0913 00:23:58.698593 2135 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.712734 kubelet[2135]: I0913 00:23:58.712110 2135 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.712734 kubelet[2135]: E0913 00:23:58.712664 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.134.88:6443/api/v1/nodes\": dial tcp 143.198.134.88:6443: connect: connection refused" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.725207 kubelet[2135]: I0913 00:23:58.725150 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/183f4ed0315d4889c6428fde441ec5fb-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-9b8e9ee716\" (UID: \"183f4ed0315d4889c6428fde441ec5fb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.725207 kubelet[2135]: I0913 00:23:58.725198 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/183f4ed0315d4889c6428fde441ec5fb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-9b8e9ee716\" (UID: \"183f4ed0315d4889c6428fde441ec5fb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.725207 kubelet[2135]: I0913 00:23:58.725219 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/183f4ed0315d4889c6428fde441ec5fb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-9b8e9ee716\" (UID: \"183f4ed0315d4889c6428fde441ec5fb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.725774 kubelet[2135]: E0913 00:23:58.725722 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.134.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-9b8e9ee716?timeout=10s\": dial tcp 143.198.134.88:6443: connect: connection refused" interval="400ms" Sep 13 00:23:58.826163 kubelet[2135]: I0913 00:23:58.826067 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.826163 kubelet[2135]: I0913 00:23:58.826137 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.826508 kubelet[2135]: I0913 00:23:58.826377 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.826508 kubelet[2135]: I0913 00:23:58.826413 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.826508 kubelet[2135]: I0913 00:23:58.826459 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.826508 kubelet[2135]: I0913 00:23:58.826477 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08acef6c8bb86972a9f7965fb363233a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-9b8e9ee716\" (UID: \"08acef6c8bb86972a9f7965fb363233a\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.914627 kubelet[2135]: I0913 00:23:58.914590 2135 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.915270 kubelet[2135]: E0913 00:23:58.915234 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.134.88:6443/api/v1/nodes\": dial tcp 143.198.134.88:6443: connect: connection refused" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:58.979544 kubelet[2135]: E0913 00:23:58.979370 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:23:58.982943 containerd[1465]: time="2025-09-13T00:23:58.982894643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-9b8e9ee716,Uid:183f4ed0315d4889c6428fde441ec5fb,Namespace:kube-system,Attempt:0,}" Sep 13 00:23:58.984842 systemd-resolved[1330]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Sep 13 00:23:58.994785 kubelet[2135]: E0913 00:23:58.994745 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:23:58.999531 containerd[1465]: time="2025-09-13T00:23:58.999297752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-9b8e9ee716,Uid:699d746037ca31db9a67c638e21ca7c7,Namespace:kube-system,Attempt:0,}" Sep 13 00:23:58.999762 kubelet[2135]: E0913 00:23:58.999737 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:23:59.000990 containerd[1465]: time="2025-09-13T00:23:59.000733477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-9b8e9ee716,Uid:08acef6c8bb86972a9f7965fb363233a,Namespace:kube-system,Attempt:0,}" Sep 13 00:23:59.126858 kubelet[2135]: E0913 00:23:59.126786 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.134.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-9b8e9ee716?timeout=10s\": dial tcp 143.198.134.88:6443: connect: connection refused" interval="800ms" Sep 13 00:23:59.312786 kubelet[2135]: W0913 00:23:59.312592 2135 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.134.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-9b8e9ee716&limit=500&resourceVersion=0": dial tcp 143.198.134.88:6443: connect: connection refused Sep 13 00:23:59.312786 kubelet[2135]: E0913 00:23:59.312668 2135 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.134.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-9b8e9ee716&limit=500&resourceVersion=0\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:59.316934 kubelet[2135]: I0913 00:23:59.316893 2135 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:59.317353 kubelet[2135]: E0913 00:23:59.317322 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.134.88:6443/api/v1/nodes\": dial tcp 143.198.134.88:6443: connect: connection refused" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:23:59.487774 kubelet[2135]: W0913 00:23:59.487700 2135 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.134.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.134.88:6443: connect: connection refused Sep 13 00:23:59.487774 kubelet[2135]: E0913 00:23:59.487776 2135 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.134.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:59.632674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693997389.mount: Deactivated successfully. 
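The three RunPodSandbox calls come from static pod manifests under /etc/kubernetes/manifests (see the earlier "Adding static pod path" line). A sketch that writes a manifest of that shape; the pod name is illustrative, not taken from this log, and the pause image is the one pulled later in this stretch:

// Sketch only: drop a minimal static pod manifest where the kubelet watches.
package main

import (
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo-static-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.8",
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		log.Fatal(err)
	}
	// The kubelet picks the file up and asks the CRI for a sandbox,
	// producing RunPodSandbox lines like the ones above.
	if err := os.WriteFile("/etc/kubernetes/manifests/demo-static-pod.yaml", out, 0o600); err != nil {
		log.Fatal(err)
	}
}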
Sep 13 00:23:59.639459 containerd[1465]: time="2025-09-13T00:23:59.638026992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:23:59.639459 containerd[1465]: time="2025-09-13T00:23:59.639048486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:23:59.640318 containerd[1465]: time="2025-09-13T00:23:59.640277371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:23:59.640450 containerd[1465]: time="2025-09-13T00:23:59.640412182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:23:59.640603 containerd[1465]: time="2025-09-13T00:23:59.640584512Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:23:59.640675 containerd[1465]: time="2025-09-13T00:23:59.640625925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:23:59.644491 containerd[1465]: time="2025-09-13T00:23:59.644417179Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:23:59.645648 containerd[1465]: time="2025-09-13T00:23:59.645607023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 662.614307ms" Sep 13 00:23:59.647586 containerd[1465]: time="2025-09-13T00:23:59.647553516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 646.737817ms" Sep 13 00:23:59.649021 containerd[1465]: time="2025-09-13T00:23:59.648986201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:23:59.650457 containerd[1465]: time="2025-09-13T00:23:59.650403779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.011723ms" Sep 13 00:23:59.782347 kubelet[2135]: W0913 00:23:59.782232 2135 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.134.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.134.88:6443: connect: connection refused Sep 13 00:23:59.782347 
kubelet[2135]: E0913 00:23:59.782300 2135 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.134.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:59.801864 containerd[1465]: time="2025-09-13T00:23:59.801650752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:23:59.803697 containerd[1465]: time="2025-09-13T00:23:59.803495704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:23:59.803697 containerd[1465]: time="2025-09-13T00:23:59.803534837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:59.803697 containerd[1465]: time="2025-09-13T00:23:59.803625850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:59.804643 containerd[1465]: time="2025-09-13T00:23:59.804104347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:23:59.804643 containerd[1465]: time="2025-09-13T00:23:59.804163428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:23:59.804643 containerd[1465]: time="2025-09-13T00:23:59.804178887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:59.804643 containerd[1465]: time="2025-09-13T00:23:59.804249871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:59.817635 containerd[1465]: time="2025-09-13T00:23:59.817068332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:23:59.817635 containerd[1465]: time="2025-09-13T00:23:59.817131148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:23:59.817635 containerd[1465]: time="2025-09-13T00:23:59.817143560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:59.823513 containerd[1465]: time="2025-09-13T00:23:59.822295084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:23:59.839656 systemd[1]: Started cri-containerd-64aa942026c94c72f1892f467ddf33bcfbdcc854bf7f7fdbd90be572e733d88c.scope - libcontainer container 64aa942026c94c72f1892f467ddf33bcfbdcc854bf7f7fdbd90be572e733d88c. Sep 13 00:23:59.845567 systemd[1]: Started cri-containerd-078bdac60543ec76d8940f92da8fdf56c28fb0d6310b62182ee4cfaaf3289d6c.scope - libcontainer container 078bdac60543ec76d8940f92da8fdf56c28fb0d6310b62182ee4cfaaf3289d6c. Sep 13 00:23:59.868720 systemd[1]: Started cri-containerd-270b0d92fd5abca1a08bf36965da884e3c5a879f612360110b6a879f6009c7d6.scope - libcontainer container 270b0d92fd5abca1a08bf36965da884e3c5a879f612360110b6a879f6009c7d6. 
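Each cri-containerd-<id>.scope unit started above corresponds to a containerd container running under the runc v2 shim. A sketch listing those containers with containerd's Go client, assuming the default socket and the CRI plugin's standard "k8s.io" namespace:

// Sketch only: enumerate CRI-managed containers directly from containerd.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // IDs like 64aa942026c9..., matching the scope names above
	}
}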
Sep 13 00:23:59.928857 kubelet[2135]: E0913 00:23:59.928364 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.134.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-9b8e9ee716?timeout=10s\": dial tcp 143.198.134.88:6443: connect: connection refused" interval="1.6s" Sep 13 00:23:59.949552 containerd[1465]: time="2025-09-13T00:23:59.949507625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-9b8e9ee716,Uid:699d746037ca31db9a67c638e21ca7c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"64aa942026c94c72f1892f467ddf33bcfbdcc854bf7f7fdbd90be572e733d88c\"" Sep 13 00:23:59.953346 containerd[1465]: time="2025-09-13T00:23:59.952970738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-9b8e9ee716,Uid:183f4ed0315d4889c6428fde441ec5fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"270b0d92fd5abca1a08bf36965da884e3c5a879f612360110b6a879f6009c7d6\"" Sep 13 00:23:59.953346 containerd[1465]: time="2025-09-13T00:23:59.953078284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-9b8e9ee716,Uid:08acef6c8bb86972a9f7965fb363233a,Namespace:kube-system,Attempt:0,} returns sandbox id \"078bdac60543ec76d8940f92da8fdf56c28fb0d6310b62182ee4cfaaf3289d6c\"" Sep 13 00:23:59.956902 kubelet[2135]: E0913 00:23:59.956589 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:23:59.956902 kubelet[2135]: E0913 00:23:59.956648 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:23:59.956902 kubelet[2135]: E0913 00:23:59.956601 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:23:59.961488 containerd[1465]: time="2025-09-13T00:23:59.960292971Z" level=info msg="CreateContainer within sandbox \"078bdac60543ec76d8940f92da8fdf56c28fb0d6310b62182ee4cfaaf3289d6c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:23:59.963262 containerd[1465]: time="2025-09-13T00:23:59.963051061Z" level=info msg="CreateContainer within sandbox \"270b0d92fd5abca1a08bf36965da884e3c5a879f612360110b6a879f6009c7d6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:23:59.964811 containerd[1465]: time="2025-09-13T00:23:59.964519307Z" level=info msg="CreateContainer within sandbox \"64aa942026c94c72f1892f467ddf33bcfbdcc854bf7f7fdbd90be572e733d88c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:23:59.982705 containerd[1465]: time="2025-09-13T00:23:59.982645334Z" level=info msg="CreateContainer within sandbox \"64aa942026c94c72f1892f467ddf33bcfbdcc854bf7f7fdbd90be572e733d88c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93ba18f3dd2933cc8a0a8c7c943206af1468f66e6e7f16de7c5fec1dea768fc6\"" Sep 13 00:23:59.984316 kubelet[2135]: W0913 00:23:59.984257 2135 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.134.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
143.198.134.88:6443: connect: connection refused Sep 13 00:23:59.984485 containerd[1465]: time="2025-09-13T00:23:59.983368437Z" level=info msg="CreateContainer within sandbox \"078bdac60543ec76d8940f92da8fdf56c28fb0d6310b62182ee4cfaaf3289d6c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"67545aed7127d4ad64573d7a662f887069a5c88c7432febe7013d882700581a9\"" Sep 13 00:23:59.985499 containerd[1465]: time="2025-09-13T00:23:59.983707323Z" level=info msg="StartContainer for \"93ba18f3dd2933cc8a0a8c7c943206af1468f66e6e7f16de7c5fec1dea768fc6\"" Sep 13 00:23:59.985750 kubelet[2135]: E0913 00:23:59.985414 2135 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.134.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.134.88:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:23:59.987099 containerd[1465]: time="2025-09-13T00:23:59.987073840Z" level=info msg="StartContainer for \"67545aed7127d4ad64573d7a662f887069a5c88c7432febe7013d882700581a9\"" Sep 13 00:23:59.991147 containerd[1465]: time="2025-09-13T00:23:59.990769624Z" level=info msg="CreateContainer within sandbox \"270b0d92fd5abca1a08bf36965da884e3c5a879f612360110b6a879f6009c7d6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"48f98e50cac7b7b3a074dc296ae100fa88dcb46f6401820d272f75445c6802c5\"" Sep 13 00:23:59.991378 containerd[1465]: time="2025-09-13T00:23:59.991280462Z" level=info msg="StartContainer for \"48f98e50cac7b7b3a074dc296ae100fa88dcb46f6401820d272f75445c6802c5\"" Sep 13 00:24:00.048050 systemd[1]: Started cri-containerd-67545aed7127d4ad64573d7a662f887069a5c88c7432febe7013d882700581a9.scope - libcontainer container 67545aed7127d4ad64573d7a662f887069a5c88c7432febe7013d882700581a9. Sep 13 00:24:00.067392 systemd[1]: Started cri-containerd-48f98e50cac7b7b3a074dc296ae100fa88dcb46f6401820d272f75445c6802c5.scope - libcontainer container 48f98e50cac7b7b3a074dc296ae100fa88dcb46f6401820d272f75445c6802c5. Sep 13 00:24:00.070351 systemd[1]: Started cri-containerd-93ba18f3dd2933cc8a0a8c7c943206af1468f66e6e7f16de7c5fec1dea768fc6.scope - libcontainer container 93ba18f3dd2933cc8a0a8c7c943206af1468f66e6e7f16de7c5fec1dea768fc6. 
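The lease controller's "will retry" interval doubles across the failures in this stretch (200ms, 400ms, 800ms, 1.6s). A toy sketch of that doubling pattern; the cap and the simulated operation are assumptions for illustration, not the kubelet's actual implementation:

// Sketch only: retry with a doubling interval, as the lease lines suggest.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithDoubling(op func() error, start, max time.Duration) {
	interval := start
	for {
		if err := op(); err == nil {
			return
		}
		fmt.Printf("failed, will retry, interval=%v\n", interval)
		time.Sleep(interval)
		if interval < max {
			interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as in the log
		}
	}
}

func main() {
	attempts := 0
	retryWithDoubling(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 7*time.Second)
}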
Sep 13 00:24:00.120558 kubelet[2135]: I0913 00:24:00.120382 2135 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:00.121340 kubelet[2135]: E0913 00:24:00.121284 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.134.88:6443/api/v1/nodes\": dial tcp 143.198.134.88:6443: connect: connection refused" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:00.167800 containerd[1465]: time="2025-09-13T00:24:00.167631701Z" level=info msg="StartContainer for \"48f98e50cac7b7b3a074dc296ae100fa88dcb46f6401820d272f75445c6802c5\" returns successfully" Sep 13 00:24:00.190970 containerd[1465]: time="2025-09-13T00:24:00.189663793Z" level=info msg="StartContainer for \"93ba18f3dd2933cc8a0a8c7c943206af1468f66e6e7f16de7c5fec1dea768fc6\" returns successfully" Sep 13 00:24:00.190970 containerd[1465]: time="2025-09-13T00:24:00.189664207Z" level=info msg="StartContainer for \"67545aed7127d4ad64573d7a662f887069a5c88c7432febe7013d882700581a9\" returns successfully" Sep 13 00:24:00.578759 kubelet[2135]: E0913 00:24:00.576954 2135 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:00.578759 kubelet[2135]: E0913 00:24:00.577099 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:00.584601 kubelet[2135]: E0913 00:24:00.582773 2135 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:00.584601 kubelet[2135]: E0913 00:24:00.582925 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:00.594464 kubelet[2135]: E0913 00:24:00.590280 2135 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:00.594464 kubelet[2135]: E0913 00:24:00.590416 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:01.592476 kubelet[2135]: E0913 00:24:01.591886 2135 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:01.592476 kubelet[2135]: E0913 00:24:01.592166 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:01.593709 kubelet[2135]: E0913 00:24:01.593422 2135 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:01.593709 kubelet[2135]: E0913 00:24:01.593615 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:01.723697 
kubelet[2135]: I0913 00:24:01.722804 2135 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.143533 kubelet[2135]: E0913 00:24:02.143453 2135 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-9b8e9ee716\" not found" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.324015 kubelet[2135]: I0913 00:24:02.323397 2135 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.325042 kubelet[2135]: I0913 00:24:02.325010 2135 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.345133 kubelet[2135]: E0913 00:24:02.344293 2135 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.345133 kubelet[2135]: I0913 00:24:02.344338 2135 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.347189 kubelet[2135]: E0913 00:24:02.346912 2135 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-9b8e9ee716\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.347189 kubelet[2135]: I0913 00:24:02.346950 2135 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.352111 kubelet[2135]: E0913 00:24:02.352052 2135 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-9b8e9ee716\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.507565 kubelet[2135]: I0913 00:24:02.507061 2135 apiserver.go:52] "Watching apiserver" Sep 13 00:24:02.524658 kubelet[2135]: I0913 00:24:02.524608 2135 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:24:02.592122 kubelet[2135]: I0913 00:24:02.592088 2135 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.594905 kubelet[2135]: E0913 00:24:02.594856 2135 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-9b8e9ee716\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:02.595510 kubelet[2135]: E0913 00:24:02.595084 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:04.452284 systemd[1]: Reloading requested from client PID 2408 ('systemctl') (unit session-7.scope)... Sep 13 00:24:04.452304 systemd[1]: Reloading... Sep 13 00:24:04.560491 zram_generator::config[2447]: No configuration found. Sep 13 00:24:04.710152 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:24:04.809767 systemd[1]: Reloading finished in 356 ms. 
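The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical" errors are a bootstrap race: the built-in priority classes only appear once the API server finishes starting. A sketch that checks for the class with client-go, assuming an admin kubeconfig at /etc/kubernetes/admin.conf:

// Sketch only: look up the built-in node-critical PriorityClass.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(
		context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		// Until this exists, mirror pods for the static control-plane pods
		// are rejected exactly as logged above.
		log.Fatal(err)
	}
	fmt.Printf("%s value=%d\n", pc.Name, pc.Value)
}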
Sep 13 00:24:04.863969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:24:04.878274 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:24:04.878547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:24:04.886508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:24:05.100107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:24:05.110160 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:24:05.182555 kubelet[2498]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:24:05.182555 kubelet[2498]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:24:05.182555 kubelet[2498]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:24:05.183051 kubelet[2498]: I0913 00:24:05.182628 2498 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:24:05.192467 kubelet[2498]: I0913 00:24:05.192403 2498 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:24:05.192592 kubelet[2498]: I0913 00:24:05.192489 2498 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:24:05.192852 kubelet[2498]: I0913 00:24:05.192832 2498 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:24:05.195679 kubelet[2498]: I0913 00:24:05.195641 2498 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:24:05.202047 kubelet[2498]: I0913 00:24:05.201991 2498 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:24:05.205883 kubelet[2498]: E0913 00:24:05.205816 2498 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:24:05.205883 kubelet[2498]: I0913 00:24:05.205846 2498 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:24:05.213500 kubelet[2498]: I0913 00:24:05.213253 2498 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:24:05.213637 kubelet[2498]: I0913 00:24:05.213552 2498 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:24:05.213763 kubelet[2498]: I0913 00:24:05.213590 2498 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-9b8e9ee716","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:24:05.213863 kubelet[2498]: I0913 00:24:05.213769 2498 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:24:05.213863 kubelet[2498]: I0913 00:24:05.213779 2498 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:24:05.213863 kubelet[2498]: I0913 00:24:05.213829 2498 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:24:05.213997 kubelet[2498]: I0913 00:24:05.213980 2498 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:24:05.214038 kubelet[2498]: I0913 00:24:05.214006 2498 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:24:05.214038 kubelet[2498]: I0913 00:24:05.214027 2498 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:24:05.214095 kubelet[2498]: I0913 00:24:05.214038 2498 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:24:05.217487 kubelet[2498]: I0913 00:24:05.216134 2498 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:24:05.217487 kubelet[2498]: I0913 00:24:05.216531 2498 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:24:05.217487 kubelet[2498]: I0913 00:24:05.217023 2498 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:24:05.217487 kubelet[2498]: I0913 00:24:05.217050 2498 server.go:1287] "Started kubelet" Sep 13 00:24:05.220810 kubelet[2498]: I0913 00:24:05.220781 2498 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:24:05.229260 kubelet[2498]: I0913 00:24:05.229213 2498 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:24:05.230552 kubelet[2498]: I0913 00:24:05.230529 2498 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:24:05.231821 kubelet[2498]: I0913 00:24:05.231701 2498 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:24:05.232082 kubelet[2498]: I0913 00:24:05.232069 2498 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:24:05.232400 kubelet[2498]: I0913 00:24:05.232382 2498 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:24:05.234321 kubelet[2498]: I0913 00:24:05.234302 2498 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:24:05.234680 kubelet[2498]: E0913 00:24:05.234660 2498 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-9b8e9ee716\" not found" Sep 13 00:24:05.236858 kubelet[2498]: I0913 00:24:05.236837 2498 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:24:05.237058 kubelet[2498]: I0913 00:24:05.237046 2498 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:24:05.239553 kubelet[2498]: I0913 00:24:05.239513 2498 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:24:05.241101 kubelet[2498]: I0913 00:24:05.241072 2498 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:24:05.241233 kubelet[2498]: I0913 00:24:05.241222 2498 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:24:05.241296 kubelet[2498]: I0913 00:24:05.241288 2498 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:24:05.241361 kubelet[2498]: I0913 00:24:05.241354 2498 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:24:05.241545 kubelet[2498]: E0913 00:24:05.241528 2498 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:24:05.243307 kubelet[2498]: I0913 00:24:05.243260 2498 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:24:05.243413 kubelet[2498]: I0913 00:24:05.243372 2498 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:24:05.253641 kubelet[2498]: I0913 00:24:05.253606 2498 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:24:05.311837 kubelet[2498]: I0913 00:24:05.311776 2498 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:24:05.311837 kubelet[2498]: I0913 00:24:05.311800 2498 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:24:05.311837 kubelet[2498]: I0913 00:24:05.311831 2498 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:24:05.312180 kubelet[2498]: I0913 00:24:05.312004 2498 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:24:05.312180 kubelet[2498]: I0913 00:24:05.312015 2498 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:24:05.312180 kubelet[2498]: I0913 00:24:05.312033 2498 policy_none.go:49] "None policy: Start" Sep 13 00:24:05.312180 kubelet[2498]: I0913 00:24:05.312044 2498 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:24:05.312180 kubelet[2498]: I0913 00:24:05.312053 2498 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:24:05.312180 kubelet[2498]: I0913 00:24:05.312156 2498 state_mem.go:75] "Updated machine memory state" Sep 13 00:24:05.317298 kubelet[2498]: I0913 00:24:05.317233 2498 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:24:05.317509 kubelet[2498]: I0913 00:24:05.317490 2498 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:24:05.317549 kubelet[2498]: I0913 00:24:05.317508 2498 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:24:05.318012 kubelet[2498]: I0913 00:24:05.317995 2498 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:24:05.322268 kubelet[2498]: E0913 00:24:05.322239 2498 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:24:05.343923 kubelet[2498]: I0913 00:24:05.343049 2498 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.343923 kubelet[2498]: I0913 00:24:05.343519 2498 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.344173 kubelet[2498]: I0913 00:24:05.344160 2498 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.352555 kubelet[2498]: W0913 00:24:05.352235 2498 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:24:05.355615 kubelet[2498]: W0913 00:24:05.355551 2498 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:24:05.355974 kubelet[2498]: W0913 00:24:05.355961 2498 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:24:05.419621 kubelet[2498]: I0913 00:24:05.419592 2498 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.435165 kubelet[2498]: I0913 00:24:05.434238 2498 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.435165 kubelet[2498]: I0913 00:24:05.434361 2498 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.538762 kubelet[2498]: I0913 00:24:05.538713 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.539101 kubelet[2498]: I0913 00:24:05.539080 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.539220 kubelet[2498]: I0913 00:24:05.539205 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/183f4ed0315d4889c6428fde441ec5fb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-9b8e9ee716\" (UID: \"183f4ed0315d4889c6428fde441ec5fb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.539313 kubelet[2498]: I0913 00:24:05.539301 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/183f4ed0315d4889c6428fde441ec5fb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-9b8e9ee716\" (UID: \"183f4ed0315d4889c6428fde441ec5fb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.539453 kubelet[2498]: I0913 00:24:05.539408 2498 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.539581 kubelet[2498]: I0913 00:24:05.539530 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.539581 kubelet[2498]: I0913 00:24:05.539557 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08acef6c8bb86972a9f7965fb363233a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-9b8e9ee716\" (UID: \"08acef6c8bb86972a9f7965fb363233a\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.539905 kubelet[2498]: I0913 00:24:05.539727 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/183f4ed0315d4889c6428fde441ec5fb-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-9b8e9ee716\" (UID: \"183f4ed0315d4889c6428fde441ec5fb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.539905 kubelet[2498]: I0913 00:24:05.539868 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/699d746037ca31db9a67c638e21ca7c7-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-9b8e9ee716\" (UID: \"699d746037ca31db9a67c638e21ca7c7\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:05.653926 kubelet[2498]: E0913 00:24:05.653869 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:05.656898 kubelet[2498]: E0913 00:24:05.656750 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:05.656898 kubelet[2498]: E0913 00:24:05.656854 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:06.224658 kubelet[2498]: I0913 00:24:06.224217 2498 apiserver.go:52] "Watching apiserver" Sep 13 00:24:06.237139 kubelet[2498]: I0913 00:24:06.237063 2498 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:24:06.289447 kubelet[2498]: I0913 00:24:06.288948 2498 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:06.289447 kubelet[2498]: E0913 00:24:06.289127 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:06.289707 kubelet[2498]: E0913 00:24:06.289683 2498 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:06.303296 kubelet[2498]: W0913 00:24:06.303255 2498 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:24:06.303493 kubelet[2498]: E0913 00:24:06.303325 2498 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-9b8e9ee716\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:06.303556 kubelet[2498]: E0913 00:24:06.303536 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:06.339459 kubelet[2498]: I0913 00:24:06.338703 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-9b8e9ee716" podStartSLOduration=1.338671893 podStartE2EDuration="1.338671893s" podCreationTimestamp="2025-09-13 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:06.323777647 +0000 UTC m=+1.205846808" watchObservedRunningTime="2025-09-13 00:24:06.338671893 +0000 UTC m=+1.220741051" Sep 13 00:24:06.351635 kubelet[2498]: I0913 00:24:06.351561 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-9b8e9ee716" podStartSLOduration=1.351539026 podStartE2EDuration="1.351539026s" podCreationTimestamp="2025-09-13 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:06.339811276 +0000 UTC m=+1.221880443" watchObservedRunningTime="2025-09-13 00:24:06.351539026 +0000 UTC m=+1.233608184" Sep 13 00:24:07.291252 kubelet[2498]: E0913 00:24:07.290880 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:07.291252 kubelet[2498]: E0913 00:24:07.291076 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:08.292637 kubelet[2498]: E0913 00:24:08.292531 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:10.380505 kubelet[2498]: I0913 00:24:10.380149 2498 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:24:10.381747 kubelet[2498]: I0913 00:24:10.380915 2498 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:24:10.381823 containerd[1465]: time="2025-09-13T00:24:10.380624512Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:24:11.372212 kubelet[2498]: I0913 00:24:11.372097 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-9b8e9ee716" podStartSLOduration=6.372076623 podStartE2EDuration="6.372076623s" podCreationTimestamp="2025-09-13 00:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:06.352601607 +0000 UTC m=+1.234670785" watchObservedRunningTime="2025-09-13 00:24:11.372076623 +0000 UTC m=+6.254145790"
Sep 13 00:24:11.387582 systemd[1]: Created slice kubepods-besteffort-podf85f1d54_baae_46bb_9dba_844a15ab368b.slice - libcontainer container kubepods-besteffort-podf85f1d54_baae_46bb_9dba_844a15ab368b.slice.
Sep 13 00:24:11.391655 kubelet[2498]: W0913 00:24:11.390282 2498 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85f1d54_baae_46bb_9dba_844a15ab368b.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85f1d54_baae_46bb_9dba_844a15ab368b.slice/cpuset.cpus.effective: no such device
Sep 13 00:24:11.478324 kubelet[2498]: I0913 00:24:11.478143 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f85f1d54-baae-46bb-9dba-844a15ab368b-xtables-lock\") pod \"kube-proxy-jxqhx\" (UID: \"f85f1d54-baae-46bb-9dba-844a15ab368b\") " pod="kube-system/kube-proxy-jxqhx"
Sep 13 00:24:11.478324 kubelet[2498]: I0913 00:24:11.478191 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpv7n\" (UniqueName: \"kubernetes.io/projected/f85f1d54-baae-46bb-9dba-844a15ab368b-kube-api-access-vpv7n\") pod \"kube-proxy-jxqhx\" (UID: \"f85f1d54-baae-46bb-9dba-844a15ab368b\") " pod="kube-system/kube-proxy-jxqhx"
Sep 13 00:24:11.478324 kubelet[2498]: I0913 00:24:11.478225 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f85f1d54-baae-46bb-9dba-844a15ab368b-lib-modules\") pod \"kube-proxy-jxqhx\" (UID: \"f85f1d54-baae-46bb-9dba-844a15ab368b\") " pod="kube-system/kube-proxy-jxqhx"
Sep 13 00:24:11.478324 kubelet[2498]: I0913 00:24:11.478244 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f85f1d54-baae-46bb-9dba-844a15ab368b-kube-proxy\") pod \"kube-proxy-jxqhx\" (UID: \"f85f1d54-baae-46bb-9dba-844a15ab368b\") " pod="kube-system/kube-proxy-jxqhx"
Sep 13 00:24:11.575459 systemd[1]: Created slice kubepods-besteffort-pod6c00857f_dfd5_475a_bd29_35e6fc4b7157.slice - libcontainer container kubepods-besteffort-pod6c00857f_dfd5_475a_bd29_35e6fc4b7157.slice.
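The "Created slice kubepods-besteffort-pod<uid>.slice" entries above follow the kubelet's systemd cgroup-driver naming: the pod's UID has its dashes replaced with underscores and the slice nests under the BestEffort QoS slice. A small sketch of that mapping (the helper name is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // besteffortSliceName maps a pod UID to the systemd slice name used for
    // BestEffort pods; systemd unit names cannot carry the UID's dashes, so
    // they become underscores.
    func besteffortSliceName(podUID string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        // UID of kube-proxy-jxqhx from the entries above.
        fmt.Println(besteffortSliceName("f85f1d54-baae-46bb-9dba-844a15ab368b"))
        // Prints: kubepods-besteffort-podf85f1d54_baae_46bb_9dba_844a15ab368b.slice
    }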
Sep 13 00:24:11.578700 kubelet[2498]: I0913 00:24:11.578660 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c00857f-dfd5-475a-bd29-35e6fc4b7157-var-lib-calico\") pod \"tigera-operator-755d956888-vfzr7\" (UID: \"6c00857f-dfd5-475a-bd29-35e6fc4b7157\") " pod="tigera-operator/tigera-operator-755d956888-vfzr7"
Sep 13 00:24:11.578831 kubelet[2498]: I0913 00:24:11.578728 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjbqx\" (UniqueName: \"kubernetes.io/projected/6c00857f-dfd5-475a-bd29-35e6fc4b7157-kube-api-access-gjbqx\") pod \"tigera-operator-755d956888-vfzr7\" (UID: \"6c00857f-dfd5-475a-bd29-35e6fc4b7157\") " pod="tigera-operator/tigera-operator-755d956888-vfzr7"
Sep 13 00:24:11.695846 kubelet[2498]: E0913 00:24:11.693853 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:11.695973 containerd[1465]: time="2025-09-13T00:24:11.694789350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxqhx,Uid:f85f1d54-baae-46bb-9dba-844a15ab368b,Namespace:kube-system,Attempt:0,}"
Sep 13 00:24:11.722473 containerd[1465]: time="2025-09-13T00:24:11.722246540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:24:11.722473 containerd[1465]: time="2025-09-13T00:24:11.722304947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:24:11.722473 containerd[1465]: time="2025-09-13T00:24:11.722315837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:24:11.722654 containerd[1465]: time="2025-09-13T00:24:11.722412064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:24:11.749794 systemd[1]: Started cri-containerd-217f4a0c3b1e01f0c22ec32e8fd613c9ddac7333d3bfa605f12e8e3403fffcfa.scope - libcontainer container 217f4a0c3b1e01f0c22ec32e8fd613c9ddac7333d3bfa605f12e8e3403fffcfa.
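The projected kube-api-access-vpv7n and kube-api-access-gjbqx volumes above carry each pod's service-account token. The five-character suffix appears to be drawn from Kubernetes' reduced random alphabet, which avoids look-alike characters; a sketch under that assumption (the alphabet and helper name are assumptions, not taken from this log):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // alphanums is the reduced alphabet assumed for generated name suffixes;
    // it is consistent with the suffixes seen above (vpv7n, gjbqx, rs6tt).
    const alphanums = "bcdfghjklmnpqrstvwxz2456789"

    // kubeAPIAccessName builds a hypothetical projected-volume name with a
    // five-character random suffix.
    func kubeAPIAccessName(r *rand.Rand) string {
        b := make([]byte, 5)
        for i := range b {
            b[i] = alphanums[r.Intn(len(alphanums))]
        }
        return "kube-api-access-" + string(b)
    }

    func main() {
        r := rand.New(rand.NewSource(1))
        fmt.Println(kubeAPIAccessName(r)) // e.g. kube-api-access-xxxxx
    }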
Sep 13 00:24:11.778998 containerd[1465]: time="2025-09-13T00:24:11.778933376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxqhx,Uid:f85f1d54-baae-46bb-9dba-844a15ab368b,Namespace:kube-system,Attempt:0,} returns sandbox id \"217f4a0c3b1e01f0c22ec32e8fd613c9ddac7333d3bfa605f12e8e3403fffcfa\""
Sep 13 00:24:11.780973 kubelet[2498]: E0913 00:24:11.780777 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:11.784926 containerd[1465]: time="2025-09-13T00:24:11.784671025Z" level=info msg="CreateContainer within sandbox \"217f4a0c3b1e01f0c22ec32e8fd613c9ddac7333d3bfa605f12e8e3403fffcfa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:24:11.796609 containerd[1465]: time="2025-09-13T00:24:11.796558290Z" level=info msg="CreateContainer within sandbox \"217f4a0c3b1e01f0c22ec32e8fd613c9ddac7333d3bfa605f12e8e3403fffcfa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5da62b3dcac92469828627c779dbc79f825374df7d7923fbda7101b7d80de310\""
Sep 13 00:24:11.799780 containerd[1465]: time="2025-09-13T00:24:11.799164989Z" level=info msg="StartContainer for \"5da62b3dcac92469828627c779dbc79f825374df7d7923fbda7101b7d80de310\""
Sep 13 00:24:11.831734 systemd[1]: Started cri-containerd-5da62b3dcac92469828627c779dbc79f825374df7d7923fbda7101b7d80de310.scope - libcontainer container 5da62b3dcac92469828627c779dbc79f825374df7d7923fbda7101b7d80de310.
Sep 13 00:24:11.863727 containerd[1465]: time="2025-09-13T00:24:11.863685718Z" level=info msg="StartContainer for \"5da62b3dcac92469828627c779dbc79f825374df7d7923fbda7101b7d80de310\" returns successfully"
Sep 13 00:24:11.883095 containerd[1465]: time="2025-09-13T00:24:11.883044051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-vfzr7,Uid:6c00857f-dfd5-475a-bd29-35e6fc4b7157,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:24:11.914577 containerd[1465]: time="2025-09-13T00:24:11.912154661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:24:11.914577 containerd[1465]: time="2025-09-13T00:24:11.912264677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:24:11.914577 containerd[1465]: time="2025-09-13T00:24:11.912286918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:24:11.914577 containerd[1465]: time="2025-09-13T00:24:11.912407045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:24:11.940881 systemd[1]: Started cri-containerd-c9ac3b009e34d3e0fb7c3a3f19d2c1ffd6290ae86bef10e20edd14b3fc04746e.scope - libcontainer container c9ac3b009e34d3e0fb7c3a3f19d2c1ffd6290ae86bef10e20edd14b3fc04746e.
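The containerd entries above trace the CRI sequence the kubelet drives for each pod: RunPodSandbox, CreateContainer within the returned sandbox, then StartContainer. A toy sketch of that ordering (the interface and IDs are invented for illustration, not the real CRI API):

    package main

    import "fmt"

    // runtimeService is a toy stand-in for the CRI calls visible in the log.
    type runtimeService interface {
        RunPodSandbox(pod string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    type fakeRuntime struct{ seq int }

    func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
        f.seq++
        return fmt.Sprintf("sandbox-%d", f.seq), nil
    }

    func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
        f.seq++
        return fmt.Sprintf("container-%d", f.seq), nil
    }

    func (f *fakeRuntime) StartContainer(id string) error { return nil }

    func main() {
        var rt runtimeService = &fakeRuntime{}
        sb, _ := rt.RunPodSandbox("kube-proxy-jxqhx")
        ctr, _ := rt.CreateContainer(sb, "kube-proxy")
        if err := rt.StartContainer(ctr); err == nil {
            fmt.Printf("StartContainer for %q returns successfully\n", ctr)
        }
    }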
Sep 13 00:24:12.007923 containerd[1465]: time="2025-09-13T00:24:12.007798747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-vfzr7,Uid:6c00857f-dfd5-475a-bd29-35e6fc4b7157,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c9ac3b009e34d3e0fb7c3a3f19d2c1ffd6290ae86bef10e20edd14b3fc04746e\""
Sep 13 00:24:12.010672 containerd[1465]: time="2025-09-13T00:24:12.010642352Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:24:12.307815 kubelet[2498]: E0913 00:24:12.307519 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:12.446943 systemd-timesyncd[1348]: Contacted time server 155.248.196.28:123 (2.flatcar.pool.ntp.org).
Sep 13 00:24:12.447049 systemd-timesyncd[1348]: Initial clock synchronization to Sat 2025-09-13 00:24:12.375246 UTC.
Sep 13 00:24:13.231198 kubelet[2498]: E0913 00:24:13.230818 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:13.263145 kubelet[2498]: I0913 00:24:13.263075 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxqhx" podStartSLOduration=2.263051795 podStartE2EDuration="2.263051795s" podCreationTimestamp="2025-09-13 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:12.324833707 +0000 UTC m=+7.206902874" watchObservedRunningTime="2025-09-13 00:24:13.263051795 +0000 UTC m=+8.145120963"
Sep 13 00:24:13.308888 kubelet[2498]: E0913 00:24:13.308824 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:14.171048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559134579.mount: Deactivated successfully.
Sep 13 00:24:15.276115 containerd[1465]: time="2025-09-13T00:24:15.276058280Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:15.277308 containerd[1465]: time="2025-09-13T00:24:15.277158401Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 00:24:15.278106 containerd[1465]: time="2025-09-13T00:24:15.277699683Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:15.280551 containerd[1465]: time="2025-09-13T00:24:15.280510823Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:15.282004 containerd[1465]: time="2025-09-13T00:24:15.281944960Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.270219175s"
Sep 13 00:24:15.282004 containerd[1465]: time="2025-09-13T00:24:15.282003832Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 00:24:15.286972 containerd[1465]: time="2025-09-13T00:24:15.286914750Z" level=info msg="CreateContainer within sandbox \"c9ac3b009e34d3e0fb7c3a3f19d2c1ffd6290ae86bef10e20edd14b3fc04746e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:24:15.305024 containerd[1465]: time="2025-09-13T00:24:15.304958162Z" level=info msg="CreateContainer within sandbox \"c9ac3b009e34d3e0fb7c3a3f19d2c1ffd6290ae86bef10e20edd14b3fc04746e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7f47cd510e6e5abc8b5e2cb7231d5495c4abe862232f6ed50a1c2040a640f0fd\""
Sep 13 00:24:15.306114 containerd[1465]: time="2025-09-13T00:24:15.305806868Z" level=info msg="StartContainer for \"7f47cd510e6e5abc8b5e2cb7231d5495c4abe862232f6ed50a1c2040a640f0fd\""
Sep 13 00:24:15.360711 systemd[1]: Started cri-containerd-7f47cd510e6e5abc8b5e2cb7231d5495c4abe862232f6ed50a1c2040a640f0fd.scope - libcontainer container 7f47cd510e6e5abc8b5e2cb7231d5495c4abe862232f6ed50a1c2040a640f0fd.
Sep 13 00:24:15.404356 containerd[1465]: time="2025-09-13T00:24:15.404172355Z" level=info msg="StartContainer for \"7f47cd510e6e5abc8b5e2cb7231d5495c4abe862232f6ed50a1c2040a640f0fd\" returns successfully"
Sep 13 00:24:15.628299 kubelet[2498]: E0913 00:24:15.627709 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:16.322350 kubelet[2498]: E0913 00:24:16.322241 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:17.807979 kubelet[2498]: E0913 00:24:17.806333 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:17.820023 kubelet[2498]: I0913 00:24:17.818985 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-vfzr7" podStartSLOduration=3.544794957 podStartE2EDuration="6.818966036s" podCreationTimestamp="2025-09-13 00:24:11 +0000 UTC" firstStartedPulling="2025-09-13 00:24:12.009413657 +0000 UTC m=+6.891482816" lastFinishedPulling="2025-09-13 00:24:15.283584738 +0000 UTC m=+10.165653895" observedRunningTime="2025-09-13 00:24:16.344079309 +0000 UTC m=+11.226148478" watchObservedRunningTime="2025-09-13 00:24:17.818966036 +0000 UTC m=+12.701035202"
Sep 13 00:24:18.325397 kubelet[2498]: E0913 00:24:18.325042 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:20.650567 update_engine[1450]: I20250913 00:24:20.650264 1450 update_attempter.cc:509] Updating boot flags...
Sep 13 00:24:20.683517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2874)
Sep 13 00:24:20.748463 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2878)
Sep 13 00:24:21.906551 sudo[1654]: pam_unix(sudo:session): session closed for user root
Sep 13 00:24:21.912029 sshd[1651]: pam_unix(sshd:session): session closed for user core
Sep 13 00:24:21.919160 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:24:21.919962 systemd[1]: sshd@6-143.198.134.88:22-139.178.68.195:57260.service: Deactivated successfully.
Sep 13 00:24:21.924765 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:24:21.924941 systemd[1]: session-7.scope: Consumed 4.942s CPU time, 141.8M memory peak, 0B memory swap peak.
Sep 13 00:24:21.927945 systemd-logind[1449]: Removed session 7.
Sep 13 00:24:26.141400 systemd[1]: Created slice kubepods-besteffort-pod92fd9a58_4809_4fe3_92ce_2bda061ffe0d.slice - libcontainer container kubepods-besteffort-pod92fd9a58_4809_4fe3_92ce_2bda061ffe0d.slice.
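In the tigera-operator startup entry above, podStartSLOduration (3.544794957) is shorter than podStartE2EDuration (6.818966036s) by almost exactly the image-pull window from firstStartedPulling to lastFinishedPulling: the tracker excludes pull time from the SLO figure. Recomputing from the logged timestamps (a sketch; it matches the logged values to within a couple of nanoseconds of rounding):

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches Go's default time.Time string form used in the log fields.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-13 00:24:11 +0000 UTC")
        observed := mustParse("2025-09-13 00:24:17.818966036 +0000 UTC") // watchObservedRunningTime
        pullStart := mustParse("2025-09-13 00:24:12.009413657 +0000 UTC")
        pullEnd := mustParse("2025-09-13 00:24:15.283584738 +0000 UTC")

        e2e := observed.Sub(created)        // 6.818966036s, the podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // ~3.544794955s, the podStartSLOduration
        fmt.Println("E2E:", e2e, "SLO:", slo)
    }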
Sep 13 00:24:26.265365 kubelet[2498]: I0913 00:24:26.265267 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f789\" (UniqueName: \"kubernetes.io/projected/92fd9a58-4809-4fe3-92ce-2bda061ffe0d-kube-api-access-9f789\") pod \"calico-typha-8d4987896-gvt69\" (UID: \"92fd9a58-4809-4fe3-92ce-2bda061ffe0d\") " pod="calico-system/calico-typha-8d4987896-gvt69"
Sep 13 00:24:26.265365 kubelet[2498]: I0913 00:24:26.265351 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92fd9a58-4809-4fe3-92ce-2bda061ffe0d-tigera-ca-bundle\") pod \"calico-typha-8d4987896-gvt69\" (UID: \"92fd9a58-4809-4fe3-92ce-2bda061ffe0d\") " pod="calico-system/calico-typha-8d4987896-gvt69"
Sep 13 00:24:26.265365 kubelet[2498]: I0913 00:24:26.265369 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/92fd9a58-4809-4fe3-92ce-2bda061ffe0d-typha-certs\") pod \"calico-typha-8d4987896-gvt69\" (UID: \"92fd9a58-4809-4fe3-92ce-2bda061ffe0d\") " pod="calico-system/calico-typha-8d4987896-gvt69"
Sep 13 00:24:26.414786 systemd[1]: Created slice kubepods-besteffort-podeeb0e0a9_98f6_4668_9e69_f4dd211731f9.slice - libcontainer container kubepods-besteffort-podeeb0e0a9_98f6_4668_9e69_f4dd211731f9.slice.
Sep 13 00:24:26.452138 kubelet[2498]: E0913 00:24:26.451954 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:26.460726 containerd[1465]: time="2025-09-13T00:24:26.460643416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d4987896-gvt69,Uid:92fd9a58-4809-4fe3-92ce-2bda061ffe0d,Namespace:calico-system,Attempt:0,}"
Sep 13 00:24:26.507984 containerd[1465]: time="2025-09-13T00:24:26.507847246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:24:26.507984 containerd[1465]: time="2025-09-13T00:24:26.507953221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:24:26.509072 containerd[1465]: time="2025-09-13T00:24:26.508861131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:24:26.509072 containerd[1465]: time="2025-09-13T00:24:26.509011043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:24:26.538696 systemd[1]: Started cri-containerd-0de1d10af96ee00c1b4f551aa6cce32d594d5321460d6a6f5aa13925dc24ce42.scope - libcontainer container 0de1d10af96ee00c1b4f551aa6cce32d594d5321460d6a6f5aa13925dc24ce42.
Sep 13 00:24:26.567696 kubelet[2498]: I0913 00:24:26.567650 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-cni-bin-dir\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569733 kubelet[2498]: I0913 00:24:26.569538 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-cni-log-dir\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569733 kubelet[2498]: I0913 00:24:26.569587 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-cni-net-dir\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569733 kubelet[2498]: I0913 00:24:26.569606 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-var-run-calico\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569733 kubelet[2498]: I0913 00:24:26.569643 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-lib-modules\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569733 kubelet[2498]: I0913 00:24:26.569663 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-policysync\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569988 kubelet[2498]: I0913 00:24:26.569722 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-tigera-ca-bundle\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569988 kubelet[2498]: I0913 00:24:26.569754 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs6tt\" (UniqueName: \"kubernetes.io/projected/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-kube-api-access-rs6tt\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569988 kubelet[2498]: I0913 00:24:26.569783 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-var-lib-calico\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569988 kubelet[2498]: I0913 00:24:26.569801 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-xtables-lock\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.569988 kubelet[2498]: I0913 00:24:26.569818 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-node-certs\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.570122 kubelet[2498]: I0913 00:24:26.569839 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eeb0e0a9-98f6-4668-9e69-f4dd211731f9-flexvol-driver-host\") pod \"calico-node-vbqd7\" (UID: \"eeb0e0a9-98f6-4668-9e69-f4dd211731f9\") " pod="calico-system/calico-node-vbqd7"
Sep 13 00:24:26.605573 containerd[1465]: time="2025-09-13T00:24:26.605528441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d4987896-gvt69,Uid:92fd9a58-4809-4fe3-92ce-2bda061ffe0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0de1d10af96ee00c1b4f551aa6cce32d594d5321460d6a6f5aa13925dc24ce42\""
Sep 13 00:24:26.611506 kubelet[2498]: E0913 00:24:26.611395 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:24:26.617661 containerd[1465]: time="2025-09-13T00:24:26.617624640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:24:26.656773 kubelet[2498]: E0913 00:24:26.656609 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm62f" podUID="7a1f681a-96b5-4792-936c-830bdc4fc67f"
Sep 13 00:24:26.677250 kubelet[2498]: E0913 00:24:26.677135 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.677527 kubelet[2498]: W0913 00:24:26.677400 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.679662 kubelet[2498]: E0913 00:24:26.679600 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.680668 kubelet[2498]: E0913 00:24:26.680053 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.680668 kubelet[2498]: W0913 00:24:26.680557 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.680820 kubelet[2498]: E0913 00:24:26.680771 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.683755 kubelet[2498]: E0913 00:24:26.683221 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.683755 kubelet[2498]: W0913 00:24:26.683246 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.687135 kubelet[2498]: E0913 00:24:26.686531 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.688582 kubelet[2498]: W0913 00:24:26.688530 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.689300 kubelet[2498]: E0913 00:24:26.686700 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.689300 kubelet[2498]: E0913 00:24:26.688853 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.690204 kubelet[2498]: E0913 00:24:26.689853 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.690204 kubelet[2498]: W0913 00:24:26.689871 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.690204 kubelet[2498]: E0913 00:24:26.689913 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.691049 kubelet[2498]: E0913 00:24:26.690828 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.691049 kubelet[2498]: W0913 00:24:26.690845 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.691049 kubelet[2498]: E0913 00:24:26.690886 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.691534 kubelet[2498]: E0913 00:24:26.691518 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.692505 kubelet[2498]: W0913 00:24:26.692481 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.692709 kubelet[2498]: E0913 00:24:26.692611 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.692902 kubelet[2498]: E0913 00:24:26.692873 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.692902 kubelet[2498]: W0913 00:24:26.692886 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.693094 kubelet[2498]: E0913 00:24:26.693043 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.693244 kubelet[2498]: E0913 00:24:26.693232 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.694453 kubelet[2498]: W0913 00:24:26.693322 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.694453 kubelet[2498]: E0913 00:24:26.693368 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.694893 kubelet[2498]: E0913 00:24:26.694789 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.694893 kubelet[2498]: W0913 00:24:26.694805 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.694893 kubelet[2498]: E0913 00:24:26.694840 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.695113 kubelet[2498]: E0913 00:24:26.695060 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.695113 kubelet[2498]: W0913 00:24:26.695070 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.695113 kubelet[2498]: E0913 00:24:26.695097 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.695456 kubelet[2498]: E0913 00:24:26.695390 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.695456 kubelet[2498]: W0913 00:24:26.695402 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.695456 kubelet[2498]: E0913 00:24:26.695444 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.696571 kubelet[2498]: E0913 00:24:26.696507 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.696571 kubelet[2498]: W0913 00:24:26.696521 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.696571 kubelet[2498]: E0913 00:24:26.696559 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.696999 kubelet[2498]: E0913 00:24:26.696911 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.696999 kubelet[2498]: W0913 00:24:26.696927 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.696999 kubelet[2498]: E0913 00:24:26.696961 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.698961 kubelet[2498]: E0913 00:24:26.698846 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.698961 kubelet[2498]: W0913 00:24:26.698875 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.698961 kubelet[2498]: E0913 00:24:26.698918 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.699385 kubelet[2498]: E0913 00:24:26.699255 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.699385 kubelet[2498]: W0913 00:24:26.699273 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.700502 kubelet[2498]: E0913 00:24:26.699546 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.700921 kubelet[2498]: E0913 00:24:26.700765 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.700921 kubelet[2498]: W0913 00:24:26.700784 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.700921 kubelet[2498]: E0913 00:24:26.700824 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.701179 kubelet[2498]: E0913 00:24:26.701167 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.701273 kubelet[2498]: W0913 00:24:26.701222 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.701308 kubelet[2498]: E0913 00:24:26.701271 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.702467 kubelet[2498]: E0913 00:24:26.701519 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.702467 kubelet[2498]: W0913 00:24:26.701530 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.702467 kubelet[2498]: E0913 00:24:26.701558 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.702872 kubelet[2498]: E0913 00:24:26.702811 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.702872 kubelet[2498]: W0913 00:24:26.702825 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.702872 kubelet[2498]: E0913 00:24:26.702858 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.703298 kubelet[2498]: E0913 00:24:26.703195 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.703298 kubelet[2498]: W0913 00:24:26.703206 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.703949 kubelet[2498]: E0913 00:24:26.703922 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.706570 kubelet[2498]: E0913 00:24:26.706526 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.706570 kubelet[2498]: W0913 00:24:26.706547 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.707199 kubelet[2498]: E0913 00:24:26.706866 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.715166 kubelet[2498]: E0913 00:24:26.715134 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.715499 kubelet[2498]: W0913 00:24:26.715466 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.715968 kubelet[2498]: E0913 00:24:26.715904 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.717459 kubelet[2498]: E0913 00:24:26.716362 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.717459 kubelet[2498]: W0913 00:24:26.716376 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.717622 kubelet[2498]: E0913 00:24:26.717602 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.717866 kubelet[2498]: E0913 00:24:26.717854 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.717970 kubelet[2498]: W0913 00:24:26.717916 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.718012 kubelet[2498]: E0913 00:24:26.717958 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.718303 kubelet[2498]: E0913 00:24:26.718267 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.718303 kubelet[2498]: W0913 00:24:26.718280 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.718527 kubelet[2498]: E0913 00:24:26.718515 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.718643 kubelet[2498]: E0913 00:24:26.718634 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.718770 kubelet[2498]: W0913 00:24:26.718681 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.718770 kubelet[2498]: E0913 00:24:26.718714 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.719503 kubelet[2498]: E0913 00:24:26.719487 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.719684 kubelet[2498]: W0913 00:24:26.719577 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.719684 kubelet[2498]: E0913 00:24:26.719594 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.720039 kubelet[2498]: E0913 00:24:26.719928 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.720109 kubelet[2498]: W0913 00:24:26.720097 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.721094 kubelet[2498]: E0913 00:24:26.720959 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.721276 kubelet[2498]: E0913 00:24:26.721265 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.721337 kubelet[2498]: W0913 00:24:26.721328 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.721453 kubelet[2498]: E0913 00:24:26.721379 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.722553 kubelet[2498]: E0913 00:24:26.722224 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.722553 kubelet[2498]: W0913 00:24:26.722237 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.722553 kubelet[2498]: E0913 00:24:26.722252 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.722912 kubelet[2498]: E0913 00:24:26.722821 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.722912 kubelet[2498]: W0913 00:24:26.722837 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.722912 kubelet[2498]: E0913 00:24:26.722854 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.724418 kubelet[2498]: E0913 00:24:26.723787 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.724418 kubelet[2498]: W0913 00:24:26.723804 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.724418 kubelet[2498]: E0913 00:24:26.723821 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.724823 kubelet[2498]: E0913 00:24:26.724731 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.724823 kubelet[2498]: W0913 00:24:26.724744 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.724823 kubelet[2498]: E0913 00:24:26.724756 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.725169 kubelet[2498]: E0913 00:24:26.725076 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.725169 kubelet[2498]: W0913 00:24:26.725087 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.725169 kubelet[2498]: E0913 00:24:26.725098 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.725317 kubelet[2498]: E0913 00:24:26.725308 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.725375 kubelet[2498]: W0913 00:24:26.725366 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.725421 kubelet[2498]: E0913 00:24:26.725412 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:24:26.726629 kubelet[2498]: E0913 00:24:26.726610 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:24:26.726828 kubelet[2498]: W0913 00:24:26.726706 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:24:26.726828 kubelet[2498]: E0913 00:24:26.726726 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 13 00:24:26.727067 kubelet[2498]: E0913 00:24:26.727022 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.727067 kubelet[2498]: W0913 00:24:26.727035 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.727067 kubelet[2498]: E0913 00:24:26.727047 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.727824 kubelet[2498]: E0913 00:24:26.727724 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.727824 kubelet[2498]: W0913 00:24:26.727740 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.727824 kubelet[2498]: E0913 00:24:26.727752 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.728484 kubelet[2498]: E0913 00:24:26.728371 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.728484 kubelet[2498]: W0913 00:24:26.728385 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.728484 kubelet[2498]: E0913 00:24:26.728397 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.730793 kubelet[2498]: E0913 00:24:26.730775 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.730991 kubelet[2498]: W0913 00:24:26.730870 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.730991 kubelet[2498]: E0913 00:24:26.730896 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.731798 kubelet[2498]: E0913 00:24:26.731686 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.731798 kubelet[2498]: W0913 00:24:26.731707 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.731798 kubelet[2498]: E0913 00:24:26.731719 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.743226 kubelet[2498]: E0913 00:24:26.743197 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.745589 kubelet[2498]: W0913 00:24:26.745490 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.745589 kubelet[2498]: E0913 00:24:26.745538 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.772869 kubelet[2498]: E0913 00:24:26.772659 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.772869 kubelet[2498]: W0913 00:24:26.772694 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.772869 kubelet[2498]: E0913 00:24:26.772727 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.772869 kubelet[2498]: I0913 00:24:26.772778 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7a1f681a-96b5-4792-936c-830bdc4fc67f-registration-dir\") pod \"csi-node-driver-gm62f\" (UID: \"7a1f681a-96b5-4792-936c-830bdc4fc67f\") " pod="calico-system/csi-node-driver-gm62f" Sep 13 00:24:26.775707 kubelet[2498]: E0913 00:24:26.775520 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.775707 kubelet[2498]: W0913 00:24:26.775554 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.775707 kubelet[2498]: E0913 00:24:26.775592 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.775707 kubelet[2498]: I0913 00:24:26.775631 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7a1f681a-96b5-4792-936c-830bdc4fc67f-varrun\") pod \"csi-node-driver-gm62f\" (UID: \"7a1f681a-96b5-4792-936c-830bdc4fc67f\") " pod="calico-system/csi-node-driver-gm62f" Sep 13 00:24:26.776063 kubelet[2498]: E0913 00:24:26.776040 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.776063 kubelet[2498]: W0913 00:24:26.776062 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.776330 kubelet[2498]: E0913 00:24:26.776090 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.776404 kubelet[2498]: E0913 00:24:26.776350 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.776404 kubelet[2498]: W0913 00:24:26.776362 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.776404 kubelet[2498]: E0913 00:24:26.776380 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.777640 kubelet[2498]: E0913 00:24:26.777593 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.777640 kubelet[2498]: W0913 00:24:26.777631 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.777640 kubelet[2498]: E0913 00:24:26.777649 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.778024 kubelet[2498]: I0913 00:24:26.777676 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8wl9\" (UniqueName: \"kubernetes.io/projected/7a1f681a-96b5-4792-936c-830bdc4fc67f-kube-api-access-v8wl9\") pod \"csi-node-driver-gm62f\" (UID: \"7a1f681a-96b5-4792-936c-830bdc4fc67f\") " pod="calico-system/csi-node-driver-gm62f" Sep 13 00:24:26.778024 kubelet[2498]: E0913 00:24:26.777986 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.778024 kubelet[2498]: W0913 00:24:26.778003 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.778297 kubelet[2498]: E0913 00:24:26.778189 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.778297 kubelet[2498]: W0913 00:24:26.778201 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.778297 kubelet[2498]: E0913 00:24:26.778204 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.778297 kubelet[2498]: I0913 00:24:26.778248 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a1f681a-96b5-4792-936c-830bdc4fc67f-kubelet-dir\") pod \"csi-node-driver-gm62f\" (UID: \"7a1f681a-96b5-4792-936c-830bdc4fc67f\") " pod="calico-system/csi-node-driver-gm62f" Sep 13 00:24:26.778297 kubelet[2498]: E0913 00:24:26.778214 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.778626 kubelet[2498]: E0913 00:24:26.778372 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.778626 kubelet[2498]: W0913 00:24:26.778382 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.778626 kubelet[2498]: E0913 00:24:26.778396 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.778626 kubelet[2498]: E0913 00:24:26.778560 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.778626 kubelet[2498]: W0913 00:24:26.778567 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.778626 kubelet[2498]: E0913 00:24:26.778583 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.778626 kubelet[2498]: I0913 00:24:26.778623 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7a1f681a-96b5-4792-936c-830bdc4fc67f-socket-dir\") pod \"csi-node-driver-gm62f\" (UID: \"7a1f681a-96b5-4792-936c-830bdc4fc67f\") " pod="calico-system/csi-node-driver-gm62f" Sep 13 00:24:26.780646 kubelet[2498]: E0913 00:24:26.780614 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.780646 kubelet[2498]: W0913 00:24:26.780639 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.780835 kubelet[2498]: E0913 00:24:26.780663 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.780901 kubelet[2498]: E0913 00:24:26.780885 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.780993 kubelet[2498]: W0913 00:24:26.780900 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.780993 kubelet[2498]: E0913 00:24:26.780921 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.787717 kubelet[2498]: E0913 00:24:26.787417 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.787717 kubelet[2498]: W0913 00:24:26.787494 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.787717 kubelet[2498]: E0913 00:24:26.787533 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.790468 kubelet[2498]: E0913 00:24:26.788874 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.790468 kubelet[2498]: W0913 00:24:26.788909 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.790468 kubelet[2498]: E0913 00:24:26.788943 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.791547 kubelet[2498]: E0913 00:24:26.791513 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.791723 kubelet[2498]: W0913 00:24:26.791697 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.791827 kubelet[2498]: E0913 00:24:26.791809 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.795317 kubelet[2498]: E0913 00:24:26.795270 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.795610 kubelet[2498]: W0913 00:24:26.795579 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.795759 kubelet[2498]: E0913 00:24:26.795740 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.882781 kubelet[2498]: E0913 00:24:26.882743 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.883093 kubelet[2498]: W0913 00:24:26.882935 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.883093 kubelet[2498]: E0913 00:24:26.882968 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.883455 kubelet[2498]: E0913 00:24:26.883383 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.883455 kubelet[2498]: W0913 00:24:26.883396 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.883455 kubelet[2498]: E0913 00:24:26.883414 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.884141 kubelet[2498]: E0913 00:24:26.884104 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.884274 kubelet[2498]: W0913 00:24:26.884126 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.884330 kubelet[2498]: E0913 00:24:26.884285 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.884681 kubelet[2498]: E0913 00:24:26.884661 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.884681 kubelet[2498]: W0913 00:24:26.884676 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.884802 kubelet[2498]: E0913 00:24:26.884688 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.885044 kubelet[2498]: E0913 00:24:26.885028 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.885044 kubelet[2498]: W0913 00:24:26.885041 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.885206 kubelet[2498]: E0913 00:24:26.885103 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.885459 kubelet[2498]: E0913 00:24:26.885395 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.885459 kubelet[2498]: W0913 00:24:26.885455 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.885696 kubelet[2498]: E0913 00:24:26.885525 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.885833 kubelet[2498]: E0913 00:24:26.885818 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.885833 kubelet[2498]: W0913 00:24:26.885831 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.886582 kubelet[2498]: E0913 00:24:26.886501 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.886582 kubelet[2498]: E0913 00:24:26.886536 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.886582 kubelet[2498]: W0913 00:24:26.886548 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.886582 kubelet[2498]: E0913 00:24:26.886569 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.887206 kubelet[2498]: E0913 00:24:26.887081 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.887206 kubelet[2498]: W0913 00:24:26.887098 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.887206 kubelet[2498]: E0913 00:24:26.887120 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.887480 kubelet[2498]: E0913 00:24:26.887369 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.887480 kubelet[2498]: W0913 00:24:26.887381 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.887480 kubelet[2498]: E0913 00:24:26.887398 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.887826 kubelet[2498]: E0913 00:24:26.887807 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.887826 kubelet[2498]: W0913 00:24:26.887824 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.888020 kubelet[2498]: E0913 00:24:26.887947 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.888680 kubelet[2498]: E0913 00:24:26.888659 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.888680 kubelet[2498]: W0913 00:24:26.888679 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.888824 kubelet[2498]: E0913 00:24:26.888743 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.889091 kubelet[2498]: E0913 00:24:26.889075 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.889091 kubelet[2498]: W0913 00:24:26.889090 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.889278 kubelet[2498]: E0913 00:24:26.889146 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.890551 kubelet[2498]: E0913 00:24:26.890529 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.890551 kubelet[2498]: W0913 00:24:26.890550 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.890711 kubelet[2498]: E0913 00:24:26.890618 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.890756 kubelet[2498]: E0913 00:24:26.890743 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.890756 kubelet[2498]: W0913 00:24:26.890755 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.891423 kubelet[2498]: E0913 00:24:26.890806 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.891423 kubelet[2498]: E0913 00:24:26.890961 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.891423 kubelet[2498]: W0913 00:24:26.890976 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.891423 kubelet[2498]: E0913 00:24:26.891148 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.891423 kubelet[2498]: E0913 00:24:26.891217 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.891423 kubelet[2498]: W0913 00:24:26.891224 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.891423 kubelet[2498]: E0913 00:24:26.891269 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.891716 kubelet[2498]: E0913 00:24:26.891449 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.891716 kubelet[2498]: W0913 00:24:26.891457 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.891716 kubelet[2498]: E0913 00:24:26.891473 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.891946 kubelet[2498]: E0913 00:24:26.891929 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.891991 kubelet[2498]: W0913 00:24:26.891947 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.891991 kubelet[2498]: E0913 00:24:26.891985 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.892474 kubelet[2498]: E0913 00:24:26.892456 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.892474 kubelet[2498]: W0913 00:24:26.892469 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.892551 kubelet[2498]: E0913 00:24:26.892482 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.892992 kubelet[2498]: E0913 00:24:26.892970 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.892992 kubelet[2498]: W0913 00:24:26.892985 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.893081 kubelet[2498]: E0913 00:24:26.892998 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:26.893290 kubelet[2498]: E0913 00:24:26.893272 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.893290 kubelet[2498]: W0913 00:24:26.893286 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.893356 kubelet[2498]: E0913 00:24:26.893298 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.894264 kubelet[2498]: E0913 00:24:26.894234 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.894344 kubelet[2498]: W0913 00:24:26.894256 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.894344 kubelet[2498]: E0913 00:24:26.894299 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.895643 kubelet[2498]: E0913 00:24:26.895622 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.895643 kubelet[2498]: W0913 00:24:26.895640 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.895725 kubelet[2498]: E0913 00:24:26.895661 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.896634 kubelet[2498]: E0913 00:24:26.896610 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.896634 kubelet[2498]: W0913 00:24:26.896630 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.896751 kubelet[2498]: E0913 00:24:26.896646 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:26.912332 kubelet[2498]: E0913 00:24:26.912197 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:26.912332 kubelet[2498]: W0913 00:24:26.912221 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:26.912332 kubelet[2498]: E0913 00:24:26.912283 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:27.022155 containerd[1465]: time="2025-09-13T00:24:27.021587267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vbqd7,Uid:eeb0e0a9-98f6-4668-9e69-f4dd211731f9,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:27.073039 containerd[1465]: time="2025-09-13T00:24:27.072550111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:27.076129 containerd[1465]: time="2025-09-13T00:24:27.074715950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:27.076129 containerd[1465]: time="2025-09-13T00:24:27.074771119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:27.076129 containerd[1465]: time="2025-09-13T00:24:27.074918569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:27.106315 systemd[1]: Started cri-containerd-ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135.scope - libcontainer container ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135. Sep 13 00:24:27.142539 containerd[1465]: time="2025-09-13T00:24:27.142470159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vbqd7,Uid:eeb0e0a9-98f6-4668-9e69-f4dd211731f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135\"" Sep 13 00:24:28.129233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126480862.mount: Deactivated successfully. Sep 13 00:24:28.242549 kubelet[2498]: E0913 00:24:28.242485 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm62f" podUID="7a1f681a-96b5-4792-936c-830bdc4fc67f" Sep 13 00:24:29.597505 containerd[1465]: time="2025-09-13T00:24:29.597148544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:29.598025 containerd[1465]: time="2025-09-13T00:24:29.597980361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:24:29.598774 containerd[1465]: time="2025-09-13T00:24:29.598739654Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:29.600918 containerd[1465]: time="2025-09-13T00:24:29.600620515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:29.601303 containerd[1465]: time="2025-09-13T00:24:29.601272844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.983610165s" Sep 13 
00:24:29.601363 containerd[1465]: time="2025-09-13T00:24:29.601302410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:24:29.603365 containerd[1465]: time="2025-09-13T00:24:29.603338929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:24:29.640235 containerd[1465]: time="2025-09-13T00:24:29.640178397Z" level=info msg="CreateContainer within sandbox \"0de1d10af96ee00c1b4f551aa6cce32d594d5321460d6a6f5aa13925dc24ce42\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:24:29.668098 containerd[1465]: time="2025-09-13T00:24:29.668041235Z" level=info msg="CreateContainer within sandbox \"0de1d10af96ee00c1b4f551aa6cce32d594d5321460d6a6f5aa13925dc24ce42\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5ad88b60d49554523d52ec95e6023198e6631d5d6fd669d90bea4e245a6bf9b6\"" Sep 13 00:24:29.670548 containerd[1465]: time="2025-09-13T00:24:29.669329137Z" level=info msg="StartContainer for \"5ad88b60d49554523d52ec95e6023198e6631d5d6fd669d90bea4e245a6bf9b6\"" Sep 13 00:24:29.813970 systemd[1]: Started cri-containerd-5ad88b60d49554523d52ec95e6023198e6631d5d6fd669d90bea4e245a6bf9b6.scope - libcontainer container 5ad88b60d49554523d52ec95e6023198e6631d5d6fd669d90bea4e245a6bf9b6. Sep 13 00:24:29.941651 containerd[1465]: time="2025-09-13T00:24:29.941589060Z" level=info msg="StartContainer for \"5ad88b60d49554523d52ec95e6023198e6631d5d6fd669d90bea4e245a6bf9b6\" returns successfully" Sep 13 00:24:30.243451 kubelet[2498]: E0913 00:24:30.243161 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm62f" podUID="7a1f681a-96b5-4792-936c-830bdc4fc67f" Sep 13 00:24:30.406767 kubelet[2498]: E0913 00:24:30.406716 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:30.449274 kubelet[2498]: I0913 00:24:30.449176 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8d4987896-gvt69" podStartSLOduration=1.463030252 podStartE2EDuration="4.448516363s" podCreationTimestamp="2025-09-13 00:24:26 +0000 UTC" firstStartedPulling="2025-09-13 00:24:26.616937889 +0000 UTC m=+21.499007036" lastFinishedPulling="2025-09-13 00:24:29.602423986 +0000 UTC m=+24.484493147" observedRunningTime="2025-09-13 00:24:30.443767164 +0000 UTC m=+25.325836330" watchObservedRunningTime="2025-09-13 00:24:30.448516363 +0000 UTC m=+25.330585529" Sep 13 00:24:30.459583 kubelet[2498]: E0913 00:24:30.459534 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.459583 kubelet[2498]: W0913 00:24:30.459574 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.459752 kubelet[2498]: E0913 00:24:30.459600 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
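The "Observed pod startup duration" record above is internally consistent: podStartSLOduration appears to be the end-to-end latency (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go check of that arithmetic, using the wall-clock timestamps copied from the record (a sketch; the subtraction rule is inferred from the tracker's output, not quoted from its source):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Layout matching Go's time.Time.String() format used in the record.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    parse := func(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }
    created := parse("2025-09-13 00:24:26 +0000 UTC")
    firstPull := parse("2025-09-13 00:24:26.616937889 +0000 UTC")
    lastPull := parse("2025-09-13 00:24:29.602423986 +0000 UTC")
    observed := parse("2025-09-13 00:24:30.448516363 +0000 UTC")

    e2e := observed.Sub(created)       // 4.448516363s == podStartE2EDuration
    pulling := lastPull.Sub(firstPull) // 2.985486097s spent pulling images
    // e2e - pulling ≈ 1.463030266s ≈ the logged podStartSLOduration.
    fmt.Println(e2e, pulling, e2e-pulling)
}

Subtracting the monotonic offsets instead (m=+24.484493147 - m=+21.499007036 = 2.985486111s) reproduces the logged 1.463030252 exactly; the wall-clock version above lands within ~14ns of it.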
Error: unexpected end of JSON input" Sep 13 00:24:30.460225 kubelet[2498]: E0913 00:24:30.459814 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.460225 kubelet[2498]: W0913 00:24:30.459823 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.460225 kubelet[2498]: E0913 00:24:30.459835 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.460225 kubelet[2498]: E0913 00:24:30.460045 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.460225 kubelet[2498]: W0913 00:24:30.460055 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.460225 kubelet[2498]: E0913 00:24:30.460069 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.461261 kubelet[2498]: E0913 00:24:30.461085 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.461261 kubelet[2498]: W0913 00:24:30.461107 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.461261 kubelet[2498]: E0913 00:24:30.461142 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.462558 kubelet[2498]: E0913 00:24:30.461548 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.462558 kubelet[2498]: W0913 00:24:30.461562 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.462558 kubelet[2498]: E0913 00:24:30.461577 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.462916 kubelet[2498]: E0913 00:24:30.462790 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.462916 kubelet[2498]: W0913 00:24:30.462804 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.462916 kubelet[2498]: E0913 00:24:30.462819 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:30.463299 kubelet[2498]: E0913 00:24:30.463167 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.463299 kubelet[2498]: W0913 00:24:30.463190 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.463299 kubelet[2498]: E0913 00:24:30.463212 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.464024 kubelet[2498]: E0913 00:24:30.463900 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.464024 kubelet[2498]: W0913 00:24:30.463913 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.464024 kubelet[2498]: E0913 00:24:30.463925 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.464204 kubelet[2498]: E0913 00:24:30.464195 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.464296 kubelet[2498]: W0913 00:24:30.464246 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.464296 kubelet[2498]: E0913 00:24:30.464259 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.464643 kubelet[2498]: E0913 00:24:30.464584 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.464643 kubelet[2498]: W0913 00:24:30.464595 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.464643 kubelet[2498]: E0913 00:24:30.464606 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.465005 kubelet[2498]: E0913 00:24:30.464942 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.465005 kubelet[2498]: W0913 00:24:30.464953 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.465005 kubelet[2498]: E0913 00:24:30.464963 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:30.465331 kubelet[2498]: E0913 00:24:30.465245 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.465331 kubelet[2498]: W0913 00:24:30.465254 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.465331 kubelet[2498]: E0913 00:24:30.465264 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.466246 kubelet[2498]: E0913 00:24:30.466125 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.466246 kubelet[2498]: W0913 00:24:30.466141 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.466246 kubelet[2498]: E0913 00:24:30.466154 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.466534 kubelet[2498]: E0913 00:24:30.466421 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.466534 kubelet[2498]: W0913 00:24:30.466446 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.466534 kubelet[2498]: E0913 00:24:30.466457 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.466928 kubelet[2498]: E0913 00:24:30.466845 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.466928 kubelet[2498]: W0913 00:24:30.466857 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.466928 kubelet[2498]: E0913 00:24:30.466868 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.519221 kubelet[2498]: E0913 00:24:30.517892 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.519221 kubelet[2498]: W0913 00:24:30.518499 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.519221 kubelet[2498]: E0913 00:24:30.518563 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:30.520997 kubelet[2498]: E0913 00:24:30.520970 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.521736 kubelet[2498]: W0913 00:24:30.521616 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.522523 kubelet[2498]: E0913 00:24:30.522062 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.523489 kubelet[2498]: E0913 00:24:30.523388 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.523489 kubelet[2498]: W0913 00:24:30.523405 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.523716 kubelet[2498]: E0913 00:24:30.523686 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.524600 kubelet[2498]: E0913 00:24:30.524547 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.524600 kubelet[2498]: W0913 00:24:30.524572 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.524991 kubelet[2498]: E0913 00:24:30.524829 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.525171 kubelet[2498]: E0913 00:24:30.525134 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.525171 kubelet[2498]: W0913 00:24:30.525151 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.526680 kubelet[2498]: E0913 00:24:30.526544 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.526921 kubelet[2498]: E0913 00:24:30.526908 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.527055 kubelet[2498]: W0913 00:24:30.526974 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.527167 kubelet[2498]: E0913 00:24:30.527134 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:30.527498 kubelet[2498]: E0913 00:24:30.527380 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.527498 kubelet[2498]: W0913 00:24:30.527393 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.527633 kubelet[2498]: E0913 00:24:30.527619 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.527836 kubelet[2498]: E0913 00:24:30.527731 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.527836 kubelet[2498]: W0913 00:24:30.527740 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.527836 kubelet[2498]: E0913 00:24:30.527757 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.528100 kubelet[2498]: E0913 00:24:30.527995 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.528100 kubelet[2498]: W0913 00:24:30.528005 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.528100 kubelet[2498]: E0913 00:24:30.528022 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.528395 kubelet[2498]: E0913 00:24:30.528382 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.528642 kubelet[2498]: W0913 00:24:30.528460 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.528642 kubelet[2498]: E0913 00:24:30.528481 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.528771 kubelet[2498]: E0913 00:24:30.528750 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.528771 kubelet[2498]: W0913 00:24:30.528768 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.528834 kubelet[2498]: E0913 00:24:30.528784 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:30.529788 kubelet[2498]: E0913 00:24:30.529630 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.529788 kubelet[2498]: W0913 00:24:30.529650 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.529788 kubelet[2498]: E0913 00:24:30.529665 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.529920 kubelet[2498]: E0913 00:24:30.529887 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.529920 kubelet[2498]: W0913 00:24:30.529895 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.529920 kubelet[2498]: E0913 00:24:30.529904 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.530418 kubelet[2498]: E0913 00:24:30.530395 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.530562 kubelet[2498]: W0913 00:24:30.530505 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.531036 kubelet[2498]: E0913 00:24:30.530706 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.531036 kubelet[2498]: E0913 00:24:30.530838 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.531036 kubelet[2498]: W0913 00:24:30.530847 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.531036 kubelet[2498]: E0913 00:24:30.530863 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.531626 kubelet[2498]: E0913 00:24:30.531607 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.531700 kubelet[2498]: W0913 00:24:30.531688 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.531864 kubelet[2498]: E0913 00:24:30.531749 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:30.532032 kubelet[2498]: E0913 00:24:30.532013 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.532032 kubelet[2498]: W0913 00:24:30.532031 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.532099 kubelet[2498]: E0913 00:24:30.532047 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.533081 kubelet[2498]: E0913 00:24:30.533058 2498 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:30.533081 kubelet[2498]: W0913 00:24:30.533075 2498 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:30.533174 kubelet[2498]: E0913 00:24:30.533087 2498 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:30.617349 systemd[1]: run-containerd-runc-k8s.io-5ad88b60d49554523d52ec95e6023198e6631d5d6fd669d90bea4e245a6bf9b6-runc.gCxWdJ.mount: Deactivated successfully. Sep 13 00:24:31.069759 containerd[1465]: time="2025-09-13T00:24:31.069656894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:31.070662 containerd[1465]: time="2025-09-13T00:24:31.070517197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:24:31.071205 containerd[1465]: time="2025-09-13T00:24:31.071163640Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:31.074034 containerd[1465]: time="2025-09-13T00:24:31.073682790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:31.074547 containerd[1465]: time="2025-09-13T00:24:31.074513706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.471142249s" Sep 13 00:24:31.074547 containerd[1465]: time="2025-09-13T00:24:31.074551458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:24:31.079064 containerd[1465]: time="2025-09-13T00:24:31.078986724Z" level=info msg="CreateContainer within sandbox \"ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:24:31.114407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770665238.mount: Deactivated successfully. Sep 13 00:24:31.148793 containerd[1465]: time="2025-09-13T00:24:31.148713912Z" level=info msg="CreateContainer within sandbox \"ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb\"" Sep 13 00:24:31.149806 containerd[1465]: time="2025-09-13T00:24:31.149664403Z" level=info msg="StartContainer for \"41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb\"" Sep 13 00:24:31.236716 systemd[1]: Started cri-containerd-41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb.scope - libcontainer container 41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb. Sep 13 00:24:31.283030 containerd[1465]: time="2025-09-13T00:24:31.282984602Z" level=info msg="StartContainer for \"41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb\" returns successfully" Sep 13 00:24:31.301821 systemd[1]: cri-containerd-41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb.scope: Deactivated successfully. Sep 13 00:24:31.415473 kubelet[2498]: I0913 00:24:31.414286 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:24:31.415473 kubelet[2498]: E0913 00:24:31.414700 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:31.437932 containerd[1465]: time="2025-09-13T00:24:31.404923691Z" level=info msg="shim disconnected" id=41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb namespace=k8s.io Sep 13 00:24:31.437932 containerd[1465]: time="2025-09-13T00:24:31.437757404Z" level=warning msg="cleaning up after shim disconnected" id=41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb namespace=k8s.io Sep 13 00:24:31.437932 containerd[1465]: time="2025-09-13T00:24:31.437783783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:24:31.617366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41292a8f13b38894a06d2c8d2471061a8d1c7f190626e2927dd78a115fbd64eb-rootfs.mount: Deactivated successfully. 
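The flexvol-driver container that just ran is a short-lived init container from the pod2daemon-flexvol image pulled above: it starts, installs the uds FlexVolume driver onto the host, and exits, which is why its scope is deactivated and its shim cleaned up moments after StartContainer returns. A rough sketch of the install step follows, assuming a simplified copy into the plugin directory kubelet probes; the source path and helper function are illustrative, not the image's real layout.

// Illustrative install step for a FlexVolume driver binary. The destination
// matches the path kubelet was probing in the errors earlier; once the file
// exists and is executable, the "executable file not found" errors stop.
package main

import (
	"io"
	"os"
	"path/filepath"
)

func installDriver(src, pluginDir string) error {
	dst := filepath.Join(pluginDir, "nodeagent~uds", "uds")
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	// 0o755 so kubelet can execute the driver when it re-probes the directory.
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// "/flexvol" is a placeholder source path, not the image's real layout.
	if err := installDriver("/flexvol", "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"); err != nil {
		os.Exit(1)
	}
}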
Sep 13 00:24:32.242189 kubelet[2498]: E0913 00:24:32.242074 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm62f" podUID="7a1f681a-96b5-4792-936c-830bdc4fc67f" Sep 13 00:24:32.423196 containerd[1465]: time="2025-09-13T00:24:32.422825781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:24:34.242620 kubelet[2498]: E0913 00:24:34.242569 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm62f" podUID="7a1f681a-96b5-4792-936c-830bdc4fc67f" Sep 13 00:24:36.091150 containerd[1465]: time="2025-09-13T00:24:36.091080675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:36.092016 containerd[1465]: time="2025-09-13T00:24:36.091876607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:24:36.092732 containerd[1465]: time="2025-09-13T00:24:36.092411929Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:36.094518 containerd[1465]: time="2025-09-13T00:24:36.094486239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:36.095429 containerd[1465]: time="2025-09-13T00:24:36.095397346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.672516219s" Sep 13 00:24:36.095523 containerd[1465]: time="2025-09-13T00:24:36.095448034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:24:36.106338 containerd[1465]: time="2025-09-13T00:24:36.106274229Z" level=info msg="CreateContainer within sandbox \"ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:24:36.172595 containerd[1465]: time="2025-09-13T00:24:36.172458980Z" level=info msg="CreateContainer within sandbox \"ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4\"" Sep 13 00:24:36.173471 containerd[1465]: time="2025-09-13T00:24:36.173305587Z" level=info msg="StartContainer for \"fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4\"" Sep 13 00:24:36.216739 systemd[1]: Started cri-containerd-fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4.scope - libcontainer container fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4. 
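The csi-node-driver pod keeps being skipped with NetworkReady=false because no CNI network configuration exists on the node yet; the install-cni container started here is what eventually writes one. A rough sketch of the readiness gate follows, assuming a kubelet-style scan of /etc/cni/net.d (simplified; the real code path goes through libcni).

// Simplified model of the "cni plugin not initialized" condition: until a
// network config file appears in the CNI conf directory, every sandbox that
// needs pod networking is rejected with NetworkReady=false.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniReady(confDir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pattern))
		if len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	if !cniReady("/etc/cni/net.d") {
		fmt.Fprintln(os.Stderr, "network is not ready: cni plugin not initialized")
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true")
}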
Sep 13 00:24:36.242411 kubelet[2498]: E0913 00:24:36.242330 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gm62f" podUID="7a1f681a-96b5-4792-936c-830bdc4fc67f" Sep 13 00:24:36.253366 containerd[1465]: time="2025-09-13T00:24:36.253316982Z" level=info msg="StartContainer for \"fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4\" returns successfully" Sep 13 00:24:36.960207 systemd[1]: cri-containerd-fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4.scope: Deactivated successfully. Sep 13 00:24:36.997259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4-rootfs.mount: Deactivated successfully. Sep 13 00:24:37.001602 containerd[1465]: time="2025-09-13T00:24:37.001498089Z" level=info msg="shim disconnected" id=fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4 namespace=k8s.io Sep 13 00:24:37.001942 containerd[1465]: time="2025-09-13T00:24:37.001602141Z" level=warning msg="cleaning up after shim disconnected" id=fcbb558ae31afc0ad09e7a91e4119c0f28d2fa7897db0b02f6cb4e65813cbfa4 namespace=k8s.io Sep 13 00:24:37.001942 containerd[1465]: time="2025-09-13T00:24:37.001638043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:24:37.012629 kubelet[2498]: I0913 00:24:37.011801 2498 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:24:37.085228 systemd[1]: Created slice kubepods-burstable-pod8a5a1c5a_3908_4e95_aa11_b97be572df2c.slice - libcontainer container kubepods-burstable-pod8a5a1c5a_3908_4e95_aa11_b97be572df2c.slice. Sep 13 00:24:37.099461 kubelet[2498]: W0913 00:24:37.098162 2498 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081.3.5-n-9b8e9ee716" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-9b8e9ee716' and this object Sep 13 00:24:37.099461 kubelet[2498]: W0913 00:24:37.098404 2498 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081.3.5-n-9b8e9ee716" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-n-9b8e9ee716' and this object Sep 13 00:24:37.100016 kubelet[2498]: E0913 00:24:37.099970 2498 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.5-n-9b8e9ee716\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-9b8e9ee716' and this object" logger="UnhandledError" Sep 13 00:24:37.100418 systemd[1]: Created slice kubepods-besteffort-podfb440148_9fbd_4f08_a9ed_06e94ecc9e57.slice - libcontainer container kubepods-besteffort-podfb440148_9fbd_4f08_a9ed_06e94ecc9e57.slice. 
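The reflector failures for whisker-ca-bundle and whisker-backend-key-pair are the node authorizer at work: a kubelet may read a secret or configmap only once a pod that references it is bound to its node, and the whisker pod's volumes are registered only moments later (see the VerifyControllerAttachedVolume entries that follow). A toy model of that relationship check, with illustrative types rather than the real authorizer's API:

// Toy model of the node authorizer's graph check behind "no relationship
// found between node ... and this object": reads are allowed only when some
// pod bound to the node references the secret/configmap in question.
package main

import "fmt"

type objRef struct{ namespace, name string }

// refsByNode maps node name -> configmaps/secrets referenced by pods bound there.
var refsByNode = map[string][]objRef{}

func nodeCanRead(node string, ref objRef) bool {
	for _, r := range refsByNode[node] {
		if r == ref {
			return true
		}
	}
	return false
}

func main() {
	ref := objRef{"calico-system", "whisker-ca-bundle"}
	node := "ci-4081.3.5-n-9b8e9ee716"
	fmt.Println(nodeCanRead(node, ref)) // false: pod not yet bound, list is forbidden
	refsByNode[node] = append(refsByNode[node], ref)
	fmt.Println(nodeCanRead(node, ref)) // true: binding exists, read is permitted
}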
Sep 13 00:24:37.101792 kubelet[2498]: E0913 00:24:37.101240 2498 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081.3.5-n-9b8e9ee716\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-n-9b8e9ee716' and this object" logger="UnhandledError" Sep 13 00:24:37.120880 systemd[1]: Created slice kubepods-burstable-pod3c03440a_ff3f_462d_ba46_2398b0c778c8.slice - libcontainer container kubepods-burstable-pod3c03440a_ff3f_462d_ba46_2398b0c778c8.slice. Sep 13 00:24:37.133002 systemd[1]: Created slice kubepods-besteffort-podfd8eee52_c543_48a2_abe7_510261dd737e.slice - libcontainer container kubepods-besteffort-podfd8eee52_c543_48a2_abe7_510261dd737e.slice. Sep 13 00:24:37.142442 systemd[1]: Created slice kubepods-besteffort-podeedd9846_66f8_4fbc_912d_f953222ec80b.slice - libcontainer container kubepods-besteffort-podeedd9846_66f8_4fbc_912d_f953222ec80b.slice. Sep 13 00:24:37.166216 systemd[1]: Created slice kubepods-besteffort-pod88fd7908_d362_45b9_9c05_84c56d420f5b.slice - libcontainer container kubepods-besteffort-pod88fd7908_d362_45b9_9c05_84c56d420f5b.slice. Sep 13 00:24:37.171832 systemd[1]: Created slice kubepods-besteffort-pod4a2b5f2c_0765_434f_910d_07d9f5ff57ab.slice - libcontainer container kubepods-besteffort-pod4a2b5f2c_0765_434f_910d_07d9f5ff57ab.slice. Sep 13 00:24:37.183819 systemd[1]: Created slice kubepods-besteffort-podc14c5f57_0bd2_4e4c_bbc8_39406c393d42.slice - libcontainer container kubepods-besteffort-podc14c5f57_0bd2_4e4c_bbc8_39406c393d42.slice. Sep 13 00:24:37.193948 kubelet[2498]: I0913 00:24:37.193903 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c03440a-ff3f-462d-ba46-2398b0c778c8-config-volume\") pod \"coredns-668d6bf9bc-bmgkp\" (UID: \"3c03440a-ff3f-462d-ba46-2398b0c778c8\") " pod="kube-system/coredns-668d6bf9bc-bmgkp" Sep 13 00:24:37.193948 kubelet[2498]: I0913 00:24:37.193953 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6bg2\" (UniqueName: \"kubernetes.io/projected/3c03440a-ff3f-462d-ba46-2398b0c778c8-kube-api-access-c6bg2\") pod \"coredns-668d6bf9bc-bmgkp\" (UID: \"3c03440a-ff3f-462d-ba46-2398b0c778c8\") " pod="kube-system/coredns-668d6bf9bc-bmgkp" Sep 13 00:24:37.194299 kubelet[2498]: I0913 00:24:37.193978 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb440148-9fbd-4f08-a9ed-06e94ecc9e57-tigera-ca-bundle\") pod \"calico-kube-controllers-856fbd7bbd-gmpj2\" (UID: \"fb440148-9fbd-4f08-a9ed-06e94ecc9e57\") " pod="calico-system/calico-kube-controllers-856fbd7bbd-gmpj2" Sep 13 00:24:37.194299 kubelet[2498]: I0913 00:24:37.193999 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97q52\" (UniqueName: \"kubernetes.io/projected/8a5a1c5a-3908-4e95-aa11-b97be572df2c-kube-api-access-97q52\") pod \"coredns-668d6bf9bc-6tkbb\" (UID: \"8a5a1c5a-3908-4e95-aa11-b97be572df2c\") " pod="kube-system/coredns-668d6bf9bc-6tkbb" Sep 13 00:24:37.194299 kubelet[2498]: I0913 00:24:37.194026 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-8szkt\" (UniqueName: \"kubernetes.io/projected/fd8eee52-c543-48a2-abe7-510261dd737e-kube-api-access-8szkt\") pod \"whisker-69f4f6c884-bhqhf\" (UID: \"fd8eee52-c543-48a2-abe7-510261dd737e\") " pod="calico-system/whisker-69f4f6c884-bhqhf" Sep 13 00:24:37.194299 kubelet[2498]: I0913 00:24:37.194064 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eedd9846-66f8-4fbc-912d-f953222ec80b-calico-apiserver-certs\") pod \"calico-apiserver-d86d44bf-ff8mw\" (UID: \"eedd9846-66f8-4fbc-912d-f953222ec80b\") " pod="calico-apiserver/calico-apiserver-d86d44bf-ff8mw" Sep 13 00:24:37.194299 kubelet[2498]: I0913 00:24:37.194081 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-backend-key-pair\") pod \"whisker-69f4f6c884-bhqhf\" (UID: \"fd8eee52-c543-48a2-abe7-510261dd737e\") " pod="calico-system/whisker-69f4f6c884-bhqhf" Sep 13 00:24:37.194739 kubelet[2498]: I0913 00:24:37.194101 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-ca-bundle\") pod \"whisker-69f4f6c884-bhqhf\" (UID: \"fd8eee52-c543-48a2-abe7-510261dd737e\") " pod="calico-system/whisker-69f4f6c884-bhqhf" Sep 13 00:24:37.194739 kubelet[2498]: I0913 00:24:37.194120 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a5a1c5a-3908-4e95-aa11-b97be572df2c-config-volume\") pod \"coredns-668d6bf9bc-6tkbb\" (UID: \"8a5a1c5a-3908-4e95-aa11-b97be572df2c\") " pod="kube-system/coredns-668d6bf9bc-6tkbb" Sep 13 00:24:37.194739 kubelet[2498]: I0913 00:24:37.194138 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vtd\" (UniqueName: \"kubernetes.io/projected/eedd9846-66f8-4fbc-912d-f953222ec80b-kube-api-access-h8vtd\") pod \"calico-apiserver-d86d44bf-ff8mw\" (UID: \"eedd9846-66f8-4fbc-912d-f953222ec80b\") " pod="calico-apiserver/calico-apiserver-d86d44bf-ff8mw" Sep 13 00:24:37.194739 kubelet[2498]: I0913 00:24:37.194156 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9tdp\" (UniqueName: \"kubernetes.io/projected/fb440148-9fbd-4f08-a9ed-06e94ecc9e57-kube-api-access-t9tdp\") pod \"calico-kube-controllers-856fbd7bbd-gmpj2\" (UID: \"fb440148-9fbd-4f08-a9ed-06e94ecc9e57\") " pod="calico-system/calico-kube-controllers-856fbd7bbd-gmpj2" Sep 13 00:24:37.294966 kubelet[2498]: I0913 00:24:37.294800 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mds4x\" (UniqueName: \"kubernetes.io/projected/c14c5f57-0bd2-4e4c-bbc8-39406c393d42-kube-api-access-mds4x\") pod \"calico-apiserver-66fc9d466c-2qnrs\" (UID: \"c14c5f57-0bd2-4e4c-bbc8-39406c393d42\") " pod="calico-apiserver/calico-apiserver-66fc9d466c-2qnrs" Sep 13 00:24:37.294966 kubelet[2498]: I0913 00:24:37.294874 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9g9w\" (UniqueName: \"kubernetes.io/projected/88fd7908-d362-45b9-9c05-84c56d420f5b-kube-api-access-q9g9w\") pod 
\"calico-apiserver-66fc9d466c-fpvl5\" (UID: \"88fd7908-d362-45b9-9c05-84c56d420f5b\") " pod="calico-apiserver/calico-apiserver-66fc9d466c-fpvl5" Sep 13 00:24:37.294966 kubelet[2498]: I0913 00:24:37.294893 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4a2b5f2c-0765-434f-910d-07d9f5ff57ab-goldmane-key-pair\") pod \"goldmane-54d579b49d-nn2np\" (UID: \"4a2b5f2c-0765-434f-910d-07d9f5ff57ab\") " pod="calico-system/goldmane-54d579b49d-nn2np" Sep 13 00:24:37.294966 kubelet[2498]: I0913 00:24:37.294913 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92kph\" (UniqueName: \"kubernetes.io/projected/4a2b5f2c-0765-434f-910d-07d9f5ff57ab-kube-api-access-92kph\") pod \"goldmane-54d579b49d-nn2np\" (UID: \"4a2b5f2c-0765-434f-910d-07d9f5ff57ab\") " pod="calico-system/goldmane-54d579b49d-nn2np" Sep 13 00:24:37.294966 kubelet[2498]: I0913 00:24:37.294939 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a2b5f2c-0765-434f-910d-07d9f5ff57ab-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-nn2np\" (UID: \"4a2b5f2c-0765-434f-910d-07d9f5ff57ab\") " pod="calico-system/goldmane-54d579b49d-nn2np" Sep 13 00:24:37.295711 kubelet[2498]: I0913 00:24:37.295000 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c14c5f57-0bd2-4e4c-bbc8-39406c393d42-calico-apiserver-certs\") pod \"calico-apiserver-66fc9d466c-2qnrs\" (UID: \"c14c5f57-0bd2-4e4c-bbc8-39406c393d42\") " pod="calico-apiserver/calico-apiserver-66fc9d466c-2qnrs" Sep 13 00:24:37.295711 kubelet[2498]: I0913 00:24:37.295017 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a2b5f2c-0765-434f-910d-07d9f5ff57ab-config\") pod \"goldmane-54d579b49d-nn2np\" (UID: \"4a2b5f2c-0765-434f-910d-07d9f5ff57ab\") " pod="calico-system/goldmane-54d579b49d-nn2np" Sep 13 00:24:37.295711 kubelet[2498]: I0913 00:24:37.295052 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/88fd7908-d362-45b9-9c05-84c56d420f5b-calico-apiserver-certs\") pod \"calico-apiserver-66fc9d466c-fpvl5\" (UID: \"88fd7908-d362-45b9-9c05-84c56d420f5b\") " pod="calico-apiserver/calico-apiserver-66fc9d466c-fpvl5" Sep 13 00:24:37.391640 kubelet[2498]: E0913 00:24:37.391570 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:37.392424 containerd[1465]: time="2025-09-13T00:24:37.392360925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tkbb,Uid:8a5a1c5a-3908-4e95-aa11-b97be572df2c,Namespace:kube-system,Attempt:0,}" Sep 13 00:24:37.414140 containerd[1465]: time="2025-09-13T00:24:37.413747790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856fbd7bbd-gmpj2,Uid:fb440148-9fbd-4f08-a9ed-06e94ecc9e57,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:37.434478 kubelet[2498]: E0913 00:24:37.433151 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:37.442566 containerd[1465]: time="2025-09-13T00:24:37.442506239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bmgkp,Uid:3c03440a-ff3f-462d-ba46-2398b0c778c8,Namespace:kube-system,Attempt:0,}" Sep 13 00:24:37.456998 containerd[1465]: time="2025-09-13T00:24:37.456957240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:24:37.460099 containerd[1465]: time="2025-09-13T00:24:37.459208698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d86d44bf-ff8mw,Uid:eedd9846-66f8-4fbc-912d-f953222ec80b,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:24:37.481477 containerd[1465]: time="2025-09-13T00:24:37.480324307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fc9d466c-fpvl5,Uid:88fd7908-d362-45b9-9c05-84c56d420f5b,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:24:37.487447 containerd[1465]: time="2025-09-13T00:24:37.487384932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nn2np,Uid:4a2b5f2c-0765-434f-910d-07d9f5ff57ab,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:37.491461 containerd[1465]: time="2025-09-13T00:24:37.491388313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fc9d466c-2qnrs,Uid:c14c5f57-0bd2-4e4c-bbc8-39406c393d42,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:24:37.911651 containerd[1465]: time="2025-09-13T00:24:37.911484818Z" level=error msg="Failed to destroy network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.915504 containerd[1465]: time="2025-09-13T00:24:37.915048547Z" level=error msg="Failed to destroy network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.918553 containerd[1465]: time="2025-09-13T00:24:37.918410669Z" level=error msg="encountered an error cleaning up failed sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.918729 containerd[1465]: time="2025-09-13T00:24:37.918581779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tkbb,Uid:8a5a1c5a-3908-4e95-aa11-b97be572df2c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.919128 containerd[1465]: time="2025-09-13T00:24:37.918832588Z" level=error msg="encountered an error cleaning up failed sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.920165 containerd[1465]: time="2025-09-13T00:24:37.920096346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bmgkp,Uid:3c03440a-ff3f-462d-ba46-2398b0c778c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.921322 kubelet[2498]: E0913 00:24:37.920623 2498 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.921322 kubelet[2498]: E0913 00:24:37.920714 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6tkbb" Sep 13 00:24:37.921322 kubelet[2498]: E0913 00:24:37.920746 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6tkbb" Sep 13 00:24:37.921611 kubelet[2498]: E0913 00:24:37.920809 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6tkbb_kube-system(8a5a1c5a-3908-4e95-aa11-b97be572df2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6tkbb_kube-system(8a5a1c5a-3908-4e95-aa11-b97be572df2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6tkbb" podUID="8a5a1c5a-3908-4e95-aa11-b97be572df2c" Sep 13 00:24:37.934766 containerd[1465]: time="2025-09-13T00:24:37.934649583Z" level=error msg="Failed to destroy network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.935194 containerd[1465]: time="2025-09-13T00:24:37.935002970Z" level=error msg="encountered an error cleaning up failed sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.935194 containerd[1465]: time="2025-09-13T00:24:37.935066020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nn2np,Uid:4a2b5f2c-0765-434f-910d-07d9f5ff57ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.935366 containerd[1465]: time="2025-09-13T00:24:37.935295287Z" level=error msg="Failed to destroy network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.936900 containerd[1465]: time="2025-09-13T00:24:37.935605734Z" level=error msg="encountered an error cleaning up failed sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.936900 containerd[1465]: time="2025-09-13T00:24:37.935652958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856fbd7bbd-gmpj2,Uid:fb440148-9fbd-4f08-a9ed-06e94ecc9e57,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.937139 kubelet[2498]: E0913 00:24:37.935635 2498 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.937139 kubelet[2498]: E0913 00:24:37.935709 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bmgkp" Sep 13 00:24:37.937139 kubelet[2498]: E0913 00:24:37.935741 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bmgkp" 
Sep 13 00:24:37.937304 kubelet[2498]: E0913 00:24:37.935795 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bmgkp_kube-system(3c03440a-ff3f-462d-ba46-2398b0c778c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bmgkp_kube-system(3c03440a-ff3f-462d-ba46-2398b0c778c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bmgkp" podUID="3c03440a-ff3f-462d-ba46-2398b0c778c8" Sep 13 00:24:37.939194 kubelet[2498]: E0913 00:24:37.938179 2498 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.939194 kubelet[2498]: E0913 00:24:37.938253 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-856fbd7bbd-gmpj2" Sep 13 00:24:37.939194 kubelet[2498]: E0913 00:24:37.938286 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-856fbd7bbd-gmpj2" Sep 13 00:24:37.939554 kubelet[2498]: E0913 00:24:37.938351 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-856fbd7bbd-gmpj2_calico-system(fb440148-9fbd-4f08-a9ed-06e94ecc9e57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-856fbd7bbd-gmpj2_calico-system(fb440148-9fbd-4f08-a9ed-06e94ecc9e57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-856fbd7bbd-gmpj2" podUID="fb440148-9fbd-4f08-a9ed-06e94ecc9e57" Sep 13 00:24:37.939554 kubelet[2498]: E0913 00:24:37.938404 2498 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 
13 00:24:37.939554 kubelet[2498]: E0913 00:24:37.938461 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-nn2np" Sep 13 00:24:37.939749 kubelet[2498]: E0913 00:24:37.938482 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-nn2np" Sep 13 00:24:37.939749 kubelet[2498]: E0913 00:24:37.938517 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-nn2np_calico-system(4a2b5f2c-0765-434f-910d-07d9f5ff57ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-nn2np_calico-system(4a2b5f2c-0765-434f-910d-07d9f5ff57ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-nn2np" podUID="4a2b5f2c-0765-434f-910d-07d9f5ff57ab" Sep 13 00:24:37.941494 containerd[1465]: time="2025-09-13T00:24:37.941129569Z" level=error msg="Failed to destroy network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.943129 containerd[1465]: time="2025-09-13T00:24:37.943014518Z" level=error msg="encountered an error cleaning up failed sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.944726 containerd[1465]: time="2025-09-13T00:24:37.944490021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fc9d466c-2qnrs,Uid:c14c5f57-0bd2-4e4c-bbc8-39406c393d42,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.946164 kubelet[2498]: E0913 00:24:37.945026 2498 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.946164 kubelet[2498]: E0913 00:24:37.945113 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66fc9d466c-2qnrs" Sep 13 00:24:37.946164 kubelet[2498]: E0913 00:24:37.945175 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66fc9d466c-2qnrs" Sep 13 00:24:37.946411 kubelet[2498]: E0913 00:24:37.945243 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66fc9d466c-2qnrs_calico-apiserver(c14c5f57-0bd2-4e4c-bbc8-39406c393d42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66fc9d466c-2qnrs_calico-apiserver(c14c5f57-0bd2-4e4c-bbc8-39406c393d42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66fc9d466c-2qnrs" podUID="c14c5f57-0bd2-4e4c-bbc8-39406c393d42" Sep 13 00:24:37.953476 containerd[1465]: time="2025-09-13T00:24:37.953148316Z" level=error msg="Failed to destroy network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.955254 containerd[1465]: time="2025-09-13T00:24:37.954798563Z" level=error msg="encountered an error cleaning up failed sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.955636 containerd[1465]: time="2025-09-13T00:24:37.955597559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fc9d466c-fpvl5,Uid:88fd7908-d362-45b9-9c05-84c56d420f5b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.962690 kubelet[2498]: E0913 00:24:37.962612 2498 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.962690 kubelet[2498]: E0913 00:24:37.962696 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66fc9d466c-fpvl5" Sep 13 00:24:37.963720 kubelet[2498]: E0913 00:24:37.962719 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66fc9d466c-fpvl5" Sep 13 00:24:37.963720 kubelet[2498]: E0913 00:24:37.962768 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66fc9d466c-fpvl5_calico-apiserver(88fd7908-d362-45b9-9c05-84c56d420f5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66fc9d466c-fpvl5_calico-apiserver(88fd7908-d362-45b9-9c05-84c56d420f5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66fc9d466c-fpvl5" podUID="88fd7908-d362-45b9-9c05-84c56d420f5b" Sep 13 00:24:37.967193 containerd[1465]: time="2025-09-13T00:24:37.967027391Z" level=error msg="Failed to destroy network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.968910 containerd[1465]: time="2025-09-13T00:24:37.968739115Z" level=error msg="encountered an error cleaning up failed sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.968910 containerd[1465]: time="2025-09-13T00:24:37.968838568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d86d44bf-ff8mw,Uid:eedd9846-66f8-4fbc-912d-f953222ec80b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.969566 kubelet[2498]: E0913 00:24:37.969383 2498 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:37.969566 kubelet[2498]: E0913 00:24:37.969518 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d86d44bf-ff8mw" Sep 13 00:24:37.969888 kubelet[2498]: E0913 00:24:37.969550 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d86d44bf-ff8mw" Sep 13 00:24:37.969888 kubelet[2498]: E0913 00:24:37.969836 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d86d44bf-ff8mw_calico-apiserver(eedd9846-66f8-4fbc-912d-f953222ec80b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d86d44bf-ff8mw_calico-apiserver(eedd9846-66f8-4fbc-912d-f953222ec80b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d86d44bf-ff8mw" podUID="eedd9846-66f8-4fbc-912d-f953222ec80b" Sep 13 00:24:38.248545 systemd[1]: Created slice kubepods-besteffort-pod7a1f681a_96b5_4792_936c_830bdc4fc67f.slice - libcontainer container kubepods-besteffort-pod7a1f681a_96b5_4792_936c_830bdc4fc67f.slice. Sep 13 00:24:38.253295 containerd[1465]: time="2025-09-13T00:24:38.253220060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gm62f,Uid:7a1f681a-96b5-4792-936c-830bdc4fc67f,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:38.304844 kubelet[2498]: E0913 00:24:38.303665 2498 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:24:38.304844 kubelet[2498]: E0913 00:24:38.303863 2498 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-ca-bundle podName:fd8eee52-c543-48a2-abe7-510261dd737e nodeName:}" failed. No retries permitted until 2025-09-13 00:24:38.803808885 +0000 UTC m=+33.685878042 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-ca-bundle") pod "whisker-69f4f6c884-bhqhf" (UID: "fd8eee52-c543-48a2-abe7-510261dd737e") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:24:38.304844 kubelet[2498]: E0913 00:24:38.304560 2498 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 13 00:24:38.305497 kubelet[2498]: E0913 00:24:38.304943 2498 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-backend-key-pair podName:fd8eee52-c543-48a2-abe7-510261dd737e nodeName:}" failed. No retries permitted until 2025-09-13 00:24:38.804763438 +0000 UTC m=+33.686832595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-backend-key-pair") pod "whisker-69f4f6c884-bhqhf" (UID: "fd8eee52-c543-48a2-abe7-510261dd737e") : failed to sync secret cache: timed out waiting for the condition Sep 13 00:24:38.342773 containerd[1465]: time="2025-09-13T00:24:38.341924789Z" level=error msg="Failed to destroy network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:38.351553 containerd[1465]: time="2025-09-13T00:24:38.344808946Z" level=error msg="encountered an error cleaning up failed sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:38.351553 containerd[1465]: time="2025-09-13T00:24:38.344928619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gm62f,Uid:7a1f681a-96b5-4792-936c-830bdc4fc67f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:38.346552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a-shm.mount: Deactivated successfully. 
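The MountVolume.SetUp failures above resolve through kubelet's per-volume retry: each failed attempt schedules the next one after a backoff (durationBeforeRetry 500ms here), giving the secret and configmap caches time to sync once the node authorizer permits the reads. A sketch of that pattern, assuming a simple doubling backoff; the real kubelet backoff parameters may differ.

// Illustrative retry-with-backoff loop for a failing volume mount. The delay
// printed plays the role of "No retries permitted until ..." in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func mountWithBackoff(mount func() error, initial, max time.Duration) {
	delay := initial
	for {
		if err := mount(); err == nil {
			return
		} else {
			fmt.Printf("mount failed (%v); no retries permitted for %s\n", err, delay)
		}
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	attempts := 0
	mountWithBackoff(func() error {
		if attempts++; attempts < 3 {
			return errors.New("failed to sync configmap cache: timed out waiting for the condition")
		}
		return nil
	}, 500*time.Millisecond, 2*time.Minute)
}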
Sep 13 00:24:38.352475 kubelet[2498]: E0913 00:24:38.345257 2498 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.352475 kubelet[2498]: E0913 00:24:38.345344 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gm62f"
Sep 13 00:24:38.352475 kubelet[2498]: E0913 00:24:38.345398 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gm62f"
Sep 13 00:24:38.354615 kubelet[2498]: E0913 00:24:38.345488 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gm62f_calico-system(7a1f681a-96b5-4792-936c-830bdc4fc67f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gm62f_calico-system(7a1f681a-96b5-4792-936c-830bdc4fc67f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gm62f" podUID="7a1f681a-96b5-4792-936c-830bdc4fc67f"
Sep 13 00:24:38.455035 kubelet[2498]: I0913 00:24:38.454999 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21"
Sep 13 00:24:38.461220 kubelet[2498]: I0913 00:24:38.460489 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a"
Sep 13 00:24:38.461626 containerd[1465]: time="2025-09-13T00:24:38.461590883Z" level=info msg="StopPodSandbox for \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\""
Sep 13 00:24:38.463574 containerd[1465]: time="2025-09-13T00:24:38.463536253Z" level=info msg="Ensure that sandbox e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a in task-service has been cleanup successfully"
Sep 13 00:24:38.466975 containerd[1465]: time="2025-09-13T00:24:38.466772594Z" level=info msg="StopPodSandbox for \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\""
Sep 13 00:24:38.467334 kubelet[2498]: I0913 00:24:38.467309 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f"
Sep 13 00:24:38.469690 containerd[1465]: time="2025-09-13T00:24:38.469643694Z" level=info msg="StopPodSandbox for \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\""
Sep 13 00:24:38.470454 containerd[1465]: time="2025-09-13T00:24:38.470275018Z" level=info msg="Ensure that sandbox c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f in task-service has been cleanup successfully"
Sep 13 00:24:38.470887 containerd[1465]: time="2025-09-13T00:24:38.470858509Z" level=info msg="Ensure that sandbox 7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21 in task-service has been cleanup successfully"
Sep 13 00:24:38.475924 kubelet[2498]: I0913 00:24:38.475870 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1"
Sep 13 00:24:38.478695 containerd[1465]: time="2025-09-13T00:24:38.478557159Z" level=info msg="StopPodSandbox for \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\""
Sep 13 00:24:38.479247 containerd[1465]: time="2025-09-13T00:24:38.479213030Z" level=info msg="Ensure that sandbox 0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1 in task-service has been cleanup successfully"
Sep 13 00:24:38.485499 kubelet[2498]: I0913 00:24:38.484689 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236"
Sep 13 00:24:38.487115 containerd[1465]: time="2025-09-13T00:24:38.487056140Z" level=info msg="StopPodSandbox for \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\""
Sep 13 00:24:38.488287 containerd[1465]: time="2025-09-13T00:24:38.488230693Z" level=info msg="Ensure that sandbox cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236 in task-service has been cleanup successfully"
Sep 13 00:24:38.491019 kubelet[2498]: I0913 00:24:38.490980 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8"
Sep 13 00:24:38.499711 containerd[1465]: time="2025-09-13T00:24:38.499570545Z" level=info msg="StopPodSandbox for \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\""
Sep 13 00:24:38.513178 containerd[1465]: time="2025-09-13T00:24:38.512967278Z" level=info msg="Ensure that sandbox c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8 in task-service has been cleanup successfully"
Sep 13 00:24:38.514339 kubelet[2498]: I0913 00:24:38.513838 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd"
Sep 13 00:24:38.517646 containerd[1465]: time="2025-09-13T00:24:38.517607693Z" level=info msg="StopPodSandbox for \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\""
Sep 13 00:24:38.520666 containerd[1465]: time="2025-09-13T00:24:38.520250755Z" level=info msg="Ensure that sandbox 05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd in task-service has been cleanup successfully"
Sep 13 00:24:38.529036 kubelet[2498]: I0913 00:24:38.528939 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4"
Sep 13 00:24:38.533142 containerd[1465]: time="2025-09-13T00:24:38.533086560Z" level=info msg="StopPodSandbox for \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\""
Sep 13 00:24:38.537951 containerd[1465]: time="2025-09-13T00:24:38.537905184Z" level=info msg="Ensure that sandbox 9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4 in task-service has been cleanup successfully"
Sep 13 00:24:38.591792 containerd[1465]: time="2025-09-13T00:24:38.591732196Z" level=error msg="StopPodSandbox for \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\" failed" error="failed to destroy network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.592103 kubelet[2498]: E0913 00:24:38.591977 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a"
Sep 13 00:24:38.592103 kubelet[2498]: E0913 00:24:38.592042 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a"}
Sep 13 00:24:38.592184 kubelet[2498]: E0913 00:24:38.592114 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a1f681a-96b5-4792-936c-830bdc4fc67f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:38.592184 kubelet[2498]: E0913 00:24:38.592140 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a1f681a-96b5-4792-936c-830bdc4fc67f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gm62f" podUID="7a1f681a-96b5-4792-936c-830bdc4fc67f"
Sep 13 00:24:38.641425 containerd[1465]: time="2025-09-13T00:24:38.640817311Z" level=error msg="StopPodSandbox for \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\" failed" error="failed to destroy network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.641817 containerd[1465]: time="2025-09-13T00:24:38.641008132Z" level=error msg="StopPodSandbox for \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\" failed" error="failed to destroy network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.642076 kubelet[2498]: E0913 00:24:38.642025 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1"
Sep 13 00:24:38.642234 kubelet[2498]: E0913 00:24:38.642212 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1"}
Sep 13 00:24:38.642389 kubelet[2498]: E0913 00:24:38.642320 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb440148-9fbd-4f08-a9ed-06e94ecc9e57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:38.642389 kubelet[2498]: E0913 00:24:38.642347 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb440148-9fbd-4f08-a9ed-06e94ecc9e57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-856fbd7bbd-gmpj2" podUID="fb440148-9fbd-4f08-a9ed-06e94ecc9e57"
Sep 13 00:24:38.643301 kubelet[2498]: E0913 00:24:38.643256 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236"
Sep 13 00:24:38.643539 kubelet[2498]: E0913 00:24:38.643419 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236"}
Sep 13 00:24:38.643539 kubelet[2498]: E0913 00:24:38.643481 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c14c5f57-0bd2-4e4c-bbc8-39406c393d42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:38.643539 kubelet[2498]: E0913 00:24:38.643505 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c14c5f57-0bd2-4e4c-bbc8-39406c393d42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66fc9d466c-2qnrs" podUID="c14c5f57-0bd2-4e4c-bbc8-39406c393d42"
Sep 13 00:24:38.645807 containerd[1465]: time="2025-09-13T00:24:38.645752717Z" level=error msg="StopPodSandbox for \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\" failed" error="failed to destroy network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.646188 kubelet[2498]: E0913 00:24:38.646021 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8"
Sep 13 00:24:38.646188 kubelet[2498]: E0913 00:24:38.646077 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8"}
Sep 13 00:24:38.646188 kubelet[2498]: E0913 00:24:38.646113 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88fd7908-d362-45b9-9c05-84c56d420f5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:38.646188 kubelet[2498]: E0913 00:24:38.646156 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88fd7908-d362-45b9-9c05-84c56d420f5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66fc9d466c-fpvl5" podUID="88fd7908-d362-45b9-9c05-84c56d420f5b"
Sep 13 00:24:38.652687 containerd[1465]: time="2025-09-13T00:24:38.652522534Z" level=error msg="StopPodSandbox for \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\" failed" error="failed to destroy network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.653205 kubelet[2498]: E0913 00:24:38.652985 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f"
Sep 13 00:24:38.653205 kubelet[2498]: E0913 00:24:38.653047 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f"}
Sep 13 00:24:38.653205 kubelet[2498]: E0913 00:24:38.653084 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4a2b5f2c-0765-434f-910d-07d9f5ff57ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:38.653401 kubelet[2498]: E0913 00:24:38.653215 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4a2b5f2c-0765-434f-910d-07d9f5ff57ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-nn2np" podUID="4a2b5f2c-0765-434f-910d-07d9f5ff57ab"
Sep 13 00:24:38.653889 containerd[1465]: time="2025-09-13T00:24:38.653719661Z" level=error msg="StopPodSandbox for \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\" failed" error="failed to destroy network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.654213 kubelet[2498]: E0913 00:24:38.654066 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21"
Sep 13 00:24:38.654213 kubelet[2498]: E0913 00:24:38.654113 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21"}
Sep 13 00:24:38.654213 kubelet[2498]: E0913 00:24:38.654157 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eedd9846-66f8-4fbc-912d-f953222ec80b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:38.654213 kubelet[2498]: E0913 00:24:38.654179 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eedd9846-66f8-4fbc-912d-f953222ec80b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d86d44bf-ff8mw" podUID="eedd9846-66f8-4fbc-912d-f953222ec80b"
Sep 13 00:24:38.669568 containerd[1465]: time="2025-09-13T00:24:38.669490661Z" level=error msg="StopPodSandbox for \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\" failed" error="failed to destroy network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.670005 kubelet[2498]: E0913 00:24:38.669772 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd"
Sep 13 00:24:38.670005 kubelet[2498]: E0913 00:24:38.669827 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd"}
Sep 13 00:24:38.670005 kubelet[2498]: E0913 00:24:38.669863 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c03440a-ff3f-462d-ba46-2398b0c778c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:38.670005 kubelet[2498]: E0913 00:24:38.669887 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c03440a-ff3f-462d-ba46-2398b0c778c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bmgkp" podUID="3c03440a-ff3f-462d-ba46-2398b0c778c8"
Sep 13 00:24:38.676425 containerd[1465]: time="2025-09-13T00:24:38.676379790Z" level=error msg="StopPodSandbox for \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\" failed" error="failed to destroy network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:38.676731 kubelet[2498]: E0913 00:24:38.676686 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4"
Sep 13 00:24:38.676805 kubelet[2498]: E0913 00:24:38.676750 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4"}
Sep 13 00:24:38.676805 kubelet[2498]: E0913 00:24:38.676792 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a5a1c5a-3908-4e95-aa11-b97be572df2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:38.676908 kubelet[2498]: E0913 00:24:38.676816 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a5a1c5a-3908-4e95-aa11-b97be572df2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6tkbb" podUID="8a5a1c5a-3908-4e95-aa11-b97be572df2c"
Sep 13 00:24:38.942632 containerd[1465]: time="2025-09-13T00:24:38.942142378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f4f6c884-bhqhf,Uid:fd8eee52-c543-48a2-abe7-510261dd737e,Namespace:calico-system,Attempt:0,}"
Sep 13 00:24:39.018477 containerd[1465]: time="2025-09-13T00:24:39.017897940Z" level=error msg="Failed to destroy network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:39.019798 containerd[1465]: time="2025-09-13T00:24:39.019749690Z" level=error msg="encountered an error cleaning up failed sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:39.019897 containerd[1465]: time="2025-09-13T00:24:39.019831127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f4f6c884-bhqhf,Uid:fd8eee52-c543-48a2-abe7-510261dd737e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:39.020118 kubelet[2498]: E0913 00:24:39.020082 2498 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:39.020183 kubelet[2498]: E0913 00:24:39.020147 2498 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69f4f6c884-bhqhf"
Sep 13 00:24:39.020183 kubelet[2498]: E0913 00:24:39.020171 2498 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69f4f6c884-bhqhf"
Sep 13 00:24:39.020242 kubelet[2498]: E0913 00:24:39.020221 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69f4f6c884-bhqhf_calico-system(fd8eee52-c543-48a2-abe7-510261dd737e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69f4f6c884-bhqhf_calico-system(fd8eee52-c543-48a2-abe7-510261dd737e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69f4f6c884-bhqhf" podUID="fd8eee52-c543-48a2-abe7-510261dd737e"
Sep 13 00:24:39.330069 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd-shm.mount: Deactivated successfully.
Sep 13 00:24:39.534480 kubelet[2498]: I0913 00:24:39.534430 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd"
Sep 13 00:24:39.536348 containerd[1465]: time="2025-09-13T00:24:39.535893434Z" level=info msg="StopPodSandbox for \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\""
Sep 13 00:24:39.536348 containerd[1465]: time="2025-09-13T00:24:39.536086745Z" level=info msg="Ensure that sandbox 7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd in task-service has been cleanup successfully"
Sep 13 00:24:39.572214 containerd[1465]: time="2025-09-13T00:24:39.572167427Z" level=error msg="StopPodSandbox for \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\" failed" error="failed to destroy network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:24:39.572723 kubelet[2498]: E0913 00:24:39.572574 2498 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd"
Sep 13 00:24:39.572723 kubelet[2498]: E0913 00:24:39.572640 2498 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd"}
Sep 13 00:24:39.572723 kubelet[2498]: E0913 00:24:39.572679 2498 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd8eee52-c543-48a2-abe7-510261dd737e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 00:24:39.572723 kubelet[2498]: E0913 00:24:39.572704 2498 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd8eee52-c543-48a2-abe7-510261dd737e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69f4f6c884-bhqhf" podUID="fd8eee52-c543-48a2-abe7-510261dd737e"
Sep 13 00:24:45.656273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286650633.mount: Deactivated successfully.
Sep 13 00:24:45.764267 containerd[1465]: time="2025-09-13T00:24:45.761742710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:45.771747 containerd[1465]: time="2025-09-13T00:24:45.746692688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339"
Sep 13 00:24:45.775486 containerd[1465]: time="2025-09-13T00:24:45.775401224Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:45.778861 containerd[1465]: time="2025-09-13T00:24:45.776633706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.319618877s"
Sep 13 00:24:45.778861 containerd[1465]: time="2025-09-13T00:24:45.776682402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\""
Sep 13 00:24:45.782549 containerd[1465]: time="2025-09-13T00:24:45.782026804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:45.821507 containerd[1465]: time="2025-09-13T00:24:45.819717365Z" level=info msg="CreateContainer within sandbox \"ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 13 00:24:45.890933 containerd[1465]: time="2025-09-13T00:24:45.890871542Z" level=info msg="CreateContainer within sandbox \"ee622b34df34fba362e7061133706b49654bd52d3c32524e815925f41e3af135\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"94f43530bfaf47332dc7fa09135fbfeb3b7de77c20605892d05d715a22b4a18a\""
Sep 13 00:24:45.892122 containerd[1465]: time="2025-09-13T00:24:45.892073707Z" level=info msg="StartContainer for \"94f43530bfaf47332dc7fa09135fbfeb3b7de77c20605892d05d715a22b4a18a\""
Sep 13 00:24:46.060132 systemd[1]: Started cri-containerd-94f43530bfaf47332dc7fa09135fbfeb3b7de77c20605892d05d715a22b4a18a.scope - libcontainer container 94f43530bfaf47332dc7fa09135fbfeb3b7de77c20605892d05d715a22b4a18a.
Sep 13 00:24:46.104148 containerd[1465]: time="2025-09-13T00:24:46.104105602Z" level=info msg="StartContainer for \"94f43530bfaf47332dc7fa09135fbfeb3b7de77c20605892d05d715a22b4a18a\" returns successfully"
Sep 13 00:24:46.232813 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 13 00:24:46.234272 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 13 00:24:46.452615 containerd[1465]: time="2025-09-13T00:24:46.452097598Z" level=info msg="StopPodSandbox for \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\""
Sep 13 00:24:46.664136 kubelet[2498]: I0913 00:24:46.664041 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vbqd7" podStartSLOduration=2.023502881 podStartE2EDuration="20.654367173s" podCreationTimestamp="2025-09-13 00:24:26 +0000 UTC" firstStartedPulling="2025-09-13 00:24:27.150951897 +0000 UTC m=+22.033021044" lastFinishedPulling="2025-09-13 00:24:45.78181619 +0000 UTC m=+40.663885336" observedRunningTime="2025-09-13 00:24:46.639180868 +0000 UTC m=+41.521250040" watchObservedRunningTime="2025-09-13 00:24:46.654367173 +0000 UTC m=+41.536436382"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.642 [INFO][3767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.648 [INFO][3767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" iface="eth0" netns="/var/run/netns/cni-fe254b57-171e-eda7-8f23-e7eb45aad3cb"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.649 [INFO][3767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" iface="eth0" netns="/var/run/netns/cni-fe254b57-171e-eda7-8f23-e7eb45aad3cb"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.650 [INFO][3767] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" iface="eth0" netns="/var/run/netns/cni-fe254b57-171e-eda7-8f23-e7eb45aad3cb"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.650 [INFO][3767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.650 [INFO][3767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.902 [INFO][3774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.908 [INFO][3774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.909 [INFO][3774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.931 [WARNING][3774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.932 [INFO][3774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0"
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.934 [INFO][3774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:24:46.939790 containerd[1465]: 2025-09-13 00:24:46.936 [INFO][3767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd"
Sep 13 00:24:46.943998 containerd[1465]: time="2025-09-13T00:24:46.941546683Z" level=info msg="TearDown network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\" successfully"
Sep 13 00:24:46.943998 containerd[1465]: time="2025-09-13T00:24:46.941596901Z" level=info msg="StopPodSandbox for \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\" returns successfully"
Sep 13 00:24:46.945171 systemd[1]: run-netns-cni\x2dfe254b57\x2d171e\x2deda7\x2d8f23\x2de7eb45aad3cb.mount: Deactivated successfully.
Sep 13 00:24:47.107268 kubelet[2498]: I0913 00:24:47.107212 2498 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8szkt\" (UniqueName: \"kubernetes.io/projected/fd8eee52-c543-48a2-abe7-510261dd737e-kube-api-access-8szkt\") pod \"fd8eee52-c543-48a2-abe7-510261dd737e\" (UID: \"fd8eee52-c543-48a2-abe7-510261dd737e\") "
Sep 13 00:24:47.107268 kubelet[2498]: I0913 00:24:47.107268 2498 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-backend-key-pair\") pod \"fd8eee52-c543-48a2-abe7-510261dd737e\" (UID: \"fd8eee52-c543-48a2-abe7-510261dd737e\") "
Sep 13 00:24:47.107703 kubelet[2498]: I0913 00:24:47.107306 2498 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-ca-bundle\") pod \"fd8eee52-c543-48a2-abe7-510261dd737e\" (UID: \"fd8eee52-c543-48a2-abe7-510261dd737e\") "
Sep 13 00:24:47.110461 kubelet[2498]: I0913 00:24:47.108735 2498 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fd8eee52-c543-48a2-abe7-510261dd737e" (UID: "fd8eee52-c543-48a2-abe7-510261dd737e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:24:47.113707 kubelet[2498]: I0913 00:24:47.113656 2498 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8eee52-c543-48a2-abe7-510261dd737e-kube-api-access-8szkt" (OuterVolumeSpecName: "kube-api-access-8szkt") pod "fd8eee52-c543-48a2-abe7-510261dd737e" (UID: "fd8eee52-c543-48a2-abe7-510261dd737e"). InnerVolumeSpecName "kube-api-access-8szkt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:24:47.114941 systemd[1]: var-lib-kubelet-pods-fd8eee52\x2dc543\x2d48a2\x2dabe7\x2d510261dd737e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8szkt.mount: Deactivated successfully.
Sep 13 00:24:47.117692 kubelet[2498]: I0913 00:24:47.117647 2498 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fd8eee52-c543-48a2-abe7-510261dd737e" (UID: "fd8eee52-c543-48a2-abe7-510261dd737e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:24:47.119332 systemd[1]: var-lib-kubelet-pods-fd8eee52\x2dc543\x2d48a2\x2dabe7\x2d510261dd737e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 13 00:24:47.213836 kubelet[2498]: I0913 00:24:47.213668 2498 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-9b8e9ee716\" DevicePath \"\""
Sep 13 00:24:47.213836 kubelet[2498]: I0913 00:24:47.213762 2498 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd8eee52-c543-48a2-abe7-510261dd737e-whisker-ca-bundle\") on node \"ci-4081.3.5-n-9b8e9ee716\" DevicePath \"\""
Sep 13 00:24:47.213836 kubelet[2498]: I0913 00:24:47.213782 2498 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8szkt\" (UniqueName: \"kubernetes.io/projected/fd8eee52-c543-48a2-abe7-510261dd737e-kube-api-access-8szkt\") on node \"ci-4081.3.5-n-9b8e9ee716\" DevicePath \"\""
Sep 13 00:24:47.257190 systemd[1]: Removed slice kubepods-besteffort-podfd8eee52_c543_48a2_abe7_510261dd737e.slice - libcontainer container kubepods-besteffort-podfd8eee52_c543_48a2_abe7_510261dd737e.slice.
Sep 13 00:24:47.582919 kubelet[2498]: I0913 00:24:47.582644 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:24:47.672159 systemd[1]: Created slice kubepods-besteffort-pod47200072_f112_46f9_aaf6_bad1c4248e7f.slice - libcontainer container kubepods-besteffort-pod47200072_f112_46f9_aaf6_bad1c4248e7f.slice.
Sep 13 00:24:47.717226 kubelet[2498]: I0913 00:24:47.717140 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjs4\" (UniqueName: \"kubernetes.io/projected/47200072-f112-46f9-aaf6-bad1c4248e7f-kube-api-access-qkjs4\") pod \"whisker-54dcf47db7-fsd8q\" (UID: \"47200072-f112-46f9-aaf6-bad1c4248e7f\") " pod="calico-system/whisker-54dcf47db7-fsd8q"
Sep 13 00:24:47.717226 kubelet[2498]: I0913 00:24:47.717195 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47200072-f112-46f9-aaf6-bad1c4248e7f-whisker-ca-bundle\") pod \"whisker-54dcf47db7-fsd8q\" (UID: \"47200072-f112-46f9-aaf6-bad1c4248e7f\") " pod="calico-system/whisker-54dcf47db7-fsd8q"
Sep 13 00:24:47.717746 kubelet[2498]: I0913 00:24:47.717307 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/47200072-f112-46f9-aaf6-bad1c4248e7f-whisker-backend-key-pair\") pod \"whisker-54dcf47db7-fsd8q\" (UID: \"47200072-f112-46f9-aaf6-bad1c4248e7f\") " pod="calico-system/whisker-54dcf47db7-fsd8q"
Sep 13 00:24:47.980761 containerd[1465]: time="2025-09-13T00:24:47.980699024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dcf47db7-fsd8q,Uid:47200072-f112-46f9-aaf6-bad1c4248e7f,Namespace:calico-system,Attempt:0,}"
Sep 13 00:24:48.338004 systemd-networkd[1378]: cali452081633ba: Link UP
Sep 13 00:24:48.340231 systemd-networkd[1378]: cali452081633ba: Gained carrier
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.112 [INFO][3817] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.133 [INFO][3817] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0 whisker-54dcf47db7- calico-system 47200072-f112-46f9-aaf6-bad1c4248e7f 916 0 2025-09-13 00:24:47 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54dcf47db7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 whisker-54dcf47db7-fsd8q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali452081633ba [] [] }} ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Namespace="calico-system" Pod="whisker-54dcf47db7-fsd8q" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.134 [INFO][3817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Namespace="calico-system" Pod="whisker-54dcf47db7-fsd8q" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.214 [INFO][3873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" HandleID="k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.214 [INFO][3873] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" HandleID="k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"whisker-54dcf47db7-fsd8q", "timestamp":"2025-09-13 00:24:48.213983352 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.214 [INFO][3873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.214 [INFO][3873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.214 [INFO][3873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716'
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.231 [INFO][3873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.250 [INFO][3873] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.261 [INFO][3873] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.267 [INFO][3873] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.270 [INFO][3873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.270 [INFO][3873] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.274 [INFO][3873] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.282 [INFO][3873] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.294 [INFO][3873] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.129/26] block=192.168.31.128/26 handle="k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.295 [INFO][3873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.129/26] handle="k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" host="ci-4081.3.5-n-9b8e9ee716"
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.295 [INFO][3873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:24:48.378066 containerd[1465]: 2025-09-13 00:24:48.295 [INFO][3873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.129/26] IPv6=[] ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" HandleID="k8s-pod-network.93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0"
Sep 13 00:24:48.380729 containerd[1465]: 2025-09-13 00:24:48.299 [INFO][3817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Namespace="calico-system" Pod="whisker-54dcf47db7-fsd8q" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0", GenerateName:"whisker-54dcf47db7-", Namespace:"calico-system", SelfLink:"", UID:"47200072-f112-46f9-aaf6-bad1c4248e7f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54dcf47db7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"whisker-54dcf47db7-fsd8q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali452081633ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:24:48.380729 containerd[1465]: 2025-09-13 00:24:48.300 [INFO][3817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.129/32] ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Namespace="calico-system" Pod="whisker-54dcf47db7-fsd8q" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0"
Sep 13 00:24:48.380729 containerd[1465]: 2025-09-13 00:24:48.300 [INFO][3817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali452081633ba ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Namespace="calico-system" Pod="whisker-54dcf47db7-fsd8q" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0"
Sep 13 00:24:48.380729 containerd[1465]: 2025-09-13 00:24:48.342 [INFO][3817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Namespace="calico-system" Pod="whisker-54dcf47db7-fsd8q" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0"
Sep 13 00:24:48.380729 containerd[1465]: 2025-09-13 00:24:48.346 [INFO][3817] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Namespace="calico-system" Pod="whisker-54dcf47db7-fsd8q" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0", GenerateName:"whisker-54dcf47db7-", Namespace:"calico-system", SelfLink:"", UID:"47200072-f112-46f9-aaf6-bad1c4248e7f", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54dcf47db7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b", Pod:"whisker-54dcf47db7-fsd8q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali452081633ba", MAC:"0a:30:b3:71:a7:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:24:48.380729 containerd[1465]: 2025-09-13 00:24:48.370 [INFO][3817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b" Namespace="calico-system" Pod="whisker-54dcf47db7-fsd8q" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--54dcf47db7--fsd8q-eth0"
Sep 13 00:24:48.479943 containerd[1465]: time="2025-09-13T00:24:48.477965027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:24:48.479943 containerd[1465]: time="2025-09-13T00:24:48.478049626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:24:48.479943 containerd[1465]: time="2025-09-13T00:24:48.478067513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:24:48.479943 containerd[1465]: time="2025-09-13T00:24:48.478205677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:24:48.535872 systemd[1]: Started cri-containerd-93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b.scope - libcontainer container 93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b.
Sep 13 00:24:48.729230 containerd[1465]: time="2025-09-13T00:24:48.729169819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dcf47db7-fsd8q,Uid:47200072-f112-46f9-aaf6-bad1c4248e7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b\"" Sep 13 00:24:48.735821 containerd[1465]: time="2025-09-13T00:24:48.735772267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:24:49.244571 containerd[1465]: time="2025-09-13T00:24:49.244125038Z" level=info msg="StopPodSandbox for \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\"" Sep 13 00:24:49.244571 containerd[1465]: time="2025-09-13T00:24:49.244465002Z" level=info msg="StopPodSandbox for \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\"" Sep 13 00:24:49.250495 kubelet[2498]: I0913 00:24:49.249879 2498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8eee52-c543-48a2-abe7-510261dd737e" path="/var/lib/kubelet/pods/fd8eee52-c543-48a2-abe7-510261dd737e/volumes" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.334 [INFO][3975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.334 [INFO][3975] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" iface="eth0" netns="/var/run/netns/cni-b4712452-936d-7840-b4b6-9f50f1ab820a" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.335 [INFO][3975] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" iface="eth0" netns="/var/run/netns/cni-b4712452-936d-7840-b4b6-9f50f1ab820a" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.336 [INFO][3975] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" iface="eth0" netns="/var/run/netns/cni-b4712452-936d-7840-b4b6-9f50f1ab820a" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.336 [INFO][3975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.336 [INFO][3975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.369 [INFO][3988] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.370 [INFO][3988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.370 [INFO][3988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.381 [WARNING][3988] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.381 [INFO][3988] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.383 [INFO][3988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:49.390011 containerd[1465]: 2025-09-13 00:24:49.386 [INFO][3975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:24:49.390011 containerd[1465]: time="2025-09-13T00:24:49.389732232Z" level=info msg="TearDown network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\" successfully" Sep 13 00:24:49.390011 containerd[1465]: time="2025-09-13T00:24:49.389764450Z" level=info msg="StopPodSandbox for \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\" returns successfully" Sep 13 00:24:49.392971 systemd[1]: run-netns-cni\x2db4712452\x2d936d\x2d7840\x2db4b6\x2d9f50f1ab820a.mount: Deactivated successfully. Sep 13 00:24:49.393711 containerd[1465]: time="2025-09-13T00:24:49.393170122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fc9d466c-fpvl5,Uid:88fd7908-d362-45b9-9c05-84c56d420f5b,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.331 [INFO][3976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.334 [INFO][3976] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" iface="eth0" netns="/var/run/netns/cni-fd465225-29cb-f096-147e-0556dfe7a3cb" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.338 [INFO][3976] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" iface="eth0" netns="/var/run/netns/cni-fd465225-29cb-f096-147e-0556dfe7a3cb" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.338 [INFO][3976] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" iface="eth0" netns="/var/run/netns/cni-fd465225-29cb-f096-147e-0556dfe7a3cb" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.338 [INFO][3976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.338 [INFO][3976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.388 [INFO][3990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.389 [INFO][3990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.389 [INFO][3990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.403 [WARNING][3990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.403 [INFO][3990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.405 [INFO][3990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:49.410063 containerd[1465]: 2025-09-13 00:24:49.407 [INFO][3976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:24:49.412389 containerd[1465]: time="2025-09-13T00:24:49.411683740Z" level=info msg="TearDown network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\" successfully" Sep 13 00:24:49.412389 containerd[1465]: time="2025-09-13T00:24:49.411736187Z" level=info msg="StopPodSandbox for \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\" returns successfully" Sep 13 00:24:49.415567 containerd[1465]: time="2025-09-13T00:24:49.414014129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gm62f,Uid:7a1f681a-96b5-4792-936c-830bdc4fc67f,Namespace:calico-system,Attempt:1,}" Sep 13 00:24:49.415619 systemd[1]: run-netns-cni\x2dfd465225\x2d29cb\x2df096\x2d147e\x2d0556dfe7a3cb.mount: Deactivated successfully. 
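Annotation: the teardown above succeeds even though the veth was already gone and the IPAM handle had nothing to release; CNI DEL is required to be idempotent, so "not found" is logged as a WARNING and ignored rather than failing the sandbox stop. A hedged sketch of that pattern (the handle prefix is copied from the log; releaseIP and teardown are hypothetical helpers, not Calico's API):

```go
// Illustrative only -- not Calico's implementation. CNI DEL must be
// idempotent: releasing an address or deleting a veth that is already
// gone is treated as success, exactly what the WARNING and
// "Nothing to do" lines above record.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// releaseIP stands in for the IPAM release keyed by handle ID.
func releaseIP(handleID string, store map[string]string) error {
	if _, ok := store[handleID]; !ok {
		return errNotFound // never assigned, or already released
	}
	delete(store, handleID)
	return nil
}

func teardown(handleID string, store map[string]string) error {
	if err := releaseIP(handleID, store); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
	} else if err != nil {
		return err // only real failures abort the teardown
	}
	return nil
}

func main() {
	store := map[string]string{}
	// A repeated DEL for the same sandbox must still succeed.
	fmt.Println(teardown("k8s-pod-network.c3c970fb7768", store)) // <nil>
}
```

The mount unit name run-netns-cni\x2db4712452\x2d... deactivated above is the netns path under systemd's unit-name escaping, where each '-' inside a path component becomes \x2d.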
Sep 13 00:24:49.628392 systemd-networkd[1378]: calidecdea08d61: Link UP Sep 13 00:24:49.628676 systemd-networkd[1378]: calidecdea08d61: Gained carrier Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.514 [INFO][4012] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.531 [INFO][4012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0 csi-node-driver- calico-system 7a1f681a-96b5-4792-936c-830bdc4fc67f 929 0 2025-09-13 00:24:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 csi-node-driver-gm62f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidecdea08d61 [] [] }} ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Namespace="calico-system" Pod="csi-node-driver-gm62f" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.531 [INFO][4012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Namespace="calico-system" Pod="csi-node-driver-gm62f" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.571 [INFO][4025] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" HandleID="k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.571 [INFO][4025] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" HandleID="k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"csi-node-driver-gm62f", "timestamp":"2025-09-13 00:24:49.571393245 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.571 [INFO][4025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.571 [INFO][4025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.571 [INFO][4025] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.584 [INFO][4025] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.591 [INFO][4025] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.597 [INFO][4025] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.600 [INFO][4025] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.603 [INFO][4025] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.603 [INFO][4025] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.606 [INFO][4025] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79 Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.611 [INFO][4025] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.618 [INFO][4025] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.130/26] block=192.168.31.128/26 handle="k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.618 [INFO][4025] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.130/26] handle="k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.618 [INFO][4025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:24:49.653800 containerd[1465]: 2025-09-13 00:24:49.619 [INFO][4025] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.130/26] IPv6=[] ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" HandleID="k8s-pod-network.8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.656063 containerd[1465]: 2025-09-13 00:24:49.623 [INFO][4012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Namespace="calico-system" Pod="csi-node-driver-gm62f" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a1f681a-96b5-4792-936c-830bdc4fc67f", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"csi-node-driver-gm62f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidecdea08d61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:49.656063 containerd[1465]: 2025-09-13 00:24:49.623 [INFO][4012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.130/32] ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Namespace="calico-system" Pod="csi-node-driver-gm62f" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.656063 containerd[1465]: 2025-09-13 00:24:49.623 [INFO][4012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidecdea08d61 ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Namespace="calico-system" Pod="csi-node-driver-gm62f" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.656063 containerd[1465]: 2025-09-13 00:24:49.627 [INFO][4012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Namespace="calico-system" Pod="csi-node-driver-gm62f" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.656063 containerd[1465]: 2025-09-13 00:24:49.628 [INFO][4012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Namespace="calico-system" Pod="csi-node-driver-gm62f" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a1f681a-96b5-4792-936c-830bdc4fc67f", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79", Pod:"csi-node-driver-gm62f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidecdea08d61", MAC:"8e:6b:c4:57:f6:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:49.656063 containerd[1465]: 2025-09-13 00:24:49.647 [INFO][4012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79" Namespace="calico-system" Pod="csi-node-driver-gm62f" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:24:49.682786 containerd[1465]: time="2025-09-13T00:24:49.682593009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:49.682786 containerd[1465]: time="2025-09-13T00:24:49.682762630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:49.682786 containerd[1465]: time="2025-09-13T00:24:49.682793552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:49.684503 containerd[1465]: time="2025-09-13T00:24:49.682931218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:49.713842 systemd[1]: Started cri-containerd-8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79.scope - libcontainer container 8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79. 
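Annotation: both ADD sequences so far follow the same IPAM walk: look up the host's block affinity, load the 192.168.31.128/26 block, take the lowest free address, then write the block back to claim the IP in the datastore. A toy in-memory version, assuming a 64-address /26 tracked as a bitmap (the datastore write and Calico's reservation rules for the block are elided; this is not ipam.go):

```go
// Toy allocator under stated assumptions -- not Calico's ipam.go.
package main

import (
	"fmt"
	"math/bits"
	"net/netip"
)

// block models one /26 affinity block: 64 addresses, one bit each.
type block struct {
	base netip.Addr // 192.168.31.128
	used uint64     // bit i set => base+i is allocated
}

// assign claims the lowest free address, mirroring "Attempting to
// assign 1 addresses from block" followed by "Writing block in order
// to claim IPs" (the write itself is elided here).
func (b *block) assign() (netip.Addr, bool) {
	free := ^b.used
	if free == 0 {
		return netip.Addr{}, false
	}
	i := bits.TrailingZeros64(free)
	b.used |= 1 << i
	a := b.base
	for j := 0; j < i; j++ {
		a = a.Next()
	}
	return a, true
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.31.128")}
	b.used = 0b11 // assume .128 and .129 (the whisker pod) are taken
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.31.130 -- matching the csi-node-driver claim
}
```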
Sep 13 00:24:49.744188 systemd-networkd[1378]: cali9e5db3ddf93: Link UP Sep 13 00:24:49.744980 systemd-networkd[1378]: cali9e5db3ddf93: Gained carrier Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.514 [INFO][4002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.535 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0 calico-apiserver-66fc9d466c- calico-apiserver 88fd7908-d362-45b9-9c05-84c56d420f5b 930 0 2025-09-13 00:24:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66fc9d466c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 calico-apiserver-66fc9d466c-fpvl5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e5db3ddf93 [] [] }} ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-fpvl5" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.535 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-fpvl5" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.582 [INFO][4030] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.583 [INFO][4030] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"calico-apiserver-66fc9d466c-fpvl5", "timestamp":"2025-09-13 00:24:49.582621182 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.583 [INFO][4030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.618 [INFO][4030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.619 [INFO][4030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.684 [INFO][4030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.693 [INFO][4030] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.702 [INFO][4030] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.710 [INFO][4030] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.715 [INFO][4030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.715 [INFO][4030] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.719 [INFO][4030] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790 Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.725 [INFO][4030] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.734 [INFO][4030] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.131/26] block=192.168.31.128/26 handle="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.734 [INFO][4030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.131/26] handle="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.734 [INFO][4030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
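Annotation: note the interleaving in the two concurrent ADDs: handler [4030] logged "About to acquire host-wide IPAM lock." at 49.583 but only acquired it at 49.618, the instant [4025] released it, so the claims for .130 and .131 happened strictly one after the other. In Calico the lock is shared by all CNI invocations on the host; a sync.Mutex stands in for it in this sketch (illustrative only):

```go
// Sketch of the serialization visible above: the host-wide IPAM lock
// forces concurrent ADDs to claim addresses one at a time, so each
// pod receives a distinct, sequential address from the block.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex
		next = 130 // .129 already claimed by the whisker pod
		wg   sync.WaitGroup
	)
	claim := func(pod string) {
		defer wg.Done()
		mu.Lock() // "About to acquire host-wide IPAM lock."
		defer mu.Unlock()
		fmt.Printf("%s -> 192.168.31.%d\n", pod, next) // "Acquired ... lock."
		next++ // lock released on return: "Released host-wide IPAM lock."
	}
	wg.Add(2)
	go claim("csi-node-driver-gm62f")
	go claim("calico-apiserver-66fc9d466c-fpvl5")
	wg.Wait()
}
```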
Sep 13 00:24:49.773083 containerd[1465]: 2025-09-13 00:24:49.734 [INFO][4030] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.131/26] IPv6=[] ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.774199 containerd[1465]: 2025-09-13 00:24:49.737 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-fpvl5" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0", GenerateName:"calico-apiserver-66fc9d466c-", Namespace:"calico-apiserver", SelfLink:"", UID:"88fd7908-d362-45b9-9c05-84c56d420f5b", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fc9d466c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"calico-apiserver-66fc9d466c-fpvl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e5db3ddf93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:49.774199 containerd[1465]: 2025-09-13 00:24:49.738 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.131/32] ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-fpvl5" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.774199 containerd[1465]: 2025-09-13 00:24:49.738 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e5db3ddf93 ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-fpvl5" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.774199 containerd[1465]: 2025-09-13 00:24:49.746 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-fpvl5" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.774199 containerd[1465]: 2025-09-13 00:24:49.747 
[INFO][4002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-fpvl5" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0", GenerateName:"calico-apiserver-66fc9d466c-", Namespace:"calico-apiserver", SelfLink:"", UID:"88fd7908-d362-45b9-9c05-84c56d420f5b", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fc9d466c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790", Pod:"calico-apiserver-66fc9d466c-fpvl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e5db3ddf93", MAC:"06:ad:29:64:ec:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:49.774199 containerd[1465]: 2025-09-13 00:24:49.767 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-fpvl5" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:24:49.821031 containerd[1465]: time="2025-09-13T00:24:49.820990169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gm62f,Uid:7a1f681a-96b5-4792-936c-830bdc4fc67f,Namespace:calico-system,Attempt:1,} returns sandbox id \"8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79\"" Sep 13 00:24:49.839067 containerd[1465]: time="2025-09-13T00:24:49.838248401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:49.839067 containerd[1465]: time="2025-09-13T00:24:49.838327017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:49.839067 containerd[1465]: time="2025-09-13T00:24:49.838342466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:49.839067 containerd[1465]: time="2025-09-13T00:24:49.838477598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:49.885307 systemd[1]: run-containerd-runc-k8s.io-43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790-runc.4QyRBg.mount: Deactivated successfully. Sep 13 00:24:49.901681 systemd[1]: Started cri-containerd-43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790.scope - libcontainer container 43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790. Sep 13 00:24:49.997603 containerd[1465]: time="2025-09-13T00:24:49.997282260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fc9d466c-fpvl5,Uid:88fd7908-d362-45b9-9c05-84c56d420f5b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\"" Sep 13 00:24:50.155890 systemd-networkd[1378]: cali452081633ba: Gained IPv6LL Sep 13 00:24:50.224207 containerd[1465]: time="2025-09-13T00:24:50.223424982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:50.225762 containerd[1465]: time="2025-09-13T00:24:50.225699953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:24:50.228256 containerd[1465]: time="2025-09-13T00:24:50.228209418Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:50.230112 containerd[1465]: time="2025-09-13T00:24:50.230080383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:50.231926 containerd[1465]: time="2025-09-13T00:24:50.231867711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.496043239s" Sep 13 00:24:50.232130 containerd[1465]: time="2025-09-13T00:24:50.232101735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:24:50.233792 containerd[1465]: time="2025-09-13T00:24:50.233515343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:24:50.236763 containerd[1465]: time="2025-09-13T00:24:50.236720237Z" level=info msg="CreateContainer within sandbox \"93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:24:50.249266 containerd[1465]: time="2025-09-13T00:24:50.249002947Z" level=info msg="StopPodSandbox for \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\"" Sep 13 00:24:50.251581 containerd[1465]: time="2025-09-13T00:24:50.251105757Z" level=info msg="StopPodSandbox for \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\"" Sep 13 00:24:50.257358 containerd[1465]: time="2025-09-13T00:24:50.257302637Z" level=info msg="CreateContainer within sandbox \"93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"1fe9f1a129dc2426eff48506b0f23a1abc6a1562524fd8ee2a9a23f605a08915\"" Sep 13 00:24:50.260066 containerd[1465]: time="2025-09-13T00:24:50.259931418Z" level=info msg="StartContainer for \"1fe9f1a129dc2426eff48506b0f23a1abc6a1562524fd8ee2a9a23f605a08915\"" Sep 13 00:24:50.306932 systemd[1]: Started cri-containerd-1fe9f1a129dc2426eff48506b0f23a1abc6a1562524fd8ee2a9a23f605a08915.scope - libcontainer container 1fe9f1a129dc2426eff48506b0f23a1abc6a1562524fd8ee2a9a23f605a08915. Sep 13 00:24:50.420753 containerd[1465]: time="2025-09-13T00:24:50.420619727Z" level=info msg="StartContainer for \"1fe9f1a129dc2426eff48506b0f23a1abc6a1562524fd8ee2a9a23f605a08915\" returns successfully" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.392 [INFO][4173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.392 [INFO][4173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" iface="eth0" netns="/var/run/netns/cni-446c26be-a859-cc5b-33fd-4dc2e7b4899b" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.398 [INFO][4173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" iface="eth0" netns="/var/run/netns/cni-446c26be-a859-cc5b-33fd-4dc2e7b4899b" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.399 [INFO][4173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" iface="eth0" netns="/var/run/netns/cni-446c26be-a859-cc5b-33fd-4dc2e7b4899b" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.399 [INFO][4173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.399 [INFO][4173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.441 [INFO][4220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.441 [INFO][4220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.441 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.453 [WARNING][4220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.453 [INFO][4220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.457 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:50.463596 containerd[1465]: 2025-09-13 00:24:50.460 [INFO][4173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:24:50.466917 containerd[1465]: time="2025-09-13T00:24:50.465551603Z" level=info msg="TearDown network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\" successfully" Sep 13 00:24:50.466917 containerd[1465]: time="2025-09-13T00:24:50.465603510Z" level=info msg="StopPodSandbox for \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\" returns successfully" Sep 13 00:24:50.470361 kubelet[2498]: E0913 00:24:50.469170 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:50.470861 containerd[1465]: time="2025-09-13T00:24:50.470630343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tkbb,Uid:8a5a1c5a-3908-4e95-aa11-b97be572df2c,Namespace:kube-system,Attempt:1,}" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.402 [INFO][4183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.404 [INFO][4183] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" iface="eth0" netns="/var/run/netns/cni-7037cb68-5b99-a10f-30a5-a3c977ed94e9" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.407 [INFO][4183] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" iface="eth0" netns="/var/run/netns/cni-7037cb68-5b99-a10f-30a5-a3c977ed94e9" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.410 [INFO][4183] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" iface="eth0" netns="/var/run/netns/cni-7037cb68-5b99-a10f-30a5-a3c977ed94e9" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.410 [INFO][4183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.410 [INFO][4183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.509 [INFO][4226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.509 [INFO][4226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.509 [INFO][4226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.522 [WARNING][4226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.522 [INFO][4226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.528 [INFO][4226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:50.538896 containerd[1465]: 2025-09-13 00:24:50.531 [INFO][4183] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:24:50.538896 containerd[1465]: time="2025-09-13T00:24:50.538109566Z" level=info msg="TearDown network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\" successfully" Sep 13 00:24:50.538896 containerd[1465]: time="2025-09-13T00:24:50.538148287Z" level=info msg="StopPodSandbox for \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\" returns successfully" Sep 13 00:24:50.542508 containerd[1465]: time="2025-09-13T00:24:50.539860989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fc9d466c-2qnrs,Uid:c14c5f57-0bd2-4e4c-bbc8-39406c393d42,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:24:50.713052 systemd-networkd[1378]: calide42c69c7a5: Link UP Sep 13 00:24:50.715484 systemd-networkd[1378]: calide42c69c7a5: Gained carrier Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.571 [INFO][4236] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.589 [INFO][4236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0 coredns-668d6bf9bc- kube-system 8a5a1c5a-3908-4e95-aa11-b97be572df2c 946 0 2025-09-13 00:24:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 coredns-668d6bf9bc-6tkbb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide42c69c7a5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkbb" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.590 [INFO][4236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkbb" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.640 [INFO][4264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" HandleID="k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.640 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" HandleID="k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"coredns-668d6bf9bc-6tkbb", "timestamp":"2025-09-13 00:24:50.640079461 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.640 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.640 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.640 [INFO][4264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.652 [INFO][4264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.660 [INFO][4264] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.671 [INFO][4264] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.674 [INFO][4264] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.679 [INFO][4264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.679 [INFO][4264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.682 [INFO][4264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.691 [INFO][4264] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.701 [INFO][4264] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.132/26] block=192.168.31.128/26 handle="k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.702 [INFO][4264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.132/26] handle="k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.702 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
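Annotation: the kubelet event earlier ("Nameserver limits exceeded ... the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3") reflects the glibc resolver's three-nameserver cap (MAXNS): kubelet applies the first three entries of the merged resolv.conf and reports the rest as omitted. A hedged sketch; capNameservers and the fourth entry are hypothetical, the first three values are taken from the event:

```go
// Hedged sketch of the limit behind kubelet's "Nameserver limits
// exceeded" event: glibc honours at most three nameserver entries,
// so only the first three survive. capNameservers is a hypothetical
// helper, not kubelet's actual function.
package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS

func capNameservers(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	return ns[:maxNameservers]
}

func main() {
	// First three entries are from the event; the fourth is a
	// hypothetical extra resolver that gets omitted.
	all := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "192.0.2.1"}
	fmt.Println(capNameservers(all)) // the applied nameserver line
}
```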
Sep 13 00:24:50.739636 containerd[1465]: 2025-09-13 00:24:50.702 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.132/26] IPv6=[] ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" HandleID="k8s-pod-network.6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.741949 containerd[1465]: 2025-09-13 00:24:50.706 [INFO][4236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkbb" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a5a1c5a-3908-4e95-aa11-b97be572df2c", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"coredns-668d6bf9bc-6tkbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide42c69c7a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:50.741949 containerd[1465]: 2025-09-13 00:24:50.706 [INFO][4236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.132/32] ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkbb" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.741949 containerd[1465]: 2025-09-13 00:24:50.706 [INFO][4236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide42c69c7a5 ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkbb" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.741949 containerd[1465]: 2025-09-13 00:24:50.718 [INFO][4236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-6tkbb" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.741949 containerd[1465]: 2025-09-13 00:24:50.719 [INFO][4236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkbb" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a5a1c5a-3908-4e95-aa11-b97be572df2c", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b", Pod:"coredns-668d6bf9bc-6tkbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide42c69c7a5", MAC:"22:89:05:e8:c1:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:50.741949 containerd[1465]: 2025-09-13 00:24:50.737 [INFO][4236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b" Namespace="kube-system" Pod="coredns-668d6bf9bc-6tkbb" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:24:50.778973 containerd[1465]: time="2025-09-13T00:24:50.771666483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:50.778973 containerd[1465]: time="2025-09-13T00:24:50.771908880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:50.778973 containerd[1465]: time="2025-09-13T00:24:50.771926001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:50.778973 containerd[1465]: time="2025-09-13T00:24:50.772312218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:50.808037 systemd[1]: Started cri-containerd-6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b.scope - libcontainer container 6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b. Sep 13 00:24:50.825523 systemd-networkd[1378]: caliab305a2645b: Link UP Sep 13 00:24:50.827128 systemd-networkd[1378]: caliab305a2645b: Gained carrier Sep 13 00:24:50.840316 systemd[1]: run-netns-cni\x2d7037cb68\x2d5b99\x2da10f\x2d30a5\x2da3c977ed94e9.mount: Deactivated successfully. Sep 13 00:24:50.840420 systemd[1]: run-netns-cni\x2d446c26be\x2da859\x2dcc5b\x2d33fd\x2d4dc2e7b4899b.mount: Deactivated successfully. Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.622 [INFO][4254] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.649 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0 calico-apiserver-66fc9d466c- calico-apiserver c14c5f57-0bd2-4e4c-bbc8-39406c393d42 947 0 2025-09-13 00:24:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66fc9d466c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 calico-apiserver-66fc9d466c-2qnrs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab305a2645b [] [] }} ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-2qnrs" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.649 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-2qnrs" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.704 [INFO][4272] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.704 [INFO][4272] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"calico-apiserver-66fc9d466c-2qnrs", "timestamp":"2025-09-13 00:24:50.704690027 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.704 [INFO][4272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.704 [INFO][4272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.704 [INFO][4272] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.753 [INFO][4272] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.762 [INFO][4272] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.780 [INFO][4272] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.784 [INFO][4272] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.788 [INFO][4272] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.789 [INFO][4272] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.792 [INFO][4272] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2 Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.802 [INFO][4272] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.817 [INFO][4272] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.133/26] block=192.168.31.128/26 handle="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.817 [INFO][4272] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.133/26] handle="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.818 [INFO][4272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:24:50.848801 containerd[1465]: 2025-09-13 00:24:50.818 [INFO][4272] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.133/26] IPv6=[] ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.853008 containerd[1465]: 2025-09-13 00:24:50.821 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-2qnrs" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0", GenerateName:"calico-apiserver-66fc9d466c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c14c5f57-0bd2-4e4c-bbc8-39406c393d42", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fc9d466c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"calico-apiserver-66fc9d466c-2qnrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab305a2645b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:50.853008 containerd[1465]: 2025-09-13 00:24:50.821 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.133/32] ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-2qnrs" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.853008 containerd[1465]: 2025-09-13 00:24:50.821 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab305a2645b ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-2qnrs" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.853008 containerd[1465]: 2025-09-13 00:24:50.826 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-2qnrs" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.853008 containerd[1465]: 2025-09-13 00:24:50.826 
[INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-2qnrs" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0", GenerateName:"calico-apiserver-66fc9d466c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c14c5f57-0bd2-4e4c-bbc8-39406c393d42", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fc9d466c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2", Pod:"calico-apiserver-66fc9d466c-2qnrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab305a2645b", MAC:"2e:5c:c1:bc:99:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:50.853008 containerd[1465]: 2025-09-13 00:24:50.844 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Namespace="calico-apiserver" Pod="calico-apiserver-66fc9d466c-2qnrs" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:24:50.884748 containerd[1465]: time="2025-09-13T00:24:50.884284553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:50.884748 containerd[1465]: time="2025-09-13T00:24:50.884356308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:50.884748 containerd[1465]: time="2025-09-13T00:24:50.884371116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:50.884748 containerd[1465]: time="2025-09-13T00:24:50.884475335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:50.902483 containerd[1465]: time="2025-09-13T00:24:50.901419893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6tkbb,Uid:8a5a1c5a-3908-4e95-aa11-b97be572df2c,Namespace:kube-system,Attempt:1,} returns sandbox id \"6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b\"" Sep 13 00:24:50.905573 kubelet[2498]: E0913 00:24:50.904601 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:50.918303 containerd[1465]: time="2025-09-13T00:24:50.918265916Z" level=info msg="CreateContainer within sandbox \"6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:24:50.921699 systemd[1]: Started cri-containerd-7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2.scope - libcontainer container 7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2. Sep 13 00:24:50.943854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267359685.mount: Deactivated successfully. Sep 13 00:24:50.947607 containerd[1465]: time="2025-09-13T00:24:50.947419671Z" level=info msg="CreateContainer within sandbox \"6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c369b0a0c1e7d7c5565775f2f38115d2b571b6d448d1c81809a18a15c1ff0b20\"" Sep 13 00:24:50.949979 containerd[1465]: time="2025-09-13T00:24:50.948763021Z" level=info msg="StartContainer for \"c369b0a0c1e7d7c5565775f2f38115d2b571b6d448d1c81809a18a15c1ff0b20\"" Sep 13 00:24:51.008724 systemd[1]: Started cri-containerd-c369b0a0c1e7d7c5565775f2f38115d2b571b6d448d1c81809a18a15c1ff0b20.scope - libcontainer container c369b0a0c1e7d7c5565775f2f38115d2b571b6d448d1c81809a18a15c1ff0b20. Sep 13 00:24:51.019278 containerd[1465]: time="2025-09-13T00:24:51.019016310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66fc9d466c-2qnrs,Uid:c14c5f57-0bd2-4e4c-bbc8-39406c393d42,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\"" Sep 13 00:24:51.055684 containerd[1465]: time="2025-09-13T00:24:51.055624414Z" level=info msg="StartContainer for \"c369b0a0c1e7d7c5565775f2f38115d2b571b6d448d1c81809a18a15c1ff0b20\" returns successfully" Sep 13 00:24:51.112870 systemd-networkd[1378]: calidecdea08d61: Gained IPv6LL Sep 13 00:24:51.246358 containerd[1465]: time="2025-09-13T00:24:51.246311051Z" level=info msg="StopPodSandbox for \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\"" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.328 [INFO][4439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.329 [INFO][4439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" iface="eth0" netns="/var/run/netns/cni-c6d016b9-9bb9-3f45-42ea-f6cd4cf0c20e" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.329 [INFO][4439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" iface="eth0" netns="/var/run/netns/cni-c6d016b9-9bb9-3f45-42ea-f6cd4cf0c20e" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.330 [INFO][4439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" iface="eth0" netns="/var/run/netns/cni-c6d016b9-9bb9-3f45-42ea-f6cd4cf0c20e" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.330 [INFO][4439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.330 [INFO][4439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.363 [INFO][4447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.363 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.363 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.369 [WARNING][4447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.369 [INFO][4447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.372 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:51.376846 containerd[1465]: 2025-09-13 00:24:51.374 [INFO][4439] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:24:51.378892 containerd[1465]: time="2025-09-13T00:24:51.377526584Z" level=info msg="TearDown network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\" successfully" Sep 13 00:24:51.378892 containerd[1465]: time="2025-09-13T00:24:51.377570434Z" level=info msg="StopPodSandbox for \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\" returns successfully" Sep 13 00:24:51.378892 containerd[1465]: time="2025-09-13T00:24:51.378511328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d86d44bf-ff8mw,Uid:eedd9846-66f8-4fbc-912d-f953222ec80b,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:24:51.539015 systemd-networkd[1378]: cali398d550cbde: Link UP Sep 13 00:24:51.540210 systemd-networkd[1378]: cali398d550cbde: Gained carrier Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.425 [INFO][4453] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.438 [INFO][4453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0 calico-apiserver-d86d44bf- calico-apiserver eedd9846-66f8-4fbc-912d-f953222ec80b 963 0 2025-09-13 00:24:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d86d44bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 calico-apiserver-d86d44bf-ff8mw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali398d550cbde [] [] }} ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-ff8mw" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.438 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-ff8mw" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.476 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" HandleID="k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.476 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" HandleID="k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"calico-apiserver-d86d44bf-ff8mw", "timestamp":"2025-09-13 00:24:51.476578267 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.476 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.477 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.477 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.486 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.493 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.500 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.503 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.508 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.509 [INFO][4466] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.512 [INFO][4466] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198 Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.518 [INFO][4466] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.527 [INFO][4466] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.134/26] block=192.168.31.128/26 handle="k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.527 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.134/26] handle="k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.527 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:24:51.563035 containerd[1465]: 2025-09-13 00:24:51.527 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.134/26] IPv6=[] ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" HandleID="k8s-pod-network.c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.563855 containerd[1465]: 2025-09-13 00:24:51.532 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-ff8mw" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0", GenerateName:"calico-apiserver-d86d44bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"eedd9846-66f8-4fbc-912d-f953222ec80b", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d86d44bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"calico-apiserver-d86d44bf-ff8mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali398d550cbde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:51.563855 containerd[1465]: 2025-09-13 00:24:51.532 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.134/32] ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-ff8mw" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.563855 containerd[1465]: 2025-09-13 00:24:51.532 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali398d550cbde ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-ff8mw" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.563855 containerd[1465]: 2025-09-13 00:24:51.540 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-ff8mw" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.563855 containerd[1465]: 2025-09-13 00:24:51.540 [INFO][4453] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-ff8mw" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0", GenerateName:"calico-apiserver-d86d44bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"eedd9846-66f8-4fbc-912d-f953222ec80b", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d86d44bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198", Pod:"calico-apiserver-d86d44bf-ff8mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali398d550cbde", MAC:"0a:dc:63:52:9f:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:51.563855 containerd[1465]: 2025-09-13 00:24:51.558 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-ff8mw" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:24:51.596904 containerd[1465]: time="2025-09-13T00:24:51.595254706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:51.596904 containerd[1465]: time="2025-09-13T00:24:51.596237707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:51.596904 containerd[1465]: time="2025-09-13T00:24:51.596258826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:51.596904 containerd[1465]: time="2025-09-13T00:24:51.596390098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:51.625456 kubelet[2498]: E0913 00:24:51.623155 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:51.628977 systemd-networkd[1378]: cali9e5db3ddf93: Gained IPv6LL Sep 13 00:24:51.639939 systemd[1]: Started cri-containerd-c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198.scope - libcontainer container c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198. Sep 13 00:24:51.678874 kubelet[2498]: I0913 00:24:51.659384 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6tkbb" podStartSLOduration=40.659347727 podStartE2EDuration="40.659347727s" podCreationTimestamp="2025-09-13 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:51.654486258 +0000 UTC m=+46.536555423" watchObservedRunningTime="2025-09-13 00:24:51.659347727 +0000 UTC m=+46.541416895" Sep 13 00:24:51.835164 systemd[1]: run-netns-cni\x2dc6d016b9\x2d9bb9\x2d3f45\x2d42ea\x2df6cd4cf0c20e.mount: Deactivated successfully. Sep 13 00:24:51.900957 containerd[1465]: time="2025-09-13T00:24:51.900810213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d86d44bf-ff8mw,Uid:eedd9846-66f8-4fbc-912d-f953222ec80b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198\"" Sep 13 00:24:51.936749 containerd[1465]: time="2025-09-13T00:24:51.936691995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:51.937892 containerd[1465]: time="2025-09-13T00:24:51.937825564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:24:51.938758 containerd[1465]: time="2025-09-13T00:24:51.938711088Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:51.942073 kubelet[2498]: I0913 00:24:51.941833 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:24:51.943346 kubelet[2498]: E0913 00:24:51.942762 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:51.944359 containerd[1465]: time="2025-09-13T00:24:51.944024101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:51.945131 containerd[1465]: time="2025-09-13T00:24:51.944964469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.711407877s" Sep 13 00:24:51.945131 containerd[1465]: time="2025-09-13T00:24:51.945002944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" 
returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:24:51.951029 containerd[1465]: time="2025-09-13T00:24:51.950984202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:24:51.955585 containerd[1465]: time="2025-09-13T00:24:51.955539471Z" level=info msg="CreateContainer within sandbox \"8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:24:51.996328 containerd[1465]: time="2025-09-13T00:24:51.996262383Z" level=info msg="CreateContainer within sandbox \"8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"994d86ce0ab2c013ef745c40d47043c351f4259bbbec0a8ff5ffab689dedc043\"" Sep 13 00:24:52.000418 containerd[1465]: time="2025-09-13T00:24:52.000378740Z" level=info msg="StartContainer for \"994d86ce0ab2c013ef745c40d47043c351f4259bbbec0a8ff5ffab689dedc043\"" Sep 13 00:24:52.055770 systemd[1]: Started cri-containerd-994d86ce0ab2c013ef745c40d47043c351f4259bbbec0a8ff5ffab689dedc043.scope - libcontainer container 994d86ce0ab2c013ef745c40d47043c351f4259bbbec0a8ff5ffab689dedc043. Sep 13 00:24:52.103227 containerd[1465]: time="2025-09-13T00:24:52.103150155Z" level=info msg="StartContainer for \"994d86ce0ab2c013ef745c40d47043c351f4259bbbec0a8ff5ffab689dedc043\" returns successfully" Sep 13 00:24:52.135769 systemd-networkd[1378]: caliab305a2645b: Gained IPv6LL Sep 13 00:24:52.243718 containerd[1465]: time="2025-09-13T00:24:52.243567260Z" level=info msg="StopPodSandbox for \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\"" Sep 13 00:24:52.245118 containerd[1465]: time="2025-09-13T00:24:52.245068378Z" level=info msg="StopPodSandbox for \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\"" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.404 [INFO][4589] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.405 [INFO][4589] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" iface="eth0" netns="/var/run/netns/cni-035d704e-f2ed-c8dc-e01c-00dd54dcfc55" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.405 [INFO][4589] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" iface="eth0" netns="/var/run/netns/cni-035d704e-f2ed-c8dc-e01c-00dd54dcfc55" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.405 [INFO][4589] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" iface="eth0" netns="/var/run/netns/cni-035d704e-f2ed-c8dc-e01c-00dd54dcfc55" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.405 [INFO][4589] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.405 [INFO][4589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.486 [INFO][4612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.486 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.486 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.497 [WARNING][4612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.497 [INFO][4612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.500 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:52.512854 containerd[1465]: 2025-09-13 00:24:52.507 [INFO][4589] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:24:52.515941 containerd[1465]: time="2025-09-13T00:24:52.515794567Z" level=info msg="TearDown network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\" successfully" Sep 13 00:24:52.515941 containerd[1465]: time="2025-09-13T00:24:52.515935771Z" level=info msg="StopPodSandbox for \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\" returns successfully" Sep 13 00:24:52.521146 systemd[1]: run-netns-cni\x2d035d704e\x2df2ed\x2dc8dc\x2de01c\x2d00dd54dcfc55.mount: Deactivated successfully. Sep 13 00:24:52.523898 containerd[1465]: time="2025-09-13T00:24:52.523848166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nn2np,Uid:4a2b5f2c-0765-434f-910d-07d9f5ff57ab,Namespace:calico-system,Attempt:1,}" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.405 [INFO][4587] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.406 [INFO][4587] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" iface="eth0" netns="/var/run/netns/cni-8b556666-673e-0cbd-4ebe-f48995c17f98" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.406 [INFO][4587] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" iface="eth0" netns="/var/run/netns/cni-8b556666-673e-0cbd-4ebe-f48995c17f98" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.406 [INFO][4587] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" iface="eth0" netns="/var/run/netns/cni-8b556666-673e-0cbd-4ebe-f48995c17f98" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.407 [INFO][4587] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.407 [INFO][4587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.487 [INFO][4614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.488 [INFO][4614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.500 [INFO][4614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.516 [WARNING][4614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.516 [INFO][4614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.525 [INFO][4614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:52.533956 containerd[1465]: 2025-09-13 00:24:52.529 [INFO][4587] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:24:52.536161 containerd[1465]: time="2025-09-13T00:24:52.535738186Z" level=info msg="TearDown network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\" successfully" Sep 13 00:24:52.536161 containerd[1465]: time="2025-09-13T00:24:52.535769974Z" level=info msg="StopPodSandbox for \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\" returns successfully" Sep 13 00:24:52.536834 containerd[1465]: time="2025-09-13T00:24:52.536523822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856fbd7bbd-gmpj2,Uid:fb440148-9fbd-4f08-a9ed-06e94ecc9e57,Namespace:calico-system,Attempt:1,}" Sep 13 00:24:52.647789 systemd-networkd[1378]: calide42c69c7a5: Gained IPv6LL Sep 13 00:24:52.670781 kubelet[2498]: E0913 00:24:52.668963 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:52.670781 kubelet[2498]: E0913 00:24:52.669761 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:52.797461 systemd-networkd[1378]: cali16fb5d985e3: Link UP Sep 13 00:24:52.799997 systemd-networkd[1378]: cali16fb5d985e3: Gained carrier Sep 13 00:24:52.838449 systemd[1]: run-netns-cni\x2d8b556666\x2d673e\x2d0cbd\x2d4ebe\x2df48995c17f98.mount: Deactivated successfully. Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.597 [INFO][4645] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.622 [INFO][4645] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0 goldmane-54d579b49d- calico-system 4a2b5f2c-0765-434f-910d-07d9f5ff57ab 994 0 2025-09-13 00:24:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 goldmane-54d579b49d-nn2np eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali16fb5d985e3 [] [] }} ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Namespace="calico-system" Pod="goldmane-54d579b49d-nn2np" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.622 [INFO][4645] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Namespace="calico-system" Pod="goldmane-54d579b49d-nn2np" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.708 [INFO][4673] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" HandleID="k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.709 [INFO][4673] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" HandleID="k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"goldmane-54d579b49d-nn2np", "timestamp":"2025-09-13 00:24:52.708458133 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.709 [INFO][4673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.710 [INFO][4673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.710 [INFO][4673] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.721 [INFO][4673] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.729 [INFO][4673] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.744 [INFO][4673] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.751 [INFO][4673] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.758 [INFO][4673] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.758 [INFO][4673] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.767 [INFO][4673] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50 Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.774 [INFO][4673] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.784 [INFO][4673] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.135/26] block=192.168.31.128/26 handle="k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.784 [INFO][4673] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.135/26] handle="k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.786 
[INFO][4673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:52.843154 containerd[1465]: 2025-09-13 00:24:52.786 [INFO][4673] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.135/26] IPv6=[] ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" HandleID="k8s-pod-network.fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.844569 containerd[1465]: 2025-09-13 00:24:52.792 [INFO][4645] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Namespace="calico-system" Pod="goldmane-54d579b49d-nn2np" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4a2b5f2c-0765-434f-910d-07d9f5ff57ab", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"goldmane-54d579b49d-nn2np", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16fb5d985e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:52.844569 containerd[1465]: 2025-09-13 00:24:52.792 [INFO][4645] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.135/32] ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Namespace="calico-system" Pod="goldmane-54d579b49d-nn2np" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.844569 containerd[1465]: 2025-09-13 00:24:52.793 [INFO][4645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16fb5d985e3 ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Namespace="calico-system" Pod="goldmane-54d579b49d-nn2np" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.844569 containerd[1465]: 2025-09-13 00:24:52.797 [INFO][4645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Namespace="calico-system" Pod="goldmane-54d579b49d-nn2np" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.844569 containerd[1465]: 2025-09-13 00:24:52.798 [INFO][4645] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Namespace="calico-system" Pod="goldmane-54d579b49d-nn2np" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4a2b5f2c-0765-434f-910d-07d9f5ff57ab", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50", Pod:"goldmane-54d579b49d-nn2np", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16fb5d985e3", MAC:"9a:98:78:a2:f2:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:52.844569 containerd[1465]: 2025-09-13 00:24:52.828 [INFO][4645] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50" Namespace="calico-system" Pod="goldmane-54d579b49d-nn2np" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:24:52.885719 containerd[1465]: time="2025-09-13T00:24:52.884698767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:52.885719 containerd[1465]: time="2025-09-13T00:24:52.884783235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:52.885719 containerd[1465]: time="2025-09-13T00:24:52.884798124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:52.885719 containerd[1465]: time="2025-09-13T00:24:52.884904649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:52.907086 systemd-networkd[1378]: cali2b7aacc3df4: Link UP Sep 13 00:24:52.908511 systemd-networkd[1378]: cali2b7aacc3df4: Gained carrier Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.657 [INFO][4657] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.704 [INFO][4657] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0 calico-kube-controllers-856fbd7bbd- calico-system fb440148-9fbd-4f08-a9ed-06e94ecc9e57 993 0 2025-09-13 00:24:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:856fbd7bbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 calico-kube-controllers-856fbd7bbd-gmpj2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2b7aacc3df4 [] [] }} ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Namespace="calico-system" Pod="calico-kube-controllers-856fbd7bbd-gmpj2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.704 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Namespace="calico-system" Pod="calico-kube-controllers-856fbd7bbd-gmpj2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.773 [INFO][4682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" HandleID="k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.773 [INFO][4682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" HandleID="k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"calico-kube-controllers-856fbd7bbd-gmpj2", "timestamp":"2025-09-13 00:24:52.771203509 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.774 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.785 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
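The IPAM traces here follow one fixed shape: [4673] above, and [4682] just now, each acquire the host-wide IPAM lock, assign from the node's affine block, and release the lock. That lock serializes concurrent CNI ADDs on a node, so two pods being set up at the same time can never be handed the same address. A minimal Go sketch of the serialize-assign-release pattern, assuming a toy in-memory allocator (names are illustrative, not Calico's actual ipam_plugin.go):

package main

import (
    "fmt"
    "net"
    "sync"
)

// hostIPAM mimics the "About to acquire / Acquired / Released host-wide
// IPAM lock" sequence above: one mutex guards all assignments on a node.
type hostIPAM struct {
    mu    sync.Mutex    // the host-wide IPAM lock
    block *net.IPNet    // the node's affine block, e.g. 192.168.31.128/26
    used  map[byte]bool // offsets already handed out within the block
}

func (h *hostIPAM) assign(handleID string) (net.IP, error) {
    h.mu.Lock()         // "Acquired host-wide IPAM lock."
    defer h.mu.Unlock() // "Released host-wide IPAM lock."

    base := h.block.IP.To4()
    ones, bits := h.block.Mask.Size() // /26 -> 64 addresses
    for off := byte(1); int(off) < (1<<(bits-ones))-1; off++ {
        if !h.used[off] {
            h.used[off] = true
            return net.IPv4(base[0], base[1], base[2], base[3]+off), nil
        }
    }
    return nil, fmt.Errorf("affine block %s exhausted (handle %s)", h.block, handleID)
}

In the real flow the lock and block state live in the datastore, and each assignment is recorded under the HandleID derived from the ContainerID (the k8s-pod-network.<id> strings above) so the address can be found again at teardown.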
Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.785 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.825 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.847 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.859 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.863 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.870 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.872 [INFO][4682] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.878 [INFO][4682] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.887 [INFO][4682] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.899 [INFO][4682] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.136/26] block=192.168.31.128/26 handle="k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.899 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.136/26] handle="k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.899 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
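Inside the locked section, [4682] walks Calico's block-affinity steps in order: look up the host's affinities, try the affine CIDR 192.168.31.128/26, load the block, confirm the affinity, pick a free address, create a handle named after the container, and write the block back to claim the IP. The summary entry that follows reports the result, 192.168.31.136, one past the .135 that [4673] claimed for goldmane a moment earlier. The arithmetic, as a small runnable Go example with figures taken from the log:

package main

import (
    "fmt"
    "net"
)

func main() {
    // The node's affine block from the log.
    _, block, _ := net.ParseCIDR("192.168.31.128/26")
    ones, bits := block.Mask.Size()
    fmt.Println("addresses in block:", 1<<(bits-ones)) // 64 (.128 through .191)

    // 192.168.31.136 sits at offset 8 in that block, immediately after
    // 192.168.31.135, which the previous request assigned.
    ip := net.ParseIP("192.168.31.136").To4()
    fmt.Println("offset of .136:", ip[3]-block.IP.To4()[3]) // 8
}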
Sep 13 00:24:52.932513 containerd[1465]: 2025-09-13 00:24:52.899 [INFO][4682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.136/26] IPv6=[] ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" HandleID="k8s-pod-network.6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.934605 containerd[1465]: 2025-09-13 00:24:52.902 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Namespace="calico-system" Pod="calico-kube-controllers-856fbd7bbd-gmpj2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0", GenerateName:"calico-kube-controllers-856fbd7bbd-", Namespace:"calico-system", SelfLink:"", UID:"fb440148-9fbd-4f08-a9ed-06e94ecc9e57", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856fbd7bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"calico-kube-controllers-856fbd7bbd-gmpj2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b7aacc3df4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:52.934605 containerd[1465]: 2025-09-13 00:24:52.902 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.136/32] ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Namespace="calico-system" Pod="calico-kube-controllers-856fbd7bbd-gmpj2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.934605 containerd[1465]: 2025-09-13 00:24:52.902 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b7aacc3df4 ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Namespace="calico-system" Pod="calico-kube-controllers-856fbd7bbd-gmpj2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.934605 containerd[1465]: 2025-09-13 00:24:52.909 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Namespace="calico-system" Pod="calico-kube-controllers-856fbd7bbd-gmpj2" 
WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.934605 containerd[1465]: 2025-09-13 00:24:52.910 [INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Namespace="calico-system" Pod="calico-kube-controllers-856fbd7bbd-gmpj2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0", GenerateName:"calico-kube-controllers-856fbd7bbd-", Namespace:"calico-system", SelfLink:"", UID:"fb440148-9fbd-4f08-a9ed-06e94ecc9e57", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856fbd7bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc", Pod:"calico-kube-controllers-856fbd7bbd-gmpj2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b7aacc3df4", MAC:"d6:d8:28:55:4a:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:52.934605 containerd[1465]: 2025-09-13 00:24:52.925 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc" Namespace="calico-system" Pod="calico-kube-controllers-856fbd7bbd-gmpj2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:24:52.937665 systemd[1]: Started cri-containerd-fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50.scope - libcontainer container fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50. Sep 13 00:24:52.964656 containerd[1465]: time="2025-09-13T00:24:52.963813931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:52.964819 containerd[1465]: time="2025-09-13T00:24:52.964651794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:52.964819 containerd[1465]: time="2025-09-13T00:24:52.964700963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:52.964944 containerd[1465]: time="2025-09-13T00:24:52.964917649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:52.995702 systemd[1]: Started cri-containerd-6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc.scope - libcontainer container 6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc. Sep 13 00:24:53.006507 containerd[1465]: time="2025-09-13T00:24:53.006082236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nn2np,Uid:4a2b5f2c-0765-434f-910d-07d9f5ff57ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50\"" Sep 13 00:24:53.085641 containerd[1465]: time="2025-09-13T00:24:53.085518084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856fbd7bbd-gmpj2,Uid:fb440148-9fbd-4f08-a9ed-06e94ecc9e57,Namespace:calico-system,Attempt:1,} returns sandbox id \"6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc\"" Sep 13 00:24:53.159690 systemd-networkd[1378]: cali398d550cbde: Gained IPv6LL Sep 13 00:24:53.237259 kernel: bpftool[4802]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:24:53.245822 containerd[1465]: time="2025-09-13T00:24:53.245781157Z" level=info msg="StopPodSandbox for \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\"" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.331 [INFO][4812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.332 [INFO][4812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" iface="eth0" netns="/var/run/netns/cni-48fdce62-af90-e82b-1f2b-9402c90fd8ba" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.332 [INFO][4812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" iface="eth0" netns="/var/run/netns/cni-48fdce62-af90-e82b-1f2b-9402c90fd8ba" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.333 [INFO][4812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" iface="eth0" netns="/var/run/netns/cni-48fdce62-af90-e82b-1f2b-9402c90fd8ba" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.333 [INFO][4812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.333 [INFO][4812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.372 [INFO][4819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.372 [INFO][4819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.373 [INFO][4819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
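The StopPodSandbox teardown above, together with the [4819] WARNING that follows, shows the release side being deliberately tolerant: Calico first tries to free whatever is recorded under the handle ID, and when nothing is there ("Asked to release address but it doesn't exist. Ignoring") it falls back to releasing by workload ID, so repeated or out-of-order teardowns stay idempotent. A hedged Go sketch of that fallback (hypothetical helper and example values, not the real ipam_plugin.go):

package main

import (
    "log"
    "net"
)

// releaseEndpoint frees an allocation by handle ID first, then by workload
// ID, mirroring the two "Releasing address using ..." entries in the log.
func releaseEndpoint(alloc map[string]net.IP, handleID, workloadID string) {
    if ip, ok := alloc[handleID]; ok {
        delete(alloc, handleID)
        log.Printf("released %s via handle %s", ip, handleID)
        return
    }
    // Nothing under the handle: ignore, as the WARNING does, and try the
    // workload ID so stale allocations still get cleaned up.
    if ip, ok := alloc[workloadID]; ok {
        delete(alloc, workloadID)
        log.Printf("released %s via workload %s", ip, workloadID)
    }
}

func main() {
    // Made-up state: an address recorded under a workload ID only.
    alloc := map[string]net.IP{"coredns-668d6bf9bc-bmgkp": net.ParseIP("192.168.31.130")}
    releaseEndpoint(alloc, "k8s-pod-network.05b5ec1b52f1", "coredns-668d6bf9bc-bmgkp")
}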
Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.380 [WARNING][4819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.380 [INFO][4819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.382 [INFO][4819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:53.389323 containerd[1465]: 2025-09-13 00:24:53.385 [INFO][4812] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:24:53.390743 containerd[1465]: time="2025-09-13T00:24:53.389601814Z" level=info msg="TearDown network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\" successfully" Sep 13 00:24:53.390743 containerd[1465]: time="2025-09-13T00:24:53.389636817Z" level=info msg="StopPodSandbox for \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\" returns successfully" Sep 13 00:24:53.390811 kubelet[2498]: E0913 00:24:53.389984 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:53.391501 containerd[1465]: time="2025-09-13T00:24:53.391419054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bmgkp,Uid:3c03440a-ff3f-462d-ba46-2398b0c778c8,Namespace:kube-system,Attempt:1,}" Sep 13 00:24:53.645733 systemd-networkd[1378]: cali39a33e31324: Link UP Sep 13 00:24:53.647574 systemd-networkd[1378]: cali39a33e31324: Gained carrier Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.527 [INFO][4826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0 coredns-668d6bf9bc- kube-system 3c03440a-ff3f-462d-ba46-2398b0c778c8 1006 0 2025-09-13 00:24:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 coredns-668d6bf9bc-bmgkp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali39a33e31324 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Namespace="kube-system" Pod="coredns-668d6bf9bc-bmgkp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.528 [INFO][4826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Namespace="kube-system" Pod="coredns-668d6bf9bc-bmgkp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.584 
[INFO][4838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" HandleID="k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.584 [INFO][4838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" HandleID="k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332490), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"coredns-668d6bf9bc-bmgkp", "timestamp":"2025-09-13 00:24:53.584013375 +0000 UTC"}, Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.584 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.584 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.584 [INFO][4838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.594 [INFO][4838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.600 [INFO][4838] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.605 [INFO][4838] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.608 [INFO][4838] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.614 [INFO][4838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.614 [INFO][4838] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.616 [INFO][4838] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.623 [INFO][4838] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.634 [INFO][4838] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.137/26] block=192.168.31.128/26 
handle="k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.634 [INFO][4838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.137/26] handle="k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.634 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:24:53.692468 containerd[1465]: 2025-09-13 00:24:53.634 [INFO][4838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.137/26] IPv6=[] ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" HandleID="k8s-pod-network.f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.693514 containerd[1465]: 2025-09-13 00:24:53.640 [INFO][4826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Namespace="kube-system" Pod="coredns-668d6bf9bc-bmgkp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3c03440a-ff3f-462d-ba46-2398b0c778c8", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"coredns-668d6bf9bc-bmgkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39a33e31324", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:53.693514 containerd[1465]: 2025-09-13 00:24:53.640 [INFO][4826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.137/32] ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Namespace="kube-system" Pod="coredns-668d6bf9bc-bmgkp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.693514 containerd[1465]: 2025-09-13 00:24:53.640 [INFO][4826] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to cali39a33e31324 ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Namespace="kube-system" Pod="coredns-668d6bf9bc-bmgkp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.693514 containerd[1465]: 2025-09-13 00:24:53.647 [INFO][4826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Namespace="kube-system" Pod="coredns-668d6bf9bc-bmgkp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.693514 containerd[1465]: 2025-09-13 00:24:53.648 [INFO][4826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Namespace="kube-system" Pod="coredns-668d6bf9bc-bmgkp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3c03440a-ff3f-462d-ba46-2398b0c778c8", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea", Pod:"coredns-668d6bf9bc-bmgkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39a33e31324", MAC:"f2:f7:8a:31:7b:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:24:53.693514 containerd[1465]: 2025-09-13 00:24:53.681 [INFO][4826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea" Namespace="kube-system" Pod="coredns-668d6bf9bc-bmgkp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:24:53.719014 kubelet[2498]: E0913 00:24:53.718977 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:53.745654 containerd[1465]: 
time="2025-09-13T00:24:53.745326264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:24:53.745654 containerd[1465]: time="2025-09-13T00:24:53.745541654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:24:53.745654 containerd[1465]: time="2025-09-13T00:24:53.745553015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:53.746579 containerd[1465]: time="2025-09-13T00:24:53.746391076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:24:53.771177 systemd[1]: Started cri-containerd-f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea.scope - libcontainer container f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea. Sep 13 00:24:53.834589 systemd[1]: run-netns-cni\x2d48fdce62\x2daf90\x2de82b\x2d1f2b\x2d9402c90fd8ba.mount: Deactivated successfully. Sep 13 00:24:53.870841 containerd[1465]: time="2025-09-13T00:24:53.870799867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bmgkp,Uid:3c03440a-ff3f-462d-ba46-2398b0c778c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea\"" Sep 13 00:24:53.872734 kubelet[2498]: E0913 00:24:53.872695 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:53.877767 containerd[1465]: time="2025-09-13T00:24:53.877457953Z" level=info msg="CreateContainer within sandbox \"f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:24:53.902580 containerd[1465]: time="2025-09-13T00:24:53.900695744Z" level=info msg="CreateContainer within sandbox \"f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b93c5667aa741ca84a76b4679ee02181b76239fa6bdfe3162050bc717fbdf810\"" Sep 13 00:24:53.910711 containerd[1465]: time="2025-09-13T00:24:53.910055762Z" level=info msg="StartContainer for \"b93c5667aa741ca84a76b4679ee02181b76239fa6bdfe3162050bc717fbdf810\"" Sep 13 00:24:54.000604 systemd[1]: Started cri-containerd-b93c5667aa741ca84a76b4679ee02181b76239fa6bdfe3162050bc717fbdf810.scope - libcontainer container b93c5667aa741ca84a76b4679ee02181b76239fa6bdfe3162050bc717fbdf810. 
Sep 13 00:24:54.023538 kubelet[2498]: I0913 00:24:54.021978 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:24:54.051124 systemd-networkd[1378]: vxlan.calico: Link UP Sep 13 00:24:54.051135 systemd-networkd[1378]: vxlan.calico: Gained carrier Sep 13 00:24:54.168771 containerd[1465]: time="2025-09-13T00:24:54.168542048Z" level=info msg="StartContainer for \"b93c5667aa741ca84a76b4679ee02181b76239fa6bdfe3162050bc717fbdf810\" returns successfully" Sep 13 00:24:54.632068 systemd-networkd[1378]: cali16fb5d985e3: Gained IPv6LL Sep 13 00:24:54.733316 kubelet[2498]: E0913 00:24:54.733274 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:54.760928 systemd-networkd[1378]: cali2b7aacc3df4: Gained IPv6LL Sep 13 00:24:54.769132 kubelet[2498]: I0913 00:24:54.769043 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bmgkp" podStartSLOduration=43.76901151 podStartE2EDuration="43.76901151s" podCreationTimestamp="2025-09-13 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:54.767392265 +0000 UTC m=+49.649461432" watchObservedRunningTime="2025-09-13 00:24:54.76901151 +0000 UTC m=+49.651080677" Sep 13 00:24:54.952624 systemd-networkd[1378]: cali39a33e31324: Gained IPv6LL Sep 13 00:24:54.988889 systemd[1]: run-containerd-runc-k8s.io-94f43530bfaf47332dc7fa09135fbfeb3b7de77c20605892d05d715a22b4a18a-runc.A3VYnR.mount: Deactivated successfully. Sep 13 00:24:55.529494 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Sep 13 00:24:55.752783 kubelet[2498]: E0913 00:24:55.751209 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:56.272159 containerd[1465]: time="2025-09-13T00:24:56.272110695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:56.274024 containerd[1465]: time="2025-09-13T00:24:56.273953297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:24:56.274924 containerd[1465]: time="2025-09-13T00:24:56.274859726Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:56.277548 containerd[1465]: time="2025-09-13T00:24:56.277494600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:56.279261 containerd[1465]: time="2025-09-13T00:24:56.278889288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.327858935s" Sep 13 00:24:56.279261 containerd[1465]: time="2025-09-13T00:24:56.278970326Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:24:56.280778 containerd[1465]: time="2025-09-13T00:24:56.280749587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:24:56.284972 containerd[1465]: time="2025-09-13T00:24:56.284735412Z" level=info msg="CreateContainer within sandbox \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:24:56.304966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491899424.mount: Deactivated successfully. Sep 13 00:24:56.307567 containerd[1465]: time="2025-09-13T00:24:56.306914947Z" level=info msg="CreateContainer within sandbox \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\"" Sep 13 00:24:56.308642 containerd[1465]: time="2025-09-13T00:24:56.308602882Z" level=info msg="StartContainer for \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\"" Sep 13 00:24:56.361912 systemd[1]: run-containerd-runc-k8s.io-ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b-runc.wdePam.mount: Deactivated successfully. Sep 13 00:24:56.371731 systemd[1]: Started cri-containerd-ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b.scope - libcontainer container ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b. Sep 13 00:24:56.444776 containerd[1465]: time="2025-09-13T00:24:56.444723399Z" level=info msg="StartContainer for \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\" returns successfully" Sep 13 00:24:56.761471 kubelet[2498]: E0913 00:24:56.759242 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:24:56.804725 kubelet[2498]: I0913 00:24:56.804662 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66fc9d466c-fpvl5" podStartSLOduration=28.52554884 podStartE2EDuration="34.804236619s" podCreationTimestamp="2025-09-13 00:24:22 +0000 UTC" firstStartedPulling="2025-09-13 00:24:50.001848722 +0000 UTC m=+44.883917868" lastFinishedPulling="2025-09-13 00:24:56.280536489 +0000 UTC m=+51.162605647" observedRunningTime="2025-09-13 00:24:56.803696372 +0000 UTC m=+51.685765539" watchObservedRunningTime="2025-09-13 00:24:56.804236619 +0000 UTC m=+51.686305780" Sep 13 00:24:57.763612 kubelet[2498]: I0913 00:24:57.761813 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:24:59.161929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088568631.mount: Deactivated successfully. 
Sep 13 00:24:59.236028 containerd[1465]: time="2025-09-13T00:24:59.235968044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:59.237060 containerd[1465]: time="2025-09-13T00:24:59.236991426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:24:59.237987 containerd[1465]: time="2025-09-13T00:24:59.237719702Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:59.240232 containerd[1465]: time="2025-09-13T00:24:59.240197913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:59.245351 containerd[1465]: time="2025-09-13T00:24:59.245017505Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.964219724s" Sep 13 00:24:59.245351 containerd[1465]: time="2025-09-13T00:24:59.245063488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:24:59.247539 containerd[1465]: time="2025-09-13T00:24:59.247165514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:24:59.249774 containerd[1465]: time="2025-09-13T00:24:59.249744998Z" level=info msg="CreateContainer within sandbox \"93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:24:59.293377 containerd[1465]: time="2025-09-13T00:24:59.292791778Z" level=info msg="CreateContainer within sandbox \"93c7548a56d9c3af717d3a08d99cbab2ddde271dd1566908d9fb2a56fb224f3b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5c90a3c1932a798bd6e9fc8ee05461d6f35a6cdf173bad966e05c90cbdb81cd4\"" Sep 13 00:24:59.294907 containerd[1465]: time="2025-09-13T00:24:59.294851701Z" level=info msg="StartContainer for \"5c90a3c1932a798bd6e9fc8ee05461d6f35a6cdf173bad966e05c90cbdb81cd4\"" Sep 13 00:24:59.387694 systemd[1]: Started cri-containerd-5c90a3c1932a798bd6e9fc8ee05461d6f35a6cdf173bad966e05c90cbdb81cd4.scope - libcontainer container 5c90a3c1932a798bd6e9fc8ee05461d6f35a6cdf173bad966e05c90cbdb81cd4. 
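Interspersed above, systemd-networkd walks each Calico veth and the vxlan.calico device through the same lifecycle: Link UP, Gained carrier, and later Gained IPv6LL once the interface's IPv6 link-local address becomes usable. Equivalent transitions can be observed from Go with the vishvananda/netlink package; this is a sketch under the assumption that that package and its subscribe API are available, not how systemd-networkd itself works:

package main

import (
    "fmt"

    "github.com/vishvananda/netlink"
)

func main() {
    // Subscribe to rtnetlink link updates, the same kernel events behind
    // the "Link UP" / "Gained carrier" messages above.
    updates := make(chan netlink.LinkUpdate)
    done := make(chan struct{})
    defer close(done)

    if err := netlink.LinkSubscribe(updates, done); err != nil {
        panic(err)
    }
    for u := range updates {
        attrs := u.Link.Attrs()
        fmt.Printf("%s: flags=%v operstate=%v\n", attrs.Name, attrs.Flags, attrs.OperState)
    }
}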
Sep 13 00:24:59.440001 containerd[1465]: time="2025-09-13T00:24:59.439206872Z" level=info msg="StartContainer for \"5c90a3c1932a798bd6e9fc8ee05461d6f35a6cdf173bad966e05c90cbdb81cd4\" returns successfully" Sep 13 00:24:59.620740 containerd[1465]: time="2025-09-13T00:24:59.620646299Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:59.622111 containerd[1465]: time="2025-09-13T00:24:59.622051431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:24:59.625174 containerd[1465]: time="2025-09-13T00:24:59.625034519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 377.827875ms" Sep 13 00:24:59.625174 containerd[1465]: time="2025-09-13T00:24:59.625077739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:24:59.626578 containerd[1465]: time="2025-09-13T00:24:59.626297919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:24:59.630227 containerd[1465]: time="2025-09-13T00:24:59.630056518Z" level=info msg="CreateContainer within sandbox \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:24:59.692155 containerd[1465]: time="2025-09-13T00:24:59.692039247Z" level=info msg="CreateContainer within sandbox \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\"" Sep 13 00:24:59.695830 containerd[1465]: time="2025-09-13T00:24:59.695785335Z" level=info msg="StartContainer for \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\"" Sep 13 00:24:59.746016 systemd[1]: Started cri-containerd-df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194.scope - libcontainer container df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194. 
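The kubelet error repeated through this window ("Nameserver limits were exceeded...", dns.go:153) is a warning rather than a failure: the resolver honors at most three nameserver entries, so the kubelet truncates what it copies into a pod's resolv.conf, and the applied line above even keeps a duplicate (67.207.67.3 appears twice) because the cut is positional, not deduplicating. A toy Go version of that trimming, assuming the conventional limit of three (the constant name and the fourth server are mine, not kubelet's):

package main

import "fmt"

// maxNameservers mirrors the classic resolv.conf limit the kubelet applies
// when building a pod's DNS config.
const maxNameservers = 3

func applyNameserverLimit(servers []string) []string {
    if len(servers) <= maxNameservers {
        return servers
    }
    // Entries past the limit are dropped, which is what the "some
    // nameservers have been omitted" message above reports.
    return servers[:maxNameservers]
}

func main() {
    hostResolvConf := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "203.0.113.53"}
    fmt.Println(applyNameserverLimit(hostResolvConf)) // [67.207.67.3 67.207.67.2 67.207.67.3]
}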
Sep 13 00:24:59.815396 kubelet[2498]: I0913 00:24:59.814743 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54dcf47db7-fsd8q" podStartSLOduration=2.302791059 podStartE2EDuration="12.814719362s" podCreationTimestamp="2025-09-13 00:24:47 +0000 UTC" firstStartedPulling="2025-09-13 00:24:48.734217209 +0000 UTC m=+43.616286365" lastFinishedPulling="2025-09-13 00:24:59.246145509 +0000 UTC m=+54.128214668" observedRunningTime="2025-09-13 00:24:59.813816762 +0000 UTC m=+54.695885931" watchObservedRunningTime="2025-09-13 00:24:59.814719362 +0000 UTC m=+54.696788523" Sep 13 00:24:59.834153 containerd[1465]: time="2025-09-13T00:24:59.833716663Z" level=info msg="StartContainer for \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\" returns successfully" Sep 13 00:25:00.012035 containerd[1465]: time="2025-09-13T00:25:00.011845203Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:00.034001 containerd[1465]: time="2025-09-13T00:25:00.033904772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:25:00.044769 containerd[1465]: time="2025-09-13T00:25:00.044491468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 418.155543ms" Sep 13 00:25:00.044769 containerd[1465]: time="2025-09-13T00:25:00.044557033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:25:00.046587 containerd[1465]: time="2025-09-13T00:25:00.046545766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:25:00.053197 containerd[1465]: time="2025-09-13T00:25:00.053154516Z" level=info msg="CreateContainer within sandbox \"c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:25:00.076971 containerd[1465]: time="2025-09-13T00:25:00.076811114Z" level=info msg="CreateContainer within sandbox \"c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99a3383db1b79ec2e3adcc3f43d80722756910b42b2f2c9388471583e1cd528d\"" Sep 13 00:25:00.078856 containerd[1465]: time="2025-09-13T00:25:00.078465535Z" level=info msg="StartContainer for \"99a3383db1b79ec2e3adcc3f43d80722756910b42b2f2c9388471583e1cd528d\"" Sep 13 00:25:00.173736 systemd[1]: Started cri-containerd-99a3383db1b79ec2e3adcc3f43d80722756910b42b2f2c9388471583e1cd528d.scope - libcontainer container 99a3383db1b79ec2e3adcc3f43d80722756910b42b2f2c9388471583e1cd528d. Sep 13 00:25:00.245047 containerd[1465]: time="2025-09-13T00:25:00.244989595Z" level=info msg="StartContainer for \"99a3383db1b79ec2e3adcc3f43d80722756910b42b2f2c9388471583e1cd528d\" returns successfully" Sep 13 00:25:00.512119 systemd[1]: run-containerd-runc-k8s.io-df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194-runc.vdliNy.mount: Deactivated successfully. 
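The pod_startup_latency_tracker entries here fit together arithmetically: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it, which is why pods with zeroed pull timestamps, like coredns earlier, report the two as equal. Checking the whisker-54dcf47db7-fsd8q figures in Go:

package main

import (
    "fmt"
    "time"
)

func mustParse(s string) time.Time {
    t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    // Timestamps copied from the whisker-54dcf47db7-fsd8q entry above.
    firstPull := mustParse("2025-09-13 00:24:48.734217209 +0000 UTC")
    lastPull := mustParse("2025-09-13 00:24:59.246145509 +0000 UTC")
    e2e, _ := time.ParseDuration("12.814719362s")

    // SLO duration = end-to-end duration minus the image-pull window.
    slo := e2e - lastPull.Sub(firstPull)
    fmt.Println(slo) // 2.302791062s -- matches the 2.302791059 reported above
}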
Sep 13 00:25:00.837580 kubelet[2498]: I0913 00:25:00.836937 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66fc9d466c-2qnrs" podStartSLOduration=30.235018816 podStartE2EDuration="38.836910986s" podCreationTimestamp="2025-09-13 00:24:22 +0000 UTC" firstStartedPulling="2025-09-13 00:24:51.024267142 +0000 UTC m=+45.906336291" lastFinishedPulling="2025-09-13 00:24:59.626159295 +0000 UTC m=+54.508228461" observedRunningTime="2025-09-13 00:25:00.836796738 +0000 UTC m=+55.718865906" watchObservedRunningTime="2025-09-13 00:25:00.836910986 +0000 UTC m=+55.718980155" Sep 13 00:25:01.845670 kubelet[2498]: I0913 00:25:01.842920 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:25:01.845670 kubelet[2498]: I0913 00:25:01.843665 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:25:03.765898 containerd[1465]: time="2025-09-13T00:25:03.765718683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:03.767787 containerd[1465]: time="2025-09-13T00:25:03.767689645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:25:03.769414 containerd[1465]: time="2025-09-13T00:25:03.769367767Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:03.780325 containerd[1465]: time="2025-09-13T00:25:03.778898508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:03.781372 containerd[1465]: time="2025-09-13T00:25:03.781314483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.734727996s" Sep 13 00:25:03.781372 containerd[1465]: time="2025-09-13T00:25:03.781373134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:25:03.784954 containerd[1465]: time="2025-09-13T00:25:03.784736815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:25:03.787784 containerd[1465]: time="2025-09-13T00:25:03.787727758Z" level=info msg="CreateContainer within sandbox \"8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:25:03.814822 containerd[1465]: time="2025-09-13T00:25:03.814758767Z" level=info msg="CreateContainer within sandbox \"8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4d1c15ffb2a0a35fb77cc30a68ca83f9ed518887776f0a8957c121ed18799c1b\"" Sep 13 00:25:03.816026 containerd[1465]: time="2025-09-13T00:25:03.815979655Z" level=info msg="StartContainer for 
\"4d1c15ffb2a0a35fb77cc30a68ca83f9ed518887776f0a8957c121ed18799c1b\"" Sep 13 00:25:03.885900 systemd[1]: Started cri-containerd-4d1c15ffb2a0a35fb77cc30a68ca83f9ed518887776f0a8957c121ed18799c1b.scope - libcontainer container 4d1c15ffb2a0a35fb77cc30a68ca83f9ed518887776f0a8957c121ed18799c1b. Sep 13 00:25:03.934551 containerd[1465]: time="2025-09-13T00:25:03.934267383Z" level=info msg="StartContainer for \"4d1c15ffb2a0a35fb77cc30a68ca83f9ed518887776f0a8957c121ed18799c1b\" returns successfully" Sep 13 00:25:04.838376 kubelet[2498]: I0913 00:25:04.838292 2498 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:25:04.842891 kubelet[2498]: I0913 00:25:04.842841 2498 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:25:04.953468 kubelet[2498]: I0913 00:25:04.952633 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d86d44bf-ff8mw" podStartSLOduration=33.811860055 podStartE2EDuration="41.952610259s" podCreationTimestamp="2025-09-13 00:24:23 +0000 UTC" firstStartedPulling="2025-09-13 00:24:51.905067033 +0000 UTC m=+46.787136191" lastFinishedPulling="2025-09-13 00:25:00.045817237 +0000 UTC m=+54.927886395" observedRunningTime="2025-09-13 00:25:00.86779546 +0000 UTC m=+55.749864629" watchObservedRunningTime="2025-09-13 00:25:04.952610259 +0000 UTC m=+59.834679426" Sep 13 00:25:04.953468 kubelet[2498]: I0913 00:25:04.953072 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gm62f" podStartSLOduration=24.996319253 podStartE2EDuration="38.953065948s" podCreationTimestamp="2025-09-13 00:24:26 +0000 UTC" firstStartedPulling="2025-09-13 00:24:49.825821362 +0000 UTC m=+44.707890520" lastFinishedPulling="2025-09-13 00:25:03.782568066 +0000 UTC m=+58.664637215" observedRunningTime="2025-09-13 00:25:04.950994311 +0000 UTC m=+59.833063478" watchObservedRunningTime="2025-09-13 00:25:04.953065948 +0000 UTC m=+59.835135114" Sep 13 00:25:05.387702 containerd[1465]: time="2025-09-13T00:25:05.387660265Z" level=info msg="StopPodSandbox for \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\"" Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.566 [WARNING][5302] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4a2b5f2c-0765-434f-910d-07d9f5ff57ab", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50", Pod:"goldmane-54d579b49d-nn2np", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16fb5d985e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.568 [INFO][5302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.568 [INFO][5302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" iface="eth0" netns="" Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.568 [INFO][5302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.568 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.686 [INFO][5309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.688 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.688 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.699 [WARNING][5309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.699 [INFO][5309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.702 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:05.709264 containerd[1465]: 2025-09-13 00:25:05.705 [INFO][5302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:25:05.710395 containerd[1465]: time="2025-09-13T00:25:05.710182005Z" level=info msg="TearDown network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\" successfully" Sep 13 00:25:05.710395 containerd[1465]: time="2025-09-13T00:25:05.710230765Z" level=info msg="StopPodSandbox for \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\" returns successfully" Sep 13 00:25:05.726449 containerd[1465]: time="2025-09-13T00:25:05.726358314Z" level=info msg="RemovePodSandbox for \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\"" Sep 13 00:25:05.728668 containerd[1465]: time="2025-09-13T00:25:05.728614946Z" level=info msg="Forcibly stopping sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\"" Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.820 [WARNING][5323] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4a2b5f2c-0765-434f-910d-07d9f5ff57ab", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50", Pod:"goldmane-54d579b49d-nn2np", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16fb5d985e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.821 [INFO][5323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.821 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" iface="eth0" netns="" Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.821 [INFO][5323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.821 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.859 [INFO][5330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.859 [INFO][5330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.859 [INFO][5330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.869 [WARNING][5330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.869 [INFO][5330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" HandleID="k8s-pod-network.c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-goldmane--54d579b49d--nn2np-eth0" Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.872 [INFO][5330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:05.878818 containerd[1465]: 2025-09-13 00:25:05.875 [INFO][5323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f" Sep 13 00:25:05.880460 containerd[1465]: time="2025-09-13T00:25:05.878867315Z" level=info msg="TearDown network for sandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\" successfully" Sep 13 00:25:05.904010 containerd[1465]: time="2025-09-13T00:25:05.903950790Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:05.928460 containerd[1465]: time="2025-09-13T00:25:05.928376938Z" level=info msg="RemovePodSandbox \"c8b3e0e1d35141d279e8cadcc37dc82a60b7f846a776f293ea3a27536612eb1f\" returns successfully" Sep 13 00:25:05.942742 containerd[1465]: time="2025-09-13T00:25:05.942600130Z" level=info msg="StopPodSandbox for \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\"" Sep 13 00:25:05.944316 kubelet[2498]: I0913 00:25:05.943751 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.165 [WARNING][5344] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3c03440a-ff3f-462d-ba46-2398b0c778c8", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea", Pod:"coredns-668d6bf9bc-bmgkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39a33e31324", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.166 [INFO][5344] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.166 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" iface="eth0" netns="" Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.166 [INFO][5344] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.166 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.244 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.245 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.245 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.309 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.309 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.314 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:06.324499 containerd[1465]: 2025-09-13 00:25:06.319 [INFO][5344] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:25:06.326348 containerd[1465]: time="2025-09-13T00:25:06.324576486Z" level=info msg="TearDown network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\" successfully" Sep 13 00:25:06.326348 containerd[1465]: time="2025-09-13T00:25:06.324616350Z" level=info msg="StopPodSandbox for \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\" returns successfully" Sep 13 00:25:06.326634 containerd[1465]: time="2025-09-13T00:25:06.326598115Z" level=info msg="RemovePodSandbox for \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\"" Sep 13 00:25:06.326634 containerd[1465]: time="2025-09-13T00:25:06.326636863Z" level=info msg="Forcibly stopping sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\"" Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.394 [WARNING][5377] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3c03440a-ff3f-462d-ba46-2398b0c778c8", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"f3a0a72158171235c16659d0bc4d0e6df61cb14e1759f31c4dcb00a7f98aa1ea", Pod:"coredns-668d6bf9bc-bmgkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39a33e31324", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.395 [INFO][5377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.395 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" iface="eth0" netns="" Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.395 [INFO][5377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.395 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.439 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.440 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.440 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.450 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.450 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" HandleID="k8s-pod-network.05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--bmgkp-eth0" Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.456 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:06.495309 containerd[1465]: 2025-09-13 00:25:06.472 [INFO][5377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd" Sep 13 00:25:06.495309 containerd[1465]: time="2025-09-13T00:25:06.493169545Z" level=info msg="TearDown network for sandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\" successfully" Sep 13 00:25:06.553313 containerd[1465]: time="2025-09-13T00:25:06.553259780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:06.553609 containerd[1465]: time="2025-09-13T00:25:06.553580703Z" level=info msg="RemovePodSandbox \"05b5ec1b52f11de0eb61e46d0605f1ec7d915895009e6d05e78d5678ee74fcbd\" returns successfully" Sep 13 00:25:06.582393 containerd[1465]: time="2025-09-13T00:25:06.582167977Z" level=info msg="StopPodSandbox for \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\"" Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.777 [WARNING][5398] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0", GenerateName:"calico-kube-controllers-856fbd7bbd-", Namespace:"calico-system", SelfLink:"", UID:"fb440148-9fbd-4f08-a9ed-06e94ecc9e57", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856fbd7bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc", Pod:"calico-kube-controllers-856fbd7bbd-gmpj2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b7aacc3df4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.777 [INFO][5398] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.777 [INFO][5398] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" iface="eth0" netns="" Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.777 [INFO][5398] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.777 [INFO][5398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.838 [INFO][5406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.838 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.838 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.845 [WARNING][5406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.846 [INFO][5406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.848 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:06.852837 containerd[1465]: 2025-09-13 00:25:06.850 [INFO][5398] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:25:06.854006 containerd[1465]: time="2025-09-13T00:25:06.853693759Z" level=info msg="TearDown network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\" successfully" Sep 13 00:25:06.854006 containerd[1465]: time="2025-09-13T00:25:06.853726124Z" level=info msg="StopPodSandbox for \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\" returns successfully" Sep 13 00:25:06.854778 containerd[1465]: time="2025-09-13T00:25:06.854744471Z" level=info msg="RemovePodSandbox for \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\"" Sep 13 00:25:06.854890 containerd[1465]: time="2025-09-13T00:25:06.854788513Z" level=info msg="Forcibly stopping sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\"" Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.914 [WARNING][5422] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0", GenerateName:"calico-kube-controllers-856fbd7bbd-", Namespace:"calico-system", SelfLink:"", UID:"fb440148-9fbd-4f08-a9ed-06e94ecc9e57", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856fbd7bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc", Pod:"calico-kube-controllers-856fbd7bbd-gmpj2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b7aacc3df4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.915 [INFO][5422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.915 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" iface="eth0" netns="" Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.915 [INFO][5422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.915 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.960 [INFO][5429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.960 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.960 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.969 [WARNING][5429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.969 [INFO][5429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" HandleID="k8s-pod-network.0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--kube--controllers--856fbd7bbd--gmpj2-eth0" Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.972 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:06.977832 containerd[1465]: 2025-09-13 00:25:06.974 [INFO][5422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1" Sep 13 00:25:06.977832 containerd[1465]: time="2025-09-13T00:25:06.977830299Z" level=info msg="TearDown network for sandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\" successfully" Sep 13 00:25:06.980592 containerd[1465]: time="2025-09-13T00:25:06.980554348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:06.980668 containerd[1465]: time="2025-09-13T00:25:06.980632924Z" level=info msg="RemovePodSandbox \"0efdd4ce2d0043c8b654f5fd0c20ccaf2fe734ceb8a8b8818194280ffa45fcd1\" returns successfully" Sep 13 00:25:06.981543 containerd[1465]: time="2025-09-13T00:25:06.981400046Z" level=info msg="StopPodSandbox for \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\"" Sep 13 00:25:07.116240 systemd[1]: Started sshd@7-143.198.134.88:22-139.178.68.195:57088.service - OpenSSH per-connection server daemon (139.178.68.195:57088). Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.127 [WARNING][5444] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0", GenerateName:"calico-apiserver-66fc9d466c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c14c5f57-0bd2-4e4c-bbc8-39406c393d42", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fc9d466c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2", Pod:"calico-apiserver-66fc9d466c-2qnrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab305a2645b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.131 [INFO][5444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.132 [INFO][5444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" iface="eth0" netns="" Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.133 [INFO][5444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.133 [INFO][5444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.197 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.197 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.198 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.214 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.214 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.217 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:07.222736 containerd[1465]: 2025-09-13 00:25:07.220 [INFO][5444] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:25:07.229738 containerd[1465]: time="2025-09-13T00:25:07.222798525Z" level=info msg="TearDown network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\" successfully" Sep 13 00:25:07.229738 containerd[1465]: time="2025-09-13T00:25:07.222823346Z" level=info msg="StopPodSandbox for \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\" returns successfully" Sep 13 00:25:07.229738 containerd[1465]: time="2025-09-13T00:25:07.223688025Z" level=info msg="RemovePodSandbox for \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\"" Sep 13 00:25:07.229738 containerd[1465]: time="2025-09-13T00:25:07.223727205Z" level=info msg="Forcibly stopping sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\"" Sep 13 00:25:07.333621 sshd[5453]: Accepted publickey for core from 139.178.68.195 port 57088 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:07.336064 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:07.352486 systemd-logind[1449]: New session 8 of user core. Sep 13 00:25:07.358831 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.323 [WARNING][5469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0", GenerateName:"calico-apiserver-66fc9d466c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c14c5f57-0bd2-4e4c-bbc8-39406c393d42", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fc9d466c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2", Pod:"calico-apiserver-66fc9d466c-2qnrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab305a2645b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.324 [INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.324 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" iface="eth0" netns="" Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.324 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.324 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.383 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.383 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.384 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.393 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.394 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" HandleID="k8s-pod-network.cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.396 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:07.401915 containerd[1465]: 2025-09-13 00:25:07.398 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236" Sep 13 00:25:07.403472 containerd[1465]: time="2025-09-13T00:25:07.401974530Z" level=info msg="TearDown network for sandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\" successfully" Sep 13 00:25:07.408888 containerd[1465]: time="2025-09-13T00:25:07.408813705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:07.409171 containerd[1465]: time="2025-09-13T00:25:07.408902823Z" level=info msg="RemovePodSandbox \"cfd0a52ebf18b87b365ffa33941b9fce81f773b77b0bd03bf6ede291ab32f236\" returns successfully" Sep 13 00:25:07.410641 containerd[1465]: time="2025-09-13T00:25:07.410098948Z" level=info msg="StopPodSandbox for \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\"" Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.482 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0", GenerateName:"calico-apiserver-66fc9d466c-", Namespace:"calico-apiserver", SelfLink:"", UID:"88fd7908-d362-45b9-9c05-84c56d420f5b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fc9d466c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790", Pod:"calico-apiserver-66fc9d466c-fpvl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e5db3ddf93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.483 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.483 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" iface="eth0" netns="" Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.483 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.483 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.530 [INFO][5503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.530 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.530 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.540 [WARNING][5503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.540 [INFO][5503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.543 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:07.550213 containerd[1465]: 2025-09-13 00:25:07.545 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:25:07.551419 containerd[1465]: time="2025-09-13T00:25:07.550593730Z" level=info msg="TearDown network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\" successfully" Sep 13 00:25:07.551419 containerd[1465]: time="2025-09-13T00:25:07.550623454Z" level=info msg="StopPodSandbox for \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\" returns successfully" Sep 13 00:25:07.553060 containerd[1465]: time="2025-09-13T00:25:07.552893568Z" level=info msg="RemovePodSandbox for \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\"" Sep 13 00:25:07.553748 containerd[1465]: time="2025-09-13T00:25:07.553628128Z" level=info msg="Forcibly stopping sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\"" Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.671 [WARNING][5520] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0", GenerateName:"calico-apiserver-66fc9d466c-", Namespace:"calico-apiserver", SelfLink:"", UID:"88fd7908-d362-45b9-9c05-84c56d420f5b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66fc9d466c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790", Pod:"calico-apiserver-66fc9d466c-fpvl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e5db3ddf93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.674 [INFO][5520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.674 [INFO][5520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" iface="eth0" netns="" Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.674 [INFO][5520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.674 [INFO][5520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.760 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.763 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.763 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.778 [WARNING][5531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.778 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" HandleID="k8s-pod-network.c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.785 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:07.813475 containerd[1465]: 2025-09-13 00:25:07.797 [INFO][5520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8" Sep 13 00:25:07.813475 containerd[1465]: time="2025-09-13T00:25:07.812903851Z" level=info msg="TearDown network for sandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\" successfully" Sep 13 00:25:07.826814 containerd[1465]: time="2025-09-13T00:25:07.825953065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:07.826814 containerd[1465]: time="2025-09-13T00:25:07.826057613Z" level=info msg="RemovePodSandbox \"c3c970fb7768695be38a5eb475cf758d4d3f1f44462925ff07a3f7b1ccf767f8\" returns successfully" Sep 13 00:25:07.829797 containerd[1465]: time="2025-09-13T00:25:07.828661601Z" level=info msg="StopPodSandbox for \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\"" Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:07.938 [WARNING][5545] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a5a1c5a-3908-4e95-aa11-b97be572df2c", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b", Pod:"coredns-668d6bf9bc-6tkbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide42c69c7a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:07.940 [INFO][5545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:07.940 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" iface="eth0" netns="" Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:07.940 [INFO][5545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:07.941 [INFO][5545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:08.010 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:08.011 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:08.011 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:08.027 [WARNING][5552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:08.027 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:08.034 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:08.053747 containerd[1465]: 2025-09-13 00:25:08.045 [INFO][5545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:25:08.055427 containerd[1465]: time="2025-09-13T00:25:08.054799372Z" level=info msg="TearDown network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\" successfully" Sep 13 00:25:08.055427 containerd[1465]: time="2025-09-13T00:25:08.054849675Z" level=info msg="StopPodSandbox for \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\" returns successfully" Sep 13 00:25:08.056493 containerd[1465]: time="2025-09-13T00:25:08.055957675Z" level=info msg="RemovePodSandbox for \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\"" Sep 13 00:25:08.056493 containerd[1465]: time="2025-09-13T00:25:08.056002862Z" level=info msg="Forcibly stopping sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\"" Sep 13 00:25:08.331398 sshd[5453]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:08.339220 systemd[1]: sshd@7-143.198.134.88:22-139.178.68.195:57088.service: Deactivated successfully. Sep 13 00:25:08.364358 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:25:08.376878 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:25:08.381492 systemd-logind[1449]: Removed session 8. Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.217 [WARNING][5568] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a5a1c5a-3908-4e95-aa11-b97be572df2c", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"6cc5a4a4083a31805adba6fdba18d9ce3d758b0db91349b9b9f20d56f881918b", Pod:"coredns-668d6bf9bc-6tkbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide42c69c7a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.219 [INFO][5568] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.219 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" iface="eth0" netns="" Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.219 [INFO][5568] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.219 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.390 [INFO][5575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.390 [INFO][5575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.390 [INFO][5575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.412 [WARNING][5575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.412 [INFO][5575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" HandleID="k8s-pod-network.9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-coredns--668d6bf9bc--6tkbb-eth0" Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.423 [INFO][5575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:08.436645 containerd[1465]: 2025-09-13 00:25:08.429 [INFO][5568] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4" Sep 13 00:25:08.437760 containerd[1465]: time="2025-09-13T00:25:08.437156888Z" level=info msg="TearDown network for sandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\" successfully" Sep 13 00:25:08.444144 containerd[1465]: time="2025-09-13T00:25:08.443990043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:08.444144 containerd[1465]: time="2025-09-13T00:25:08.444093011Z" level=info msg="RemovePodSandbox \"9df690c7b5b57e8ce04abec0a4e9668f22a22f83a18dc0aef821d6ba52d7e5c4\" returns successfully" Sep 13 00:25:08.445678 containerd[1465]: time="2025-09-13T00:25:08.445362922Z" level=info msg="StopPodSandbox for \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\"" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.547 [WARNING][5592] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.547 [INFO][5592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.547 [INFO][5592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" iface="eth0" netns="" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.547 [INFO][5592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.547 [INFO][5592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.598 [INFO][5600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.599 [INFO][5600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.599 [INFO][5600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.614 [WARNING][5600] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.615 [INFO][5600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0" Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.619 [INFO][5600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:08.628008 containerd[1465]: 2025-09-13 00:25:08.623 [INFO][5592] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Sep 13 00:25:08.629912 containerd[1465]: time="2025-09-13T00:25:08.628075764Z" level=info msg="TearDown network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\" successfully" Sep 13 00:25:08.629912 containerd[1465]: time="2025-09-13T00:25:08.628113454Z" level=info msg="StopPodSandbox for \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\" returns successfully" Sep 13 00:25:08.629912 containerd[1465]: time="2025-09-13T00:25:08.629217383Z" level=info msg="RemovePodSandbox for \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\"" Sep 13 00:25:08.629912 containerd[1465]: time="2025-09-13T00:25:08.629252276Z" level=info msg="Forcibly stopping sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\"" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.724 [WARNING][5615] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.724 [INFO][5615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.725 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" iface="eth0" netns="" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.725 [INFO][5615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.725 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.820 [INFO][5622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.823 [INFO][5622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.823 [INFO][5622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.840 [WARNING][5622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.840 [INFO][5622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" HandleID="k8s-pod-network.7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-whisker--69f4f6c884--bhqhf-eth0" Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.845 [INFO][5622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:08.861293 containerd[1465]: 2025-09-13 00:25:08.855 [INFO][5615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd" Sep 13 00:25:08.861293 containerd[1465]: time="2025-09-13T00:25:08.860950738Z" level=info msg="TearDown network for sandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\" successfully" Sep 13 00:25:08.865467 containerd[1465]: time="2025-09-13T00:25:08.865311787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:08.866182 containerd[1465]: time="2025-09-13T00:25:08.866095710Z" level=info msg="RemovePodSandbox \"7a339d517f99aaac372e95e4b476ab19563e0e79ae86ff3be29fe865cceed7cd\" returns successfully" Sep 13 00:25:08.867286 containerd[1465]: time="2025-09-13T00:25:08.866749002Z" level=info msg="StopPodSandbox for \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\"" Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:08.957 [WARNING][5636] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a1f681a-96b5-4792-936c-830bdc4fc67f", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79", Pod:"csi-node-driver-gm62f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidecdea08d61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:08.958 [INFO][5636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:08.958 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" iface="eth0" netns="" Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:08.958 [INFO][5636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:08.958 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:09.033 [INFO][5643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:09.034 [INFO][5643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:09.034 [INFO][5643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:09.045 [WARNING][5643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:09.045 [INFO][5643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:09.048 [INFO][5643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:09.057982 containerd[1465]: 2025-09-13 00:25:09.054 [INFO][5636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:25:09.061259 containerd[1465]: time="2025-09-13T00:25:09.059987670Z" level=info msg="TearDown network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\" successfully" Sep 13 00:25:09.061259 containerd[1465]: time="2025-09-13T00:25:09.060044286Z" level=info msg="StopPodSandbox for \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\" returns successfully" Sep 13 00:25:09.061259 containerd[1465]: time="2025-09-13T00:25:09.060739407Z" level=info msg="RemovePodSandbox for \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\"" Sep 13 00:25:09.061259 containerd[1465]: time="2025-09-13T00:25:09.060775202Z" level=info msg="Forcibly stopping sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\"" Sep 13 00:25:09.134398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962629502.mount: Deactivated successfully. Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.143 [WARNING][5657] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a1f681a-96b5-4792-936c-830bdc4fc67f", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"8deac60a054c253f1b3a31fb2aa829041cc2b3ca527e27f2ae4261176c164a79", Pod:"csi-node-driver-gm62f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidecdea08d61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.144 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.145 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" iface="eth0" netns="" Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.145 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.145 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.240 [INFO][5665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.242 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.242 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.265 [WARNING][5665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.265 [INFO][5665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" HandleID="k8s-pod-network.e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-csi--node--driver--gm62f-eth0" Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.271 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:09.289469 containerd[1465]: 2025-09-13 00:25:09.275 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a" Sep 13 00:25:09.289469 containerd[1465]: time="2025-09-13T00:25:09.288687450Z" level=info msg="TearDown network for sandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\" successfully" Sep 13 00:25:09.296457 containerd[1465]: time="2025-09-13T00:25:09.296370312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:09.297121 containerd[1465]: time="2025-09-13T00:25:09.296990277Z" level=info msg="RemovePodSandbox \"e1b9b87829598247999c69b1ce68915550d94dd655fbada067d083d98f99a01a\" returns successfully" Sep 13 00:25:09.335715 containerd[1465]: time="2025-09-13T00:25:09.334595074Z" level=info msg="StopPodSandbox for \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\"" Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.490 [WARNING][5684] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0", GenerateName:"calico-apiserver-d86d44bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"eedd9846-66f8-4fbc-912d-f953222ec80b", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d86d44bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198", Pod:"calico-apiserver-d86d44bf-ff8mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali398d550cbde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.491 [INFO][5684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.491 [INFO][5684] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" iface="eth0" netns="" Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.491 [INFO][5684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.491 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.540 [INFO][5691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.541 [INFO][5691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.541 [INFO][5691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.550 [WARNING][5691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.550 [INFO][5691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.552 [INFO][5691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:09.558456 containerd[1465]: 2025-09-13 00:25:09.554 [INFO][5684] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:25:09.559425 containerd[1465]: time="2025-09-13T00:25:09.559004449Z" level=info msg="TearDown network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\" successfully" Sep 13 00:25:09.559425 containerd[1465]: time="2025-09-13T00:25:09.559036557Z" level=info msg="StopPodSandbox for \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\" returns successfully" Sep 13 00:25:09.569194 containerd[1465]: time="2025-09-13T00:25:09.569152461Z" level=info msg="RemovePodSandbox for \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\"" Sep 13 00:25:09.569194 containerd[1465]: time="2025-09-13T00:25:09.569194544Z" level=info msg="Forcibly stopping sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\"" Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.643 [WARNING][5705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0", GenerateName:"calico-apiserver-d86d44bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"eedd9846-66f8-4fbc-912d-f953222ec80b", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d86d44bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"c71635c47fe95b5ec8621c1a6deae6b8edaacfea07bec25691871f66f5359198", Pod:"calico-apiserver-d86d44bf-ff8mw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali398d550cbde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.643 [INFO][5705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.643 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" iface="eth0" netns="" Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.643 [INFO][5705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.643 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.695 [INFO][5712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.695 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.695 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.710 [WARNING][5712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.710 [INFO][5712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" HandleID="k8s-pod-network.7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--ff8mw-eth0" Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.715 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:09.730739 containerd[1465]: 2025-09-13 00:25:09.722 [INFO][5705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21" Sep 13 00:25:09.733047 containerd[1465]: time="2025-09-13T00:25:09.731837446Z" level=info msg="TearDown network for sandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\" successfully" Sep 13 00:25:09.740779 containerd[1465]: time="2025-09-13T00:25:09.740535947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:25:09.740779 containerd[1465]: time="2025-09-13T00:25:09.740654979Z" level=info msg="RemovePodSandbox \"7baab28ed7ab7118b2ad2f4797fa76723397bedfcd8ab54835bbcdc19330bf21\" returns successfully" Sep 13 00:25:10.679168 containerd[1465]: time="2025-09-13T00:25:10.677493565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:25:10.693179 containerd[1465]: time="2025-09-13T00:25:10.693125820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:10.709786 containerd[1465]: time="2025-09-13T00:25:10.709739407Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:10.712573 containerd[1465]: time="2025-09-13T00:25:10.712525183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:10.713812 containerd[1465]: time="2025-09-13T00:25:10.713756576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.928976175s" Sep 13 00:25:10.713977 containerd[1465]: time="2025-09-13T00:25:10.713822531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:25:10.816541 containerd[1465]: time="2025-09-13T00:25:10.816481081Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:25:11.037583 containerd[1465]: time="2025-09-13T00:25:11.036888320Z" level=info msg="CreateContainer within sandbox \"fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:25:11.148794 containerd[1465]: time="2025-09-13T00:25:11.148603630Z" level=info msg="CreateContainer within sandbox \"fc27081622a0d60d2dc5e14a3aef9d03a17f65f7a6db427707aab6e3c122db50\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"baca81e689c15b577f57ee2d858415ffceaec69ee7d407ab3204c3bfeff88104\"" Sep 13 00:25:11.153364 containerd[1465]: time="2025-09-13T00:25:11.153258950Z" level=info msg="StartContainer for \"baca81e689c15b577f57ee2d858415ffceaec69ee7d407ab3204c3bfeff88104\"" Sep 13 00:25:11.690659 systemd[1]: Started cri-containerd-baca81e689c15b577f57ee2d858415ffceaec69ee7d407ab3204c3bfeff88104.scope - libcontainer container baca81e689c15b577f57ee2d858415ffceaec69ee7d407ab3204c3bfeff88104. Sep 13 00:25:12.036770 containerd[1465]: time="2025-09-13T00:25:12.036620765Z" level=info msg="StartContainer for \"baca81e689c15b577f57ee2d858415ffceaec69ee7d407ab3204c3bfeff88104\" returns successfully" Sep 13 00:25:12.939913 kubelet[2498]: I0913 00:25:12.936565 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-nn2np" podStartSLOduration=29.141339258 podStartE2EDuration="46.917209934s" podCreationTimestamp="2025-09-13 00:24:26 +0000 UTC" firstStartedPulling="2025-09-13 00:24:53.01111494 +0000 UTC m=+47.893184099" lastFinishedPulling="2025-09-13 00:25:10.786985613 +0000 UTC m=+65.669054775" observedRunningTime="2025-09-13 00:25:12.8373921 +0000 UTC m=+67.719461267" watchObservedRunningTime="2025-09-13 00:25:12.917209934 +0000 UTC m=+67.799279100" Sep 13 00:25:13.436377 systemd[1]: Started sshd@8-143.198.134.88:22-139.178.68.195:38652.service - OpenSSH per-connection server daemon (139.178.68.195:38652). Sep 13 00:25:13.612873 sshd[5771]: Accepted publickey for core from 139.178.68.195 port 38652 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:13.614558 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:13.635678 systemd-logind[1449]: New session 9 of user core. Sep 13 00:25:13.640812 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:25:14.920720 sshd[5771]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:14.930613 systemd[1]: sshd@8-143.198.134.88:22-139.178.68.195:38652.service: Deactivated successfully. Sep 13 00:25:14.934874 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:25:14.941772 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:25:14.948102 systemd-logind[1449]: Removed session 9. Sep 13 00:25:15.095101 systemd[1]: run-containerd-runc-k8s.io-baca81e689c15b577f57ee2d858415ffceaec69ee7d407ab3204c3bfeff88104-runc.L05aJ3.mount: Deactivated successfully. 
Sep 13 00:25:15.102556 containerd[1465]: time="2025-09-13T00:25:15.102497694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:15.129387 containerd[1465]: time="2025-09-13T00:25:15.103885182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:25:15.130399 containerd[1465]: time="2025-09-13T00:25:15.130140986Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:15.164022 containerd[1465]: time="2025-09-13T00:25:15.163975531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:15.167275 containerd[1465]: time="2025-09-13T00:25:15.165527885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.348981215s" Sep 13 00:25:15.167275 containerd[1465]: time="2025-09-13T00:25:15.165570654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:25:15.339208 kubelet[2498]: I0913 00:25:15.339067 2498 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:25:15.353543 containerd[1465]: time="2025-09-13T00:25:15.353476853Z" level=info msg="CreateContainer within sandbox \"6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:25:15.459975 containerd[1465]: time="2025-09-13T00:25:15.459850783Z" level=info msg="CreateContainer within sandbox \"6f1b3b55472d8774fd5cc26a0f53dc40f7c1d7207eeab6b9d39d2b965a48e0dc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f35463d933f1e5115f01e66358de88223bb245ff224dd8a0442247c964f4640c\"" Sep 13 00:25:15.486296 containerd[1465]: time="2025-09-13T00:25:15.486259118Z" level=info msg="StartContainer for \"f35463d933f1e5115f01e66358de88223bb245ff224dd8a0442247c964f4640c\"" Sep 13 00:25:15.706733 systemd[1]: Started cri-containerd-f35463d933f1e5115f01e66358de88223bb245ff224dd8a0442247c964f4640c.scope - libcontainer container f35463d933f1e5115f01e66358de88223bb245ff224dd8a0442247c964f4640c. 
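The kube-controllers pull above reports bytes read=51277746 against a completion time of 4.348981215s, which gives a rough transfer rate of about 11.8 MB/s. Which of containerd's size figures (bytes read vs. the "52770417" repo-digest size) best reflects wire bytes is an assumption here; the snippet just shows the division:

package main

import "fmt"

func main() {
	const bytesRead = 51277746            // "bytes read=51277746"
	const pullSeconds = 4.348981215       // "in 4.348981215s"
	mbps := bytesRead / pullSeconds / 1e6 // decimal megabytes per second
	fmt.Printf("~%.1f MB/s\n", mbps)      // ~11.8 MB/s
}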
Sep 13 00:25:15.790456 containerd[1465]: time="2025-09-13T00:25:15.790200365Z" level=info msg="StartContainer for \"f35463d933f1e5115f01e66358de88223bb245ff224dd8a0442247c964f4640c\" returns successfully" Sep 13 00:25:16.056852 kubelet[2498]: I0913 00:25:16.056679 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-856fbd7bbd-gmpj2" podStartSLOduration=27.949665816 podStartE2EDuration="50.056652139s" podCreationTimestamp="2025-09-13 00:24:26 +0000 UTC" firstStartedPulling="2025-09-13 00:24:53.093116668 +0000 UTC m=+47.975185813" lastFinishedPulling="2025-09-13 00:25:15.200102991 +0000 UTC m=+70.082172136" observedRunningTime="2025-09-13 00:25:16.056477979 +0000 UTC m=+70.938547147" watchObservedRunningTime="2025-09-13 00:25:16.056652139 +0000 UTC m=+70.938721305" Sep 13 00:25:16.097342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357070374.mount: Deactivated successfully. Sep 13 00:25:17.452900 containerd[1465]: time="2025-09-13T00:25:17.452562853Z" level=info msg="StopContainer for \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\" with timeout 30 (s)" Sep 13 00:25:17.453720 containerd[1465]: time="2025-09-13T00:25:17.453631402Z" level=info msg="Stop container \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\" with signal terminated" Sep 13 00:25:17.485629 systemd[1]: cri-containerd-ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b.scope: Deactivated successfully. Sep 13 00:25:17.538314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b-rootfs.mount: Deactivated successfully. Sep 13 00:25:17.577516 systemd[1]: Created slice kubepods-besteffort-podf6c8ed14_df3f_4560_bf0d_fb2f765e64c3.slice - libcontainer container kubepods-besteffort-podf6c8ed14_df3f_4560_bf0d_fb2f765e64c3.slice. 
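The "StopContainer ... with timeout 30 (s)" / "Stop container ... with signal terminated" pair above is the CRI graceful-stop contract in action: deliver SIGTERM, wait out the grace period, then escalate to SIGKILL. A generic Go sketch of that escalation (the outline of the behavior, not containerd's implementation; the short grace period here is just for demonstration):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		fmt.Println("exited within grace period")
		return err
	case <-time.After(grace):
		fmt.Println("grace period elapsed; sending SIGKILL")
		return cmd.Process.Kill()
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopWithTimeout(cmd, 2*time.Second) // the logged timeout was 30s
}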
Sep 13 00:25:17.582939 kubelet[2498]: I0913 00:25:17.581560 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcxzh\" (UniqueName: \"kubernetes.io/projected/f6c8ed14-df3f-4560-bf0d-fb2f765e64c3-kube-api-access-xcxzh\") pod \"calico-apiserver-d86d44bf-8lstp\" (UID: \"f6c8ed14-df3f-4560-bf0d-fb2f765e64c3\") " pod="calico-apiserver/calico-apiserver-d86d44bf-8lstp" Sep 13 00:25:17.582939 kubelet[2498]: I0913 00:25:17.581681 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6c8ed14-df3f-4560-bf0d-fb2f765e64c3-calico-apiserver-certs\") pod \"calico-apiserver-d86d44bf-8lstp\" (UID: \"f6c8ed14-df3f-4560-bf0d-fb2f765e64c3\") " pod="calico-apiserver/calico-apiserver-d86d44bf-8lstp" Sep 13 00:25:17.607984 containerd[1465]: time="2025-09-13T00:25:17.554781401Z" level=info msg="shim disconnected" id=ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b namespace=k8s.io Sep 13 00:25:17.608255 containerd[1465]: time="2025-09-13T00:25:17.608225616Z" level=warning msg="cleaning up after shim disconnected" id=ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b namespace=k8s.io Sep 13 00:25:17.608318 containerd[1465]: time="2025-09-13T00:25:17.608306664Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:25:17.643656 containerd[1465]: time="2025-09-13T00:25:17.643567017Z" level=info msg="StopContainer for \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\" returns successfully" Sep 13 00:25:17.662576 containerd[1465]: time="2025-09-13T00:25:17.662510536Z" level=info msg="StopPodSandbox for \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\"" Sep 13 00:25:17.662576 containerd[1465]: time="2025-09-13T00:25:17.662582796Z" level=info msg="Container to stop \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:25:17.668162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790-shm.mount: Deactivated successfully. Sep 13 00:25:17.678574 systemd[1]: cri-containerd-43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790.scope: Deactivated successfully. Sep 13 00:25:17.731465 containerd[1465]: time="2025-09-13T00:25:17.729952837Z" level=info msg="shim disconnected" id=43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790 namespace=k8s.io Sep 13 00:25:17.731465 containerd[1465]: time="2025-09-13T00:25:17.730006224Z" level=warning msg="cleaning up after shim disconnected" id=43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790 namespace=k8s.io Sep 13 00:25:17.731465 containerd[1465]: time="2025-09-13T00:25:17.730014825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:25:17.734456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790-rootfs.mount: Deactivated successfully. 
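The message above, "Container to stop ... must be in running or unknown state, current state \"CONTAINER_EXITED\"", records a precondition check during sandbox teardown: only running or unknown containers receive a stop request, while already-exited ones are skipped rather than treated as errors, keeping StopPodSandbox idempotent. A sketch of that guard:

package main

import "fmt"

type containerState int

const (
	containerRunning containerState = iota
	containerExited
	containerUnknown
)

func maybeStop(id string, state containerState) {
	switch state {
	case containerRunning, containerUnknown:
		fmt.Println("stopping", id)
	case containerExited:
		fmt.Printf("container %s already exited; nothing to stop\n", id)
	}
}

func main() {
	maybeStop("ec6030d0e012", containerExited)
}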
Sep 13 00:25:17.759961 containerd[1465]: time="2025-09-13T00:25:17.759910593Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:25:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:25:17.886459 containerd[1465]: time="2025-09-13T00:25:17.886389052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d86d44bf-8lstp,Uid:f6c8ed14-df3f-4560-bf0d-fb2f765e64c3,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:25:18.016082 kubelet[2498]: I0913 00:25:18.015497 2498 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:25:18.068235 systemd-networkd[1378]: cali9e5db3ddf93: Link DOWN Sep 13 00:25:18.068258 systemd-networkd[1378]: cali9e5db3ddf93: Lost carrier Sep 13 00:25:18.397877 systemd-networkd[1378]: cali55427dff653: Link UP Sep 13 00:25:18.399509 systemd-networkd[1378]: cali55427dff653: Gained carrier Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.018 [INFO][5988] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0 calico-apiserver-d86d44bf- calico-apiserver f6c8ed14-df3f-4560-bf0d-fb2f765e64c3 1233 0 2025-09-13 00:25:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d86d44bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-9b8e9ee716 calico-apiserver-d86d44bf-8lstp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali55427dff653 [] [] }} ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-8lstp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.021 [INFO][5988] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-8lstp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.294 [INFO][6001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" HandleID="k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.297 [INFO][6001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" HandleID="k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c4d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-9b8e9ee716", "pod":"calico-apiserver-d86d44bf-8lstp", "timestamp":"2025-09-13 00:25:18.294451912 +0000 UTC"}, 
Hostname:"ci-4081.3.5-n-9b8e9ee716", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.297 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.298 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.298 [INFO][6001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-9b8e9ee716' Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.315 [INFO][6001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.329 [INFO][6001] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.340 [INFO][6001] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.343 [INFO][6001] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.347 [INFO][6001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.347 [INFO][6001] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.350 [INFO][6001] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.356 [INFO][6001] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.370 [INFO][6001] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.31.138/26] block=192.168.31.128/26 handle="k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.370 [INFO][6001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.138/26] handle="k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" host="ci-4081.3.5-n-9b8e9ee716" Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.370 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:25:18.441037 containerd[1465]: 2025-09-13 00:25:18.370 [INFO][6001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.138/26] IPv6=[] ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" HandleID="k8s-pod-network.b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" Sep 13 00:25:18.446030 containerd[1465]: 2025-09-13 00:25:18.379 [INFO][5988] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-8lstp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0", GenerateName:"calico-apiserver-d86d44bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6c8ed14-df3f-4560-bf0d-fb2f765e64c3", ResourceVersion:"1233", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d86d44bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"", Pod:"calico-apiserver-d86d44bf-8lstp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55427dff653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:18.446030 containerd[1465]: 2025-09-13 00:25:18.380 [INFO][5988] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.138/32] ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-8lstp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" Sep 13 00:25:18.446030 containerd[1465]: 2025-09-13 00:25:18.380 [INFO][5988] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55427dff653 ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-8lstp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" Sep 13 00:25:18.446030 containerd[1465]: 2025-09-13 00:25:18.401 [INFO][5988] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-8lstp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" Sep 13 00:25:18.446030 containerd[1465]: 2025-09-13 00:25:18.404 [INFO][5988] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-8lstp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0", GenerateName:"calico-apiserver-d86d44bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6c8ed14-df3f-4560-bf0d-fb2f765e64c3", ResourceVersion:"1233", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d86d44bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-9b8e9ee716", ContainerID:"b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd", Pod:"calico-apiserver-d86d44bf-8lstp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali55427dff653", MAC:"7a:1b:95:d8:b5:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:18.446030 containerd[1465]: 2025-09-13 00:25:18.432 [INFO][5988] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd" Namespace="calico-apiserver" Pod="calico-apiserver-d86d44bf-8lstp" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--d86d44bf--8lstp-eth0" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.032 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.032 [INFO][5983] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" iface="eth0" netns="/var/run/netns/cni-e131eb9e-2b2c-7dce-cae6-3d71a1fe207c" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.034 [INFO][5983] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" iface="eth0" netns="/var/run/netns/cni-e131eb9e-2b2c-7dce-cae6-3d71a1fe207c" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.067 [INFO][5983] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" after=34.464089ms iface="eth0" netns="/var/run/netns/cni-e131eb9e-2b2c-7dce-cae6-3d71a1fe207c" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.067 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.067 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.300 [INFO][6005] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.301 [INFO][6005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.370 [INFO][6005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.471 [INFO][6005] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.471 [INFO][6005] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.475 [INFO][6005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:18.495748 containerd[1465]: 2025-09-13 00:25:18.483 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:25:18.498690 containerd[1465]: time="2025-09-13T00:25:18.497569286Z" level=info msg="TearDown network for sandbox \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\" successfully" Sep 13 00:25:18.498690 containerd[1465]: time="2025-09-13T00:25:18.497610580Z" level=info msg="StopPodSandbox for \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\" returns successfully" Sep 13 00:25:18.517532 containerd[1465]: time="2025-09-13T00:25:18.514570369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:25:18.517532 containerd[1465]: time="2025-09-13T00:25:18.514671330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:25:18.517532 containerd[1465]: time="2025-09-13T00:25:18.514691668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:25:18.517532 containerd[1465]: time="2025-09-13T00:25:18.514900110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:25:18.546864 systemd[1]: run-netns-cni\x2de131eb9e\x2d2b2c\x2d7dce\x2dcae6\x2d3d71a1fe207c.mount: Deactivated successfully. Sep 13 00:25:18.568357 systemd[1]: Started cri-containerd-b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd.scope - libcontainer container b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd. Sep 13 00:25:18.646329 containerd[1465]: time="2025-09-13T00:25:18.646199408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d86d44bf-8lstp,Uid:f6c8ed14-df3f-4560-bf0d-fb2f765e64c3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd\"" Sep 13 00:25:18.651616 containerd[1465]: time="2025-09-13T00:25:18.651413730Z" level=info msg="CreateContainer within sandbox \"b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:25:18.669891 containerd[1465]: time="2025-09-13T00:25:18.669806397Z" level=info msg="CreateContainer within sandbox \"b1d63d934dea74cc8ec06862863fe144166c66fb4199a0774aaff0f9decea5bd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1b999eda95ae33125f1bc1a30310234ab787512879e783839d211c25282f8493\"" Sep 13 00:25:18.674586 containerd[1465]: time="2025-09-13T00:25:18.671743883Z" level=info msg="StartContainer for \"1b999eda95ae33125f1bc1a30310234ab787512879e783839d211c25282f8493\"" Sep 13 00:25:18.696563 kubelet[2498]: I0913 00:25:18.695995 2498 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9g9w\" (UniqueName: \"kubernetes.io/projected/88fd7908-d362-45b9-9c05-84c56d420f5b-kube-api-access-q9g9w\") pod \"88fd7908-d362-45b9-9c05-84c56d420f5b\" (UID: \"88fd7908-d362-45b9-9c05-84c56d420f5b\") " Sep 13 00:25:18.696563 kubelet[2498]: I0913 00:25:18.696141 2498 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/88fd7908-d362-45b9-9c05-84c56d420f5b-calico-apiserver-certs\") pod \"88fd7908-d362-45b9-9c05-84c56d420f5b\" (UID: \"88fd7908-d362-45b9-9c05-84c56d420f5b\") " Sep 13 00:25:18.729557 kubelet[2498]: I0913 00:25:18.719625 2498 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fd7908-d362-45b9-9c05-84c56d420f5b-kube-api-access-q9g9w" (OuterVolumeSpecName: "kube-api-access-q9g9w") pod "88fd7908-d362-45b9-9c05-84c56d420f5b" (UID: "88fd7908-d362-45b9-9c05-84c56d420f5b"). InnerVolumeSpecName "kube-api-access-q9g9w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:25:18.739056 kubelet[2498]: I0913 00:25:18.738985 2498 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fd7908-d362-45b9-9c05-84c56d420f5b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "88fd7908-d362-45b9-9c05-84c56d420f5b" (UID: "88fd7908-d362-45b9-9c05-84c56d420f5b"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:25:18.754999 systemd[1]: Started cri-containerd-1b999eda95ae33125f1bc1a30310234ab787512879e783839d211c25282f8493.scope - libcontainer container 1b999eda95ae33125f1bc1a30310234ab787512879e783839d211c25282f8493. Sep 13 00:25:18.797620 kubelet[2498]: I0913 00:25:18.797572 2498 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q9g9w\" (UniqueName: \"kubernetes.io/projected/88fd7908-d362-45b9-9c05-84c56d420f5b-kube-api-access-q9g9w\") on node \"ci-4081.3.5-n-9b8e9ee716\" DevicePath \"\"" Sep 13 00:25:18.797620 kubelet[2498]: I0913 00:25:18.797605 2498 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/88fd7908-d362-45b9-9c05-84c56d420f5b-calico-apiserver-certs\") on node \"ci-4081.3.5-n-9b8e9ee716\" DevicePath \"\"" Sep 13 00:25:18.830642 containerd[1465]: time="2025-09-13T00:25:18.830593771Z" level=info msg="StartContainer for \"1b999eda95ae33125f1bc1a30310234ab787512879e783839d211c25282f8493\" returns successfully" Sep 13 00:25:19.036592 systemd[1]: Removed slice kubepods-besteffort-pod88fd7908_d362_45b9_9c05_84c56d420f5b.slice - libcontainer container kubepods-besteffort-pod88fd7908_d362_45b9_9c05_84c56d420f5b.slice. Sep 13 00:25:19.066178 kubelet[2498]: I0913 00:25:19.065344 2498 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d86d44bf-8lstp" podStartSLOduration=2.065316445 podStartE2EDuration="2.065316445s" podCreationTimestamp="2025-09-13 00:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:19.063417022 +0000 UTC m=+73.945486190" watchObservedRunningTime="2025-09-13 00:25:19.065316445 +0000 UTC m=+73.947385611" Sep 13 00:25:19.262865 kubelet[2498]: I0913 00:25:19.262657 2498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fd7908-d362-45b9-9c05-84c56d420f5b" path="/var/lib/kubelet/pods/88fd7908-d362-45b9-9c05-84c56d420f5b/volumes" Sep 13 00:25:19.528131 systemd-networkd[1378]: cali55427dff653: Gained IPv6LL Sep 13 00:25:19.538357 systemd[1]: var-lib-kubelet-pods-88fd7908\x2dd362\x2d45b9\x2d9c05\x2d84c56d420f5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq9g9w.mount: Deactivated successfully. Sep 13 00:25:19.538508 systemd[1]: var-lib-kubelet-pods-88fd7908\x2dd362\x2d45b9\x2d9c05\x2d84c56d420f5b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:25:19.944187 systemd[1]: Started sshd@9-143.198.134.88:22-139.178.68.195:37862.service - OpenSSH per-connection server daemon (139.178.68.195:37862). Sep 13 00:25:20.109186 sshd[6120]: Accepted publickey for core from 139.178.68.195 port 37862 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:20.112673 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:20.123747 systemd-logind[1449]: New session 10 of user core. Sep 13 00:25:20.129783 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:25:20.989621 sshd[6120]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:21.003103 systemd[1]: sshd@9-143.198.134.88:22-139.178.68.195:37862.service: Deactivated successfully. Sep 13 00:25:21.007066 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:25:21.010278 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. 
Sep 13 00:25:21.022097 systemd[1]: Started sshd@10-143.198.134.88:22-139.178.68.195:37878.service - OpenSSH per-connection server daemon (139.178.68.195:37878). Sep 13 00:25:21.024739 systemd-logind[1449]: Removed session 10. Sep 13 00:25:21.096732 sshd[6135]: Accepted publickey for core from 139.178.68.195 port 37878 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:21.099969 sshd[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:21.107759 systemd-logind[1449]: New session 11 of user core. Sep 13 00:25:21.113767 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:25:21.263251 containerd[1465]: time="2025-09-13T00:25:21.262707171Z" level=info msg="StopContainer for \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\" with timeout 30 (s)" Sep 13 00:25:21.269068 containerd[1465]: time="2025-09-13T00:25:21.268916445Z" level=info msg="Stop container \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\" with signal terminated" Sep 13 00:25:21.374506 systemd[1]: cri-containerd-df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194.scope: Deactivated successfully. Sep 13 00:25:21.375268 systemd[1]: cri-containerd-df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194.scope: Consumed 1.556s CPU time. Sep 13 00:25:21.459165 containerd[1465]: time="2025-09-13T00:25:21.457614803Z" level=info msg="shim disconnected" id=df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194 namespace=k8s.io Sep 13 00:25:21.459165 containerd[1465]: time="2025-09-13T00:25:21.457680999Z" level=warning msg="cleaning up after shim disconnected" id=df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194 namespace=k8s.io Sep 13 00:25:21.459165 containerd[1465]: time="2025-09-13T00:25:21.457692216Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:25:21.460485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194-rootfs.mount: Deactivated successfully. Sep 13 00:25:21.548625 containerd[1465]: time="2025-09-13T00:25:21.544825988Z" level=info msg="StopContainer for \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\" returns successfully" Sep 13 00:25:21.548625 containerd[1465]: time="2025-09-13T00:25:21.546033719Z" level=info msg="StopPodSandbox for \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\"" Sep 13 00:25:21.548625 containerd[1465]: time="2025-09-13T00:25:21.546095123Z" level=info msg="Container to stop \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:25:21.554311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2-shm.mount: Deactivated successfully. Sep 13 00:25:21.572637 systemd[1]: cri-containerd-7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2.scope: Deactivated successfully. Sep 13 00:25:21.590548 sshd[6135]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:21.604555 systemd[1]: sshd@10-143.198.134.88:22-139.178.68.195:37878.service: Deactivated successfully. Sep 13 00:25:21.613691 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:25:21.630955 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. 
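
The "StopContainer ... with timeout 30 (s)" and "Stop container ... with signal terminated" entries above follow the usual graceful-stop shape: deliver SIGTERM, wait up to the timeout, escalate to SIGKILL if the process is still alive. A sketch of that generic pattern under those assumptions; this is the common Unix idiom, not containerd's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithTimeout models the behaviour behind "StopContainer ... with
    // timeout 30 (s)": SIGTERM first, then SIGKILL once the timeout expires.
    func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited on its own after SIGTERM
        case <-time.After(timeout):
            _ = cmd.Process.Kill() // escalate to SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "300") // stand-in for a container process
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println("stop result:", stopWithTimeout(cmd, 30*time.Second))
    }
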
Sep 13 00:25:21.632174 systemd[1]: Started sshd@11-143.198.134.88:22-139.178.68.195:37892.service - OpenSSH per-connection server daemon (139.178.68.195:37892). Sep 13 00:25:21.647029 systemd-logind[1449]: Removed session 11. Sep 13 00:25:21.709950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2-rootfs.mount: Deactivated successfully. Sep 13 00:25:21.715456 containerd[1465]: time="2025-09-13T00:25:21.714193552Z" level=info msg="shim disconnected" id=7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2 namespace=k8s.io Sep 13 00:25:21.715456 containerd[1465]: time="2025-09-13T00:25:21.714271104Z" level=warning msg="cleaning up after shim disconnected" id=7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2 namespace=k8s.io Sep 13 00:25:21.715456 containerd[1465]: time="2025-09-13T00:25:21.714284043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:25:21.764935 sshd[6196]: Accepted publickey for core from 139.178.68.195 port 37892 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:21.768467 sshd[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:21.778772 systemd-logind[1449]: New session 12 of user core. Sep 13 00:25:21.785111 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:25:21.840016 systemd-networkd[1378]: caliab305a2645b: Link DOWN Sep 13 00:25:21.840025 systemd-networkd[1378]: caliab305a2645b: Lost carrier Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.836 [INFO][6225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.836 [INFO][6225] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" iface="eth0" netns="/var/run/netns/cni-5ec70e50-3aca-2669-0469-9c39e5fd74b9" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.837 [INFO][6225] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" iface="eth0" netns="/var/run/netns/cni-5ec70e50-3aca-2669-0469-9c39e5fd74b9" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.852 [INFO][6225] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" after=15.939138ms iface="eth0" netns="/var/run/netns/cni-5ec70e50-3aca-2669-0469-9c39e5fd74b9" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.852 [INFO][6225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.852 [INFO][6225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.901 [INFO][6238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.901 [INFO][6238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.901 [INFO][6238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.956 [INFO][6238] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.956 [INFO][6238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.960 [INFO][6238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:21.968370 containerd[1465]: 2025-09-13 00:25:21.963 [INFO][6225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:25:21.973900 containerd[1465]: time="2025-09-13T00:25:21.972566078Z" level=info msg="TearDown network for sandbox \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\" successfully" Sep 13 00:25:21.973900 containerd[1465]: time="2025-09-13T00:25:21.972609406Z" level=info msg="StopPodSandbox for \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\" returns successfully" Sep 13 00:25:21.978221 systemd[1]: run-netns-cni\x2d5ec70e50\x2d3aca\x2d2669\x2d0469\x2d9c39e5fd74b9.mount: Deactivated successfully. Sep 13 00:25:21.989741 sshd[6196]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:21.996426 systemd[1]: sshd@11-143.198.134.88:22-139.178.68.195:37892.service: Deactivated successfully. Sep 13 00:25:22.004150 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:25:22.006952 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:25:22.008517 systemd-logind[1449]: Removed session 12. 
Sep 13 00:25:22.027045 kubelet[2498]: I0913 00:25:22.026990 2498 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c14c5f57-0bd2-4e4c-bbc8-39406c393d42-calico-apiserver-certs\") pod \"c14c5f57-0bd2-4e4c-bbc8-39406c393d42\" (UID: \"c14c5f57-0bd2-4e4c-bbc8-39406c393d42\") " Sep 13 00:25:22.027045 kubelet[2498]: I0913 00:25:22.027039 2498 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mds4x\" (UniqueName: \"kubernetes.io/projected/c14c5f57-0bd2-4e4c-bbc8-39406c393d42-kube-api-access-mds4x\") pod \"c14c5f57-0bd2-4e4c-bbc8-39406c393d42\" (UID: \"c14c5f57-0bd2-4e4c-bbc8-39406c393d42\") " Sep 13 00:25:22.039141 systemd[1]: var-lib-kubelet-pods-c14c5f57\x2d0bd2\x2d4e4c\x2dbbc8\x2d39406c393d42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmds4x.mount: Deactivated successfully. Sep 13 00:25:22.041734 kubelet[2498]: I0913 00:25:22.039873 2498 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14c5f57-0bd2-4e4c-bbc8-39406c393d42-kube-api-access-mds4x" (OuterVolumeSpecName: "kube-api-access-mds4x") pod "c14c5f57-0bd2-4e4c-bbc8-39406c393d42" (UID: "c14c5f57-0bd2-4e4c-bbc8-39406c393d42"). InnerVolumeSpecName "kube-api-access-mds4x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:25:22.042327 kubelet[2498]: I0913 00:25:22.042257 2498 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14c5f57-0bd2-4e4c-bbc8-39406c393d42-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c14c5f57-0bd2-4e4c-bbc8-39406c393d42" (UID: "c14c5f57-0bd2-4e4c-bbc8-39406c393d42"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:25:22.100015 systemd[1]: Removed slice kubepods-besteffort-podc14c5f57_0bd2_4e4c_bbc8_39406c393d42.slice - libcontainer container kubepods-besteffort-podc14c5f57_0bd2_4e4c_bbc8_39406c393d42.slice. Sep 13 00:25:22.101277 systemd[1]: kubepods-besteffort-podc14c5f57_0bd2_4e4c_bbc8_39406c393d42.slice: Consumed 1.590s CPU time. 
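
The mount unit names above (var-lib-kubelet-pods-c14c5f57\x2d0bd2\x2d...-kube\x2dapi\x2daccess\x2dmds4x.mount) show systemd's path escaping: the leading "/" is stripped, path separators become "-", and unsafe bytes inside each component are hex-escaped, so "-" becomes \x2d and "~" becomes \x7e. A simplified sketch of that rule, handling only the two escapes visible in this log (systemd-escape(1) implements the full rule set):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath mimics how systemd turns a filesystem path into a unit
    // name, as seen in the mount units above. Only "-" and "~" are escaped
    // here for brevity; the real escaping covers more characters.
    func escapePath(path string) string {
        var b strings.Builder
        for _, part := range strings.Split(strings.Trim(path, "/"), "/") {
            if b.Len() > 0 {
                b.WriteByte('-') // "/" separators become "-"
            }
            for i := 0; i < len(part); i++ {
                switch c := part[i]; c {
                case '-', '~':
                    fmt.Fprintf(&b, `\x%02x`, c)
                default:
                    b.WriteByte(c)
                }
            }
        }
        return b.String()
    }

    func main() {
        p := "/var/lib/kubelet/pods/c14c5f57-0bd2-4e4c-bbc8-39406c393d42/volumes/kubernetes.io~projected/kube-api-access-mds4x"
        fmt.Println(escapePath(p) + ".mount") // matches the unit name in the log
    }
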
Sep 13 00:25:22.105788 kubelet[2498]: I0913 00:25:22.105724 2498 scope.go:117] "RemoveContainer" containerID="df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194" Sep 13 00:25:22.129585 kubelet[2498]: I0913 00:25:22.128769 2498 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c14c5f57-0bd2-4e4c-bbc8-39406c393d42-calico-apiserver-certs\") on node \"ci-4081.3.5-n-9b8e9ee716\" DevicePath \"\"" Sep 13 00:25:22.129585 kubelet[2498]: I0913 00:25:22.128831 2498 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mds4x\" (UniqueName: \"kubernetes.io/projected/c14c5f57-0bd2-4e4c-bbc8-39406c393d42-kube-api-access-mds4x\") on node \"ci-4081.3.5-n-9b8e9ee716\" DevicePath \"\"" Sep 13 00:25:22.171259 containerd[1465]: time="2025-09-13T00:25:22.171195065Z" level=info msg="RemoveContainer for \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\"" Sep 13 00:25:22.178112 containerd[1465]: time="2025-09-13T00:25:22.178042466Z" level=info msg="RemoveContainer for \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\" returns successfully" Sep 13 00:25:22.186184 kubelet[2498]: I0913 00:25:22.185983 2498 scope.go:117] "RemoveContainer" containerID="df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194" Sep 13 00:25:22.231008 containerd[1465]: time="2025-09-13T00:25:22.205827513Z" level=error msg="ContainerStatus for \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\": not found" Sep 13 00:25:22.287391 kubelet[2498]: E0913 00:25:22.285733 2498 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\": not found" containerID="df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194" Sep 13 00:25:22.287391 kubelet[2498]: I0913 00:25:22.285890 2498 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194"} err="failed to get container status \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\": rpc error: code = NotFound desc = an error occurred when try to find container \"df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194\": not found" Sep 13 00:25:22.456141 systemd[1]: var-lib-kubelet-pods-c14c5f57\x2d0bd2\x2d4e4c\x2dbbc8\x2d39406c393d42-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:25:23.243676 kubelet[2498]: E0913 00:25:23.243618 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:25:23.247167 kubelet[2498]: I0913 00:25:23.246943 2498 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14c5f57-0bd2-4e4c-bbc8-39406c393d42" path="/var/lib/kubelet/pods/c14c5f57-0bd2-4e4c-bbc8-39406c393d42/volumes" Sep 13 00:25:27.011818 systemd[1]: Started sshd@12-143.198.134.88:22-139.178.68.195:37896.service - OpenSSH per-connection server daemon (139.178.68.195:37896). 
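
The sequence above, where RemoveContainer succeeds and the follow-up ContainerStatus call fails with rpc code = NotFound, is the expected shape of idempotent cleanup: once the container is gone, NotFound from the runtime is tolerable rather than fatal, which is why the kubelet logs the error and moves on. A sketch of that tolerance using the real gRPC status helpers (the remove callback is a hypothetical stand-in for a CRI call):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // deleteIgnoringNotFound treats a NotFound response as success, making
    // repeated cleanup attempts safe. A sketch of the pattern, not kubelet's
    // actual code.
    func deleteIgnoringNotFound(remove func(id string) error, id string) error {
        if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    func main() {
        // Simulated runtime that no longer knows the container.
        notFound := func(id string) error {
            return status.Error(codes.NotFound, "an error occurred when try to find container: not found")
        }
        fmt.Println(deleteIgnoringNotFound(notFound, "df77b454e72ffd51694b6a30742eff635d778e6a77ca937818f571e90d8cc194")) // <nil>
    }
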
Sep 13 00:25:27.168033 sshd[6285]: Accepted publickey for core from 139.178.68.195 port 37896 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:27.170603 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:27.180489 systemd-logind[1449]: New session 13 of user core. Sep 13 00:25:27.183742 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:25:27.672114 sshd[6285]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:27.677918 systemd[1]: sshd@12-143.198.134.88:22-139.178.68.195:37896.service: Deactivated successfully. Sep 13 00:25:27.681392 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:25:27.687681 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:25:27.689958 systemd-logind[1449]: Removed session 13. Sep 13 00:25:32.688847 systemd[1]: Started sshd@13-143.198.134.88:22-139.178.68.195:40790.service - OpenSSH per-connection server daemon (139.178.68.195:40790). Sep 13 00:25:32.823459 sshd[6308]: Accepted publickey for core from 139.178.68.195 port 40790 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:32.824684 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:32.829793 systemd-logind[1449]: New session 14 of user core. Sep 13 00:25:32.834775 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:25:33.279137 sshd[6308]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:33.287298 systemd[1]: sshd@13-143.198.134.88:22-139.178.68.195:40790.service: Deactivated successfully. Sep 13 00:25:33.291480 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:25:33.292972 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:25:33.294049 systemd-logind[1449]: Removed session 14. Sep 13 00:25:37.243711 kubelet[2498]: E0913 00:25:37.243664 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:25:38.242863 kubelet[2498]: E0913 00:25:38.242671 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:25:38.298824 systemd[1]: Started sshd@14-143.198.134.88:22-139.178.68.195:40806.service - OpenSSH per-connection server daemon (139.178.68.195:40806). Sep 13 00:25:38.366589 sshd[6327]: Accepted publickey for core from 139.178.68.195 port 40806 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:38.368674 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:38.375152 systemd-logind[1449]: New session 15 of user core. Sep 13 00:25:38.383714 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:25:38.568317 sshd[6327]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:38.573385 systemd[1]: sshd@14-143.198.134.88:22-139.178.68.195:40806.service: Deactivated successfully. Sep 13 00:25:38.577120 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:25:38.578176 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:25:38.579523 systemd-logind[1449]: Removed session 15. 
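
The recurring dns.go "Nameserver limits exceeded" errors above stem from glibc's resolver honouring at most three nameserver lines (MAXNS), so the kubelet truncates the list it applies to a pod and warns about the rest; note the applied line in this log even keeps a duplicate entry (67.207.67.3 appears twice), so the first three entries are taken as-is. A minimal sketch of that truncation, with a hypothetical node resolv.conf that would produce the warning:

    package main

    import (
        "fmt"
        "strings"
    )

    // glibc reads at most 3 "nameserver" lines, so anything past the first
    // three is dropped. Sketch of the truncation only, not kubelet's code.
    const maxNS = 3

    func applyNameservers(resolvConf string) []string {
        var ns []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" && len(ns) < maxNS {
                ns = append(ns, fields[1]) // kept as-is, duplicates included
            }
        }
        return ns
    }

    func main() {
        // Hypothetical node resolv.conf matching the applied line in the log.
        conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 8.8.8.8\n"
        fmt.Println(applyNameservers(conf)) // [67.207.67.3 67.207.67.2 67.207.67.3]
    }
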
Sep 13 00:25:39.343624 systemd[1]: run-containerd-runc-k8s.io-baca81e689c15b577f57ee2d858415ffceaec69ee7d407ab3204c3bfeff88104-runc.LeviiH.mount: Deactivated successfully. Sep 13 00:25:43.584025 systemd[1]: Started sshd@15-143.198.134.88:22-139.178.68.195:46226.service - OpenSSH per-connection server daemon (139.178.68.195:46226). Sep 13 00:25:43.702194 sshd[6364]: Accepted publickey for core from 139.178.68.195 port 46226 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:43.703815 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:43.708745 systemd-logind[1449]: New session 16 of user core. Sep 13 00:25:43.719725 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:25:44.227393 sshd[6364]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:44.239590 systemd[1]: sshd@15-143.198.134.88:22-139.178.68.195:46226.service: Deactivated successfully. Sep 13 00:25:44.243737 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:25:44.247952 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:25:44.257854 systemd[1]: Started sshd@16-143.198.134.88:22-139.178.68.195:46240.service - OpenSSH per-connection server daemon (139.178.68.195:46240). Sep 13 00:25:44.260292 systemd-logind[1449]: Removed session 16. Sep 13 00:25:44.320244 sshd[6377]: Accepted publickey for core from 139.178.68.195 port 46240 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:44.322747 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:44.329334 systemd-logind[1449]: New session 17 of user core. Sep 13 00:25:44.334740 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:25:44.836988 sshd[6377]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:44.850171 systemd[1]: sshd@16-143.198.134.88:22-139.178.68.195:46240.service: Deactivated successfully. Sep 13 00:25:44.852789 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:25:44.856245 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:25:44.866227 systemd[1]: Started sshd@17-143.198.134.88:22-139.178.68.195:46242.service - OpenSSH per-connection server daemon (139.178.68.195:46242). Sep 13 00:25:44.868656 systemd-logind[1449]: Removed session 17. Sep 13 00:25:44.946141 sshd[6388]: Accepted publickey for core from 139.178.68.195 port 46242 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:44.947055 sshd[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:44.953800 systemd-logind[1449]: New session 18 of user core. Sep 13 00:25:44.961726 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:25:45.248342 kubelet[2498]: E0913 00:25:45.247267 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:25:45.838942 sshd[6388]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:45.854138 systemd[1]: sshd@17-143.198.134.88:22-139.178.68.195:46242.service: Deactivated successfully. Sep 13 00:25:45.859287 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:25:45.862613 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. 
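
The sshd/systemd-logind entries above all follow one shape: "New session N of user core." paired with a later "Removed session N." When auditing a transcript like this one, pairing those two markers makes dangling sessions easy to spot. A small analysis aid over journal text (the excerpt below is copied verbatim from this log; the tool itself is not part of the system):

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    // Pair systemd-logind's open/close markers as seen throughout the log.
    var (
        newRe     = regexp.MustCompile(`New session (\d+) of user`)
        removedRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
        journal := `Sep 13 00:25:21.107759 systemd-logind[1449]: New session 11 of user core.
    Sep 13 00:25:21.647029 systemd-logind[1449]: Removed session 11.
    Sep 13 00:25:21.778772 systemd-logind[1449]: New session 12 of user core.` // excerpt

        open := map[string]bool{}
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            if m := newRe.FindStringSubmatch(sc.Text()); m != nil {
                open[m[1]] = true
            }
            if m := removedRe.FindStringSubmatch(sc.Text()); m != nil {
                delete(open, m[1])
            }
        }
        fmt.Println("sessions still open:", open) // map[12:true]
    }
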
Sep 13 00:25:45.872028 systemd[1]: Started sshd@18-143.198.134.88:22-139.178.68.195:46256.service - OpenSSH per-connection server daemon (139.178.68.195:46256). Sep 13 00:25:45.877042 systemd-logind[1449]: Removed session 18. Sep 13 00:25:45.987041 sshd[6428]: Accepted publickey for core from 139.178.68.195 port 46256 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:45.989665 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:45.996533 systemd-logind[1449]: New session 19 of user core. Sep 13 00:25:46.003776 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:25:46.821217 sshd[6428]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:46.838845 systemd[1]: sshd@18-143.198.134.88:22-139.178.68.195:46256.service: Deactivated successfully. Sep 13 00:25:46.843632 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:25:46.846851 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:25:46.859810 systemd[1]: Started sshd@19-143.198.134.88:22-139.178.68.195:46266.service - OpenSSH per-connection server daemon (139.178.68.195:46266). Sep 13 00:25:46.862863 systemd-logind[1449]: Removed session 19. Sep 13 00:25:46.932502 sshd[6459]: Accepted publickey for core from 139.178.68.195 port 46266 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:46.934975 sshd[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:46.945912 systemd-logind[1449]: New session 20 of user core. Sep 13 00:25:46.948767 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:25:47.106110 sshd[6459]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:47.110996 systemd[1]: sshd@19-143.198.134.88:22-139.178.68.195:46266.service: Deactivated successfully. Sep 13 00:25:47.113405 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:25:47.114918 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:25:47.115963 systemd-logind[1449]: Removed session 20. Sep 13 00:25:52.128978 systemd[1]: Started sshd@20-143.198.134.88:22-139.178.68.195:45718.service - OpenSSH per-connection server daemon (139.178.68.195:45718). Sep 13 00:25:52.193476 sshd[6493]: Accepted publickey for core from 139.178.68.195 port 45718 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:52.196142 sshd[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:52.201650 systemd-logind[1449]: New session 21 of user core. Sep 13 00:25:52.206051 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:25:52.399095 sshd[6493]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:52.404529 systemd[1]: sshd@20-143.198.134.88:22-139.178.68.195:45718.service: Deactivated successfully. Sep 13 00:25:52.409096 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:25:52.410785 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:25:52.414781 systemd-logind[1449]: Removed session 21. Sep 13 00:25:54.975895 systemd[1]: run-containerd-runc-k8s.io-94f43530bfaf47332dc7fa09135fbfeb3b7de77c20605892d05d715a22b4a18a-runc.2fX0hO.mount: Deactivated successfully. Sep 13 00:25:57.417809 systemd[1]: Started sshd@21-143.198.134.88:22-139.178.68.195:45730.service - OpenSSH per-connection server daemon (139.178.68.195:45730). 
Sep 13 00:25:57.519612 sshd[6529]: Accepted publickey for core from 139.178.68.195 port 45730 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:25:57.522573 sshd[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:57.528313 systemd-logind[1449]: New session 22 of user core. Sep 13 00:25:57.535724 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:25:57.874054 sshd[6529]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:57.878358 systemd[1]: sshd@21-143.198.134.88:22-139.178.68.195:45730.service: Deactivated successfully. Sep 13 00:25:57.883290 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:25:57.885767 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:25:57.887495 systemd-logind[1449]: Removed session 22. Sep 13 00:26:02.891679 systemd[1]: Started sshd@22-143.198.134.88:22-139.178.68.195:44128.service - OpenSSH per-connection server daemon (139.178.68.195:44128). Sep 13 00:26:02.948108 sshd[6544]: Accepted publickey for core from 139.178.68.195 port 44128 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:26:02.950106 sshd[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:02.955821 systemd-logind[1449]: New session 23 of user core. Sep 13 00:26:02.966726 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:26:03.268316 sshd[6544]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:03.276824 systemd[1]: sshd@22-143.198.134.88:22-139.178.68.195:44128.service: Deactivated successfully. Sep 13 00:26:03.282343 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:26:03.285258 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:26:03.286719 systemd-logind[1449]: Removed session 23. Sep 13 00:26:04.243135 kubelet[2498]: E0913 00:26:04.243021 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:26:07.243466 kubelet[2498]: E0913 00:26:07.242809 2498 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:26:08.291135 systemd[1]: Started sshd@23-143.198.134.88:22-139.178.68.195:44130.service - OpenSSH per-connection server daemon (139.178.68.195:44130). Sep 13 00:26:08.417503 sshd[6559]: Accepted publickey for core from 139.178.68.195 port 44130 ssh2: RSA SHA256:A0AAL4oOglAVyjHuH+5rrMM4QPePrEhraLgkZzfYjJc Sep 13 00:26:08.422654 sshd[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:08.432059 systemd-logind[1449]: New session 24 of user core. Sep 13 00:26:08.440723 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:26:09.245718 sshd[6559]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:09.253590 systemd[1]: sshd@23-143.198.134.88:22-139.178.68.195:44130.service: Deactivated successfully. Sep 13 00:26:09.257201 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:26:09.261747 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:26:09.264356 systemd-logind[1449]: Removed session 24. 
Sep 13 00:26:09.928308 kubelet[2498]: I0913 00:26:09.924466 2498 scope.go:117] "RemoveContainer" containerID="ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b" Sep 13 00:26:09.957195 containerd[1465]: time="2025-09-13T00:26:09.937141121Z" level=info msg="RemoveContainer for \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\"" Sep 13 00:26:09.982986 containerd[1465]: time="2025-09-13T00:26:09.982933810Z" level=info msg="RemoveContainer for \"ec6030d0e012b47a6cc3dec7bbce865c3f99f33bfe516e1df6b8fc07ecc2ee7b\" returns successfully" Sep 13 00:26:10.003756 containerd[1465]: time="2025-09-13T00:26:10.003715774Z" level=info msg="StopPodSandbox for \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\"" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.261 [WARNING][6580] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.263 [INFO][6580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.263 [INFO][6580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" iface="eth0" netns="" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.263 [INFO][6580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.263 [INFO][6580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.489 [INFO][6588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.493 [INFO][6588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.494 [INFO][6588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.511 [WARNING][6588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.511 [INFO][6588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.514 [INFO][6588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:10.522665 containerd[1465]: 2025-09-13 00:26:10.519 [INFO][6580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:26:10.539339 containerd[1465]: time="2025-09-13T00:26:10.539187829Z" level=info msg="TearDown network for sandbox \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\" successfully" Sep 13 00:26:10.539339 containerd[1465]: time="2025-09-13T00:26:10.539277263Z" level=info msg="StopPodSandbox for \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\" returns successfully" Sep 13 00:26:10.544662 containerd[1465]: time="2025-09-13T00:26:10.544609272Z" level=info msg="RemovePodSandbox for \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\"" Sep 13 00:26:10.582891 containerd[1465]: time="2025-09-13T00:26:10.581837110Z" level=info msg="Forcibly stopping sandbox \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\"" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.664 [WARNING][6602] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.664 [INFO][6602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.665 [INFO][6602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" iface="eth0" netns="" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.665 [INFO][6602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.665 [INFO][6602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.702 [INFO][6609] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.702 [INFO][6609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.702 [INFO][6609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.727 [WARNING][6609] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.727 [INFO][6609] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" HandleID="k8s-pod-network.7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--2qnrs-eth0" Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.730 [INFO][6609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:10.738722 containerd[1465]: 2025-09-13 00:26:10.734 [INFO][6602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2" Sep 13 00:26:10.741989 containerd[1465]: time="2025-09-13T00:26:10.739411982Z" level=info msg="TearDown network for sandbox \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\" successfully" Sep 13 00:26:10.782340 containerd[1465]: time="2025-09-13T00:26:10.782111000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:10.782340 containerd[1465]: time="2025-09-13T00:26:10.782250206Z" level=info msg="RemovePodSandbox \"7214e5c6eca746ddc6f95b94eab27ac71e7811b59d2224012973e2264cb702f2\" returns successfully" Sep 13 00:26:10.786312 containerd[1465]: time="2025-09-13T00:26:10.785685423Z" level=info msg="StopPodSandbox for \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\"" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.894 [WARNING][6624] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.895 [INFO][6624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.895 [INFO][6624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" iface="eth0" netns="" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.895 [INFO][6624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.895 [INFO][6624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.932 [INFO][6631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.933 [INFO][6631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.933 [INFO][6631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.941 [WARNING][6631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.942 [INFO][6631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.944 [INFO][6631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:10.954684 containerd[1465]: 2025-09-13 00:26:10.949 [INFO][6624] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:26:10.956136 containerd[1465]: time="2025-09-13T00:26:10.954744792Z" level=info msg="TearDown network for sandbox \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\" successfully" Sep 13 00:26:10.956136 containerd[1465]: time="2025-09-13T00:26:10.954778591Z" level=info msg="StopPodSandbox for \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\" returns successfully" Sep 13 00:26:10.956136 containerd[1465]: time="2025-09-13T00:26:10.955359782Z" level=info msg="RemovePodSandbox for \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\"" Sep 13 00:26:10.956136 containerd[1465]: time="2025-09-13T00:26:10.955391209Z" level=info msg="Forcibly stopping sandbox \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\"" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.047 [WARNING][6645] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" WorkloadEndpoint="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.047 [INFO][6645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.047 [INFO][6645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" iface="eth0" netns="" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.047 [INFO][6645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.047 [INFO][6645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.085 [INFO][6652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.085 [INFO][6652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.085 [INFO][6652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.101 [WARNING][6652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.102 [INFO][6652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" HandleID="k8s-pod-network.43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Workload="ci--4081.3.5--n--9b8e9ee716-k8s-calico--apiserver--66fc9d466c--fpvl5-eth0" Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.107 [INFO][6652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:11.118662 containerd[1465]: 2025-09-13 00:26:11.112 [INFO][6645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790" Sep 13 00:26:11.119354 containerd[1465]: time="2025-09-13T00:26:11.118640072Z" level=info msg="TearDown network for sandbox \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\" successfully" Sep 13 00:26:11.126541 containerd[1465]: time="2025-09-13T00:26:11.124711814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:26:11.126734 containerd[1465]: time="2025-09-13T00:26:11.126602348Z" level=info msg="RemovePodSandbox \"43a15dcb2e75a5026b8c1145e77ac1bc4e1e3a6bad1cfa85687b110c5ba89790\" returns successfully"