Sep 12 17:36:27.921951 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:36:27.921980 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:27.921993 kernel: BIOS-provided physical RAM map:
Sep 12 17:36:27.922000 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 17:36:27.922006 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 17:36:27.922013 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 17:36:27.922021 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 12 17:36:27.922027 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 12 17:36:27.922034 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:36:27.922043 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 17:36:27.922050 kernel: NX (Execute Disable) protection: active
Sep 12 17:36:27.922057 kernel: APIC: Static calls initialized
Sep 12 17:36:27.922069 kernel: SMBIOS 2.8 present.
Sep 12 17:36:27.922077 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 12 17:36:27.922085 kernel: Hypervisor detected: KVM
Sep 12 17:36:27.922096 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:36:27.922107 kernel: kvm-clock: using sched offset of 2818023897 cycles
Sep 12 17:36:27.924509 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:36:27.924537 kernel: tsc: Detected 2494.140 MHz processor
Sep 12 17:36:27.924547 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:36:27.924556 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:36:27.924564 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 12 17:36:27.924572 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 17:36:27.924581 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:36:27.924595 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:36:27.924603 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 12 17:36:27.924611 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:27.924619 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:27.924627 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:27.924635 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 12 17:36:27.924643 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:27.924651 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:27.924659 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:27.924670 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:36:27.924678 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 12 17:36:27.924686 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 12 17:36:27.924694 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 12 17:36:27.924702 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 12 17:36:27.924709 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 12 17:36:27.924717 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 12 17:36:27.924729 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 12 17:36:27.924740 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:36:27.924748 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:36:27.924757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 17:36:27.924765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 17:36:27.924779 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 12 17:36:27.924788 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 12 17:36:27.924799 kernel: Zone ranges:
Sep 12 17:36:27.924808 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:36:27.924816 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 12 17:36:27.924825 kernel: Normal empty
Sep 12 17:36:27.924833 kernel: Movable zone start for each node
Sep 12 17:36:27.924841 kernel: Early memory node ranges
Sep 12 17:36:27.924849 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 17:36:27.924858 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 12 17:36:27.924866 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 12 17:36:27.924877 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:36:27.924886 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:36:27.924897 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 12 17:36:27.924905 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:36:27.924914 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:36:27.924922 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:36:27.924930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:36:27.924939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:36:27.924947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:36:27.924958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:36:27.924966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:36:27.924974 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:36:27.924983 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:36:27.924991 kernel: TSC deadline timer available
Sep 12 17:36:27.925000 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:36:27.925008 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:36:27.925017 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 12 17:36:27.925028 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:36:27.925036 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:36:27.925047 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:36:27.925056 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:36:27.925064 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:36:27.925072 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:36:27.925080 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 12 17:36:27.925090 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:27.925099 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:36:27.925107 kernel: random: crng init done
Sep 12 17:36:27.925118 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:36:27.925126 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:36:27.925135 kernel: Fallback order for Node 0: 0
Sep 12 17:36:27.925143 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 12 17:36:27.925151 kernel: Policy zone: DMA32
Sep 12 17:36:27.925160 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:36:27.925168 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 12 17:36:27.925177 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:36:27.925188 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:36:27.925196 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:36:27.925204 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:36:27.925213 kernel: Dynamic Preempt: voluntary
Sep 12 17:36:27.925221 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:36:27.925231 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:36:27.925239 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:36:27.925248 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:36:27.925256 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:36:27.925264 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:36:27.925276 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:36:27.925284 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:36:27.925292 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:36:27.925300 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:36:27.925311 kernel: Console: colour VGA+ 80x25
Sep 12 17:36:27.925320 kernel: printk: console [tty0] enabled
Sep 12 17:36:27.925328 kernel: printk: console [ttyS0] enabled
Sep 12 17:36:27.925337 kernel: ACPI: Core revision 20230628
Sep 12 17:36:27.925345 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:36:27.925356 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:36:27.925364 kernel: x2apic enabled
Sep 12 17:36:27.925373 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:36:27.925381 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:36:27.925390 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 12 17:36:27.925398 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 12 17:36:27.925406 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 17:36:27.925415 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 17:36:27.925434 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:36:27.925443 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:36:27.925452 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:36:27.925461 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 12 17:36:27.926613 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:36:27.926628 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:36:27.926638 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 17:36:27.926648 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:36:27.926657 kernel: active return thunk: its_return_thunk
Sep 12 17:36:27.926677 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:36:27.926687 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:36:27.926696 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:36:27.926705 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:36:27.926714 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:36:27.926723 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 17:36:27.926732 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:36:27.926742 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:36:27.926753 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:36:27.926764 kernel: landlock: Up and running.
Sep 12 17:36:27.926778 kernel: SELinux: Initializing.
Sep 12 17:36:27.926789 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:36:27.926798 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:36:27.926812 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 12 17:36:27.926822 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:36:27.926831 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:36:27.926840 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:36:27.926852 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 12 17:36:27.926861 kernel: signal: max sigframe size: 1776
Sep 12 17:36:27.926870 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:36:27.926879 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:36:27.926889 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:36:27.926898 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:36:27.926906 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:36:27.926915 kernel: .... node #0, CPUs: #1
Sep 12 17:36:27.926927 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:36:27.926939 kernel: smpboot: Max logical packages: 1
Sep 12 17:36:27.926948 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 12 17:36:27.926957 kernel: devtmpfs: initialized
Sep 12 17:36:27.926966 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:36:27.926975 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:36:27.926984 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:36:27.926993 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:36:27.927001 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:36:27.927010 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:36:27.927022 kernel: audit: type=2000 audit(1757698586.364:1): state=initialized audit_enabled=0 res=1
Sep 12 17:36:27.927030 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:36:27.927039 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:36:27.927048 kernel: cpuidle: using governor menu
Sep 12 17:36:27.927057 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:36:27.927066 kernel: dca service started, version 1.12.1
Sep 12 17:36:27.927075 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:36:27.927084 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:36:27.927093 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:36:27.927104 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:36:27.927113 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:36:27.927122 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:36:27.927131 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:36:27.927139 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:36:27.927148 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:36:27.927157 kernel: ACPI: Interpreter enabled
Sep 12 17:36:27.927166 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:36:27.927175 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:36:27.927186 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:36:27.927195 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:36:27.927204 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:36:27.927213 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:36:27.927406 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:36:27.928581 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:36:27.928697 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:36:27.928715 kernel: acpiphp: Slot [3] registered
Sep 12 17:36:27.928725 kernel: acpiphp: Slot [4] registered
Sep 12 17:36:27.928735 kernel: acpiphp: Slot [5] registered
Sep 12 17:36:27.928744 kernel: acpiphp: Slot [6] registered
Sep 12 17:36:27.928752 kernel: acpiphp: Slot [7] registered
Sep 12 17:36:27.928762 kernel: acpiphp: Slot [8] registered
Sep 12 17:36:27.928770 kernel: acpiphp: Slot [9] registered
Sep 12 17:36:27.928779 kernel: acpiphp: Slot [10] registered
Sep 12 17:36:27.928788 kernel: acpiphp: Slot [11] registered
Sep 12 17:36:27.928797 kernel: acpiphp: Slot [12] registered
Sep 12 17:36:27.928809 kernel: acpiphp: Slot [13] registered
Sep 12 17:36:27.928818 kernel: acpiphp: Slot [14] registered
Sep 12 17:36:27.928827 kernel: acpiphp: Slot [15] registered
Sep 12 17:36:27.928836 kernel: acpiphp: Slot [16] registered
Sep 12 17:36:27.928845 kernel: acpiphp: Slot [17] registered
Sep 12 17:36:27.928854 kernel: acpiphp: Slot [18] registered
Sep 12 17:36:27.928863 kernel: acpiphp: Slot [19] registered
Sep 12 17:36:27.928872 kernel: acpiphp: Slot [20] registered
Sep 12 17:36:27.928880 kernel: acpiphp: Slot [21] registered
Sep 12 17:36:27.928894 kernel: acpiphp: Slot [22] registered
Sep 12 17:36:27.928909 kernel: acpiphp: Slot [23] registered
Sep 12 17:36:27.928922 kernel: acpiphp: Slot [24] registered
Sep 12 17:36:27.928934 kernel: acpiphp: Slot [25] registered
Sep 12 17:36:27.928948 kernel: acpiphp: Slot [26] registered
Sep 12 17:36:27.928962 kernel: acpiphp: Slot [27] registered
Sep 12 17:36:27.928975 kernel: acpiphp: Slot [28] registered
Sep 12 17:36:27.928988 kernel: acpiphp: Slot [29] registered
Sep 12 17:36:27.929003 kernel: acpiphp: Slot [30] registered
Sep 12 17:36:27.929016 kernel: acpiphp: Slot [31] registered
Sep 12 17:36:27.929035 kernel: PCI host bridge to bus 0000:00
Sep 12 17:36:27.929170 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:36:27.929305 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:36:27.929400 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:36:27.930229 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:36:27.930344 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 12 17:36:27.930467 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:36:27.930635 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:36:27.930745 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 17:36:27.930894 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 12 17:36:27.930994 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 12 17:36:27.931118 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 12 17:36:27.931242 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 12 17:36:27.931373 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 12 17:36:27.934560 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 12 17:36:27.934760 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 12 17:36:27.934867 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 12 17:36:27.934981 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 12 17:36:27.935083 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 12 17:36:27.935193 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 12 17:36:27.935303 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 12 17:36:27.935458 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 12 17:36:27.935664 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 12 17:36:27.935773 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 12 17:36:27.935871 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 12 17:36:27.935971 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:36:27.936096 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:36:27.936197 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 12 17:36:27.936307 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 12 17:36:27.936432 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 12 17:36:27.937647 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:36:27.937783 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 12 17:36:27.937885 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 12 17:36:27.937992 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 12 17:36:27.938103 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 12 17:36:27.938205 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 12 17:36:27.938303 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 12 17:36:27.938432 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 12 17:36:27.939143 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:36:27.939284 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 17:36:27.939407 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 12 17:36:27.939625 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 12 17:36:27.939747 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:36:27.939846 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 12 17:36:27.939944 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 12 17:36:27.940042 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 12 17:36:27.940165 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 12 17:36:27.940274 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 12 17:36:27.940373 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 12 17:36:27.940386 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:36:27.940396 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:36:27.940405 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:36:27.940414 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:36:27.940423 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:36:27.940436 kernel: iommu: Default domain type: Translated
Sep 12 17:36:27.940445 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:36:27.940454 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:36:27.940463 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:36:27.940483 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 17:36:27.940492 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 12 17:36:27.940596 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 12 17:36:27.940699 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 12 17:36:27.940799 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:36:27.941740 kernel: vgaarb: loaded
Sep 12 17:36:27.941750 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:36:27.941760 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:36:27.941769 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:36:27.941779 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:36:27.941789 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:36:27.941798 kernel: pnp: PnP ACPI init
Sep 12 17:36:27.941807 kernel: pnp: PnP ACPI: found 4 devices
Sep 12 17:36:27.941824 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:36:27.941833 kernel: NET: Registered PF_INET protocol family
Sep 12 17:36:27.941842 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:36:27.941852 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:36:27.941861 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:36:27.941870 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:36:27.941879 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:36:27.941888 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:36:27.941897 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:36:27.941910 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:36:27.941919 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:36:27.941928 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:36:27.942052 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:36:27.942141 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:36:27.942558 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:36:27.942649 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:36:27.942743 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 12 17:36:27.942858 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 12 17:36:27.942968 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:36:27.942982 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:36:27.943080 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 28538 usecs
Sep 12 17:36:27.943093 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:36:27.943103 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:36:27.943112 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 12 17:36:27.943121 kernel: Initialise system trusted keyrings
Sep 12 17:36:27.943131 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:36:27.943144 kernel: Key type asymmetric registered
Sep 12 17:36:27.943153 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:36:27.943162 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:36:27.943171 kernel: io scheduler mq-deadline registered
Sep 12 17:36:27.943180 kernel: io scheduler kyber registered
Sep 12 17:36:27.943189 kernel: io scheduler bfq registered
Sep 12 17:36:27.943198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:36:27.943207 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 12 17:36:27.943216 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 17:36:27.943229 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 17:36:27.943238 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:36:27.943247 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:36:27.943256 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:36:27.943265 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:36:27.943275 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:36:27.943405 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 12 17:36:27.943418 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:36:27.945466 kernel: rtc_cmos 00:03: registered as rtc0
Sep 12 17:36:27.945623 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T17:36:27 UTC (1757698587)
Sep 12 17:36:27.945738 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 12 17:36:27.945756 kernel: intel_pstate: CPU model not supported
Sep 12 17:36:27.945766 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:36:27.945776 kernel: Segment Routing with IPv6
Sep 12 17:36:27.945785 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:36:27.945794 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:36:27.945803 kernel: Key type dns_resolver registered
Sep 12 17:36:27.945820 kernel: IPI shorthand broadcast: enabled
Sep 12 17:36:27.945830 kernel: sched_clock: Marking stable (871003028, 84335501)->(1038024427, -82685898)
Sep 12 17:36:27.945839 kernel: registered taskstats version 1
Sep 12 17:36:27.945848 kernel: Loading compiled-in X.509 certificates
Sep 12 17:36:27.945857 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:36:27.945866 kernel: Key type .fscrypt registered
Sep 12 17:36:27.945875 kernel: Key type fscrypt-provisioning registered
Sep 12 17:36:27.945884 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:36:27.945896 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:36:27.945906 kernel: ima: No architecture policies found
Sep 12 17:36:27.945915 kernel: clk: Disabling unused clocks
Sep 12 17:36:27.945924 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:36:27.945933 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:36:27.945961 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:36:27.945973 kernel: Run /init as init process
Sep 12 17:36:27.945982 kernel: with arguments:
Sep 12 17:36:27.945992 kernel: /init
Sep 12 17:36:27.946004 kernel: with environment:
Sep 12 17:36:27.946013 kernel: HOME=/
Sep 12 17:36:27.946022 kernel: TERM=linux
Sep 12 17:36:27.946031 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:36:27.946044 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:36:27.946059 systemd[1]: Detected virtualization kvm.
Sep 12 17:36:27.946074 systemd[1]: Detected architecture x86-64.
Sep 12 17:36:27.946088 systemd[1]: Running in initrd.
Sep 12 17:36:27.946106 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:36:27.946115 systemd[1]: Hostname set to .
Sep 12 17:36:27.946125 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:36:27.946135 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:36:27.946145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:36:27.946157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:36:27.946168 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:36:27.946178 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:36:27.946191 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:36:27.946201 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:36:27.946212 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:36:27.946222 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:36:27.946232 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:36:27.946242 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:36:27.946252 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:36:27.946264 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:36:27.946274 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:36:27.946287 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:36:27.946297 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:36:27.946307 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:36:27.946319 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:36:27.946329 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:36:27.946339 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:36:27.946349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:36:27.946359 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:36:27.946369 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:36:27.946378 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:36:27.946394 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:36:27.946404 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:36:27.946417 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:36:27.946427 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:36:27.946437 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:36:27.946446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:27.946456 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:36:27.946555 systemd-journald[184]: Collecting audit messages is disabled.
Sep 12 17:36:27.946587 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:36:27.946597 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:36:27.946609 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:36:27.946622 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:36:27.946632 kernel: Bridge firewalling registered
Sep 12 17:36:27.946643 systemd-journald[184]: Journal started
Sep 12 17:36:27.946664 systemd-journald[184]: Runtime Journal (/run/log/journal/d2bd1599263c429fbf630a1518c3619a) is 4.9M, max 39.3M, 34.4M free.
Sep 12 17:36:27.905582 systemd-modules-load[185]: Inserted module 'overlay'
Sep 12 17:36:27.977684 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:36:27.944763 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 12 17:36:27.978265 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:36:27.978870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:27.982342 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:36:27.989691 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:27.992398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:36:27.993670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:36:27.999209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:36:28.012118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:36:28.018596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:36:28.023703 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:28.029686 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:36:28.030902 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:36:28.034623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:36:28.042845 dracut-cmdline[217]: dracut-dracut-053
Sep 12 17:36:28.049484 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:36:28.080805 systemd-resolved[221]: Positive Trust Anchors:
Sep 12 17:36:28.080817 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:36:28.080853 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:36:28.084298 systemd-resolved[221]: Defaulting to hostname 'linux'.
Sep 12 17:36:28.085867 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:36:28.086323 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:36:28.149500 kernel: SCSI subsystem initialized
Sep 12 17:36:28.158528 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:36:28.169515 kernel: iscsi: registered transport (tcp)
Sep 12 17:36:28.192515 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:36:28.192599 kernel: QLogic iSCSI HBA Driver
Sep 12 17:36:28.249730 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:36:28.254728 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:36:28.281575 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:36:28.281652 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:36:28.282761 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:36:28.325535 kernel: raid6: avx2x4 gen() 17304 MB/s
Sep 12 17:36:28.341512 kernel: raid6: avx2x2 gen() 17355 MB/s
Sep 12 17:36:28.358571 kernel: raid6: avx2x1 gen() 12946 MB/s
Sep 12 17:36:28.358649 kernel: raid6: using algorithm avx2x2 gen() 17355 MB/s
Sep 12 17:36:28.376588 kernel: raid6: .... xor() 20536 MB/s, rmw enabled
Sep 12 17:36:28.376667 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:36:28.397509 kernel: xor: automatically using best checksumming function avx
Sep 12 17:36:28.557626 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:36:28.569452 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:36:28.576695 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:36:28.595298 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Sep 12 17:36:28.600705 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:36:28.608789 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:36:28.624513 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Sep 12 17:36:28.661432 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:36:28.666739 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:36:28.728889 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:36:28.734823 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:36:28.761556 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:36:28.762871 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:36:28.764328 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:36:28.764689 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:36:28.771708 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:36:28.797546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:36:28.824495 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 12 17:36:28.828548 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:36:28.832499 kernel: scsi host0: Virtio SCSI HBA
Sep 12 17:36:28.846492 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 12 17:36:28.847797 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:36:28.847917 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:28.850559 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:28.850960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:36:28.851114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:28.852758 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:28.875774 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:36:28.875838 kernel: GPT:9289727 != 125829119
Sep 12 17:36:28.875852 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:36:28.875864 kernel: GPT:9289727 != 125829119
Sep 12 17:36:28.875875 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:36:28.875887 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:28.878646 kernel: ACPI: bus type USB registered
Sep 12 17:36:28.878703 kernel: usbcore: registered new interface driver usbfs
Sep 12 17:36:28.879462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:36:28.892714 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 12 17:36:28.892960 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Sep 12 17:36:28.894941 kernel: usbcore: registered new interface driver hub
Sep 12 17:36:28.895005 kernel: usbcore: registered new device driver usb
Sep 12 17:36:28.897492 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:36:28.902497 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:36:28.902552 kernel: libata version 3.00 loaded.
Sep 12 17:36:28.916503 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 12 17:36:28.925498 kernel: scsi host1: ata_piix
Sep 12 17:36:28.926491 kernel: scsi host2: ata_piix
Sep 12 17:36:28.926741 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 12 17:36:28.926764 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 12 17:36:28.951511 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (458)
Sep 12 17:36:28.956496 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Sep 12 17:36:28.970130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:36:28.977128 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:36:28.983401 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:36:28.984311 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:36:28.989809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:36:28.994465 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:36:29.000723 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:36:29.003880 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:36:29.010026 disk-uuid[532]: Primary Header is updated.
Sep 12 17:36:29.010026 disk-uuid[532]: Secondary Entries is updated.
Sep 12 17:36:29.010026 disk-uuid[532]: Secondary Header is updated.
Sep 12 17:36:29.024624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:29.028980 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:36:29.035528 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:29.138508 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 12 17:36:29.143503 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 12 17:36:29.152942 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 12 17:36:29.153175 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 12 17:36:29.161497 kernel: hub 1-0:1.0: USB hub found
Sep 12 17:36:29.164497 kernel: hub 1-0:1.0: 2 ports detected
Sep 12 17:36:30.034517 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:36:30.034973 disk-uuid[533]: The operation has completed successfully.
Sep 12 17:36:30.082839 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:36:30.082975 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:36:30.092748 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:36:30.099284 sh[561]: Success
Sep 12 17:36:30.117529 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:36:30.182600 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:36:30.186645 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:36:30.195094 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:36:30.207598 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:36:30.207702 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:30.209862 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:36:30.209983 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:36:30.210651 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:36:30.222032 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:36:30.223765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:36:30.229906 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:36:30.238858 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:36:30.257824 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:30.257935 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:30.257950 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:36:30.263515 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:36:30.277780 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:36:30.278831 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:30.286717 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:36:30.291936 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:36:30.403118 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:36:30.412725 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:36:30.445838 systemd-networkd[744]: lo: Link UP
Sep 12 17:36:30.446459 systemd-networkd[744]: lo: Gained carrier
Sep 12 17:36:30.451884 systemd-networkd[744]: Enumeration completed
Sep 12 17:36:30.452317 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 17:36:30.452321 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 12 17:36:30.455742 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:36:30.458213 systemd[1]: Reached target network.target - Network.
Sep 12 17:36:30.460636 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:36:30.460641 systemd-networkd[744]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:36:30.463518 systemd-networkd[744]: eth0: Link UP
Sep 12 17:36:30.463605 systemd-networkd[744]: eth0: Gained carrier
Sep 12 17:36:30.463621 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 17:36:30.468853 systemd-networkd[744]: eth1: Link UP
Sep 12 17:36:30.468865 systemd-networkd[744]: eth1: Gained carrier
Sep 12 17:36:30.468880 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:36:30.484580 systemd-networkd[744]: eth0: DHCPv4 address 159.223.198.129/20, gateway 159.223.192.1 acquired from 169.254.169.253
Sep 12 17:36:30.489145 ignition[666]: Ignition 2.19.0
Sep 12 17:36:30.489156 ignition[666]: Stage: fetch-offline
Sep 12 17:36:30.490570 systemd-networkd[744]: eth1: DHCPv4 address 10.124.0.25/20 acquired from 169.254.169.253
Sep 12 17:36:30.489198 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:30.492520 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:36:30.489219 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:30.489322 ignition[666]: parsed url from cmdline: ""
Sep 12 17:36:30.489326 ignition[666]: no config URL provided
Sep 12 17:36:30.489332 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:36:30.489340 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:36:30.489349 ignition[666]: failed to fetch config: resource requires networking
Sep 12 17:36:30.489610 ignition[666]: Ignition finished successfully
Sep 12 17:36:30.506748 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:36:30.525920 ignition[754]: Ignition 2.19.0
Sep 12 17:36:30.525933 ignition[754]: Stage: fetch
Sep 12 17:36:30.526126 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:30.526137 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:30.526259 ignition[754]: parsed url from cmdline: ""
Sep 12 17:36:30.526263 ignition[754]: no config URL provided
Sep 12 17:36:30.526268 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:36:30.526276 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:36:30.526301 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 12 17:36:30.542150 ignition[754]: GET result: OK
Sep 12 17:36:30.542303 ignition[754]: parsing config with SHA512: c0885bd9e83f30c755ecf226bcac90f65f9c6acfaa8b480b26d8dfc1db61f0cc6164b0527aaa2ebb9e04da75e4f2b503c64eae69475e2c52e016bdf8af809f48
Sep 12 17:36:30.547339 unknown[754]: fetched base config from "system"
Sep 12 17:36:30.547352 unknown[754]: fetched base config from "system"
Sep 12 17:36:30.548400 ignition[754]: fetch: fetch complete
Sep 12 17:36:30.547360 unknown[754]: fetched user config from "digitalocean"
Sep 12 17:36:30.548415 ignition[754]: fetch: fetch passed
Sep 12 17:36:30.548542 ignition[754]: Ignition finished successfully
Sep 12 17:36:30.551766 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:36:30.558743 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:36:30.583227 ignition[760]: Ignition 2.19.0
Sep 12 17:36:30.583242 ignition[760]: Stage: kargs
Sep 12 17:36:30.583516 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:30.583665 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:30.586758 ignition[760]: kargs: kargs passed
Sep 12 17:36:30.586836 ignition[760]: Ignition finished successfully
Sep 12 17:36:30.588338 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:36:30.596766 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:36:30.623355 ignition[766]: Ignition 2.19.0
Sep 12 17:36:30.624134 ignition[766]: Stage: disks
Sep 12 17:36:30.624428 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:36:30.624442 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:36:30.629729 ignition[766]: disks: disks passed
Sep 12 17:36:30.629819 ignition[766]: Ignition finished successfully
Sep 12 17:36:30.632055 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:36:30.633384 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:36:30.633860 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:36:30.634898 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:36:30.636038 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:36:30.636950 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:36:30.642700 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:36:30.672870 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:36:30.675634 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:36:30.681643 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:36:30.778504 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:36:30.780147 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:36:30.780992 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:36:30.786625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:36:30.790821 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:36:30.793720 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep 12 17:36:30.796826 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:36:30.798559 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:36:30.805308 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (784)
Sep 12 17:36:30.805339 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:36:30.805361 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:36:30.805373 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:36:30.799318 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:36:30.809500 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:36:30.813707 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:36:30.814295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:36:30.818574 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:36:30.889689 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:36:30.891838 coreos-metadata[786]: Sep 12 17:36:30.891 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:36:30.900014 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:36:30.900931 coreos-metadata[787]: Sep 12 17:36:30.900 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:36:30.905054 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:36:30.905937 coreos-metadata[786]: Sep 12 17:36:30.905 INFO Fetch successful Sep 12 17:36:30.910855 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:36:30.913518 coreos-metadata[787]: Sep 12 17:36:30.913 INFO Fetch successful Sep 12 17:36:30.914436 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 12 17:36:30.914574 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Sep 12 17:36:30.921716 coreos-metadata[787]: Sep 12 17:36:30.920 INFO wrote hostname ci-4081.3.6-9-b554e4f7b0 to /sysroot/etc/hostname Sep 12 17:36:30.923029 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:36:31.024690 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:36:31.029670 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:36:31.032689 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:36:31.043502 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:36:31.065109 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:36:31.072420 ignition[906]: INFO : Ignition 2.19.0 Sep 12 17:36:31.073212 ignition[906]: INFO : Stage: mount Sep 12 17:36:31.073585 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:36:31.073976 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:36:31.074896 ignition[906]: INFO : mount: mount passed Sep 12 17:36:31.075298 ignition[906]: INFO : Ignition finished successfully Sep 12 17:36:31.076193 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:36:31.081719 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:36:31.207945 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:36:31.214822 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:36:31.223597 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (917) Sep 12 17:36:31.225974 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:36:31.226038 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:36:31.226052 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:36:31.229497 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:36:31.232617 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
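flatcar-metadata-hostname.service fetches the droplet metadata document and writes the assigned hostname into the target root, as logged above. A hedged sketch using the endpoint and destination from the log; jq and the .hostname field are assumptions about the metadata layout:

    curl -s http://169.254.169.254/metadata/v1.json \
      | jq -r '.hostname' > /sysroot/etc/hostname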
Sep 12 17:36:31.263194 ignition[933]: INFO : Ignition 2.19.0 Sep 12 17:36:31.264006 ignition[933]: INFO : Stage: files Sep 12 17:36:31.264564 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:36:31.264993 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:36:31.266658 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:36:31.267507 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:36:31.267507 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:36:31.269988 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:36:31.270713 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:36:31.271605 unknown[933]: wrote ssh authorized keys file for user: core Sep 12 17:36:31.272376 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:36:31.273612 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:36:31.274217 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 17:36:31.512682 systemd-networkd[744]: eth1: Gained IPv6LL Sep 12 17:36:31.742515 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:36:32.088627 systemd-networkd[744]: eth0: Gained IPv6LL Sep 12 17:36:32.569532 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:36:32.569532 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:36:32.569532 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:36:32.569532 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:36:32.569532 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:36:32.569532 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:36:32.569532 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:36:32.569532 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:36:32.575053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:36:32.575053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:36:32.575053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:36:32.575053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:36:32.575053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:36:32.575053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:36:32.575053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 17:36:32.860905 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:36:33.151087 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:36:33.151087 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:36:33.152464 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:36:33.152464 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:36:33.152464 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:36:33.152464 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:36:33.152464 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:36:33.156307 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:36:33.156307 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:36:33.156307 ignition[933]: INFO : files: files passed Sep 12 17:36:33.156307 ignition[933]: INFO : Ignition finished successfully Sep 12 17:36:33.154113 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:36:33.168646 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:36:33.171269 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:36:33.172135 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:36:33.172249 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:36:33.195557 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:36:33.195557 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:36:33.198015 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:36:33.199701 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:36:33.200712 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:36:33.211857 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:36:33.243848 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Sep 12 17:36:33.243955 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:36:33.245565 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:36:33.246128 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:36:33.247135 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:36:33.252704 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:36:33.269629 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:36:33.277693 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:36:33.288964 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:36:33.289985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:36:33.290857 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:36:33.291233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:36:33.291365 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:36:33.292971 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:36:33.293424 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:36:33.294007 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:36:33.294603 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:36:33.295240 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:36:33.295978 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:36:33.296636 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:36:33.297311 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:36:33.297989 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:36:33.298615 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:36:33.299140 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:36:33.299269 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:36:33.300180 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:36:33.300985 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:36:33.301629 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:36:33.301786 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:36:33.302397 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:36:33.302534 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:36:33.303347 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:36:33.303574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:36:33.304266 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:36:33.304407 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:36:33.304851 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:36:33.304952 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Sep 12 17:36:33.311739 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:36:33.313667 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:36:33.315633 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:36:33.316262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:36:33.317215 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:36:33.317733 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:36:33.323804 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:36:33.324359 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:36:33.328689 ignition[987]: INFO : Ignition 2.19.0 Sep 12 17:36:33.328689 ignition[987]: INFO : Stage: umount Sep 12 17:36:33.339609 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:36:33.339609 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:36:33.339609 ignition[987]: INFO : umount: umount passed Sep 12 17:36:33.339609 ignition[987]: INFO : Ignition finished successfully Sep 12 17:36:33.337514 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:36:33.337644 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:36:33.338544 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:36:33.338590 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:36:33.338935 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:36:33.338971 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:36:33.339287 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:36:33.339322 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:36:33.339835 systemd[1]: Stopped target network.target - Network. Sep 12 17:36:33.340108 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:36:33.340152 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:36:33.341293 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:36:33.342841 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:36:33.348556 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:36:33.350716 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:36:33.351021 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:36:33.351330 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:36:33.352628 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:36:33.354504 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:36:33.354590 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:36:33.355076 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:36:33.355147 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:36:33.356640 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:36:33.356705 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:36:33.357584 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:36:33.358239 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Sep 12 17:36:33.360159 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:36:33.360733 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:36:33.360823 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:36:33.361520 systemd-networkd[744]: eth1: DHCPv6 lease lost Sep 12 17:36:33.362169 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:36:33.362258 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:36:33.365545 systemd-networkd[744]: eth0: DHCPv6 lease lost Sep 12 17:36:33.367260 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:36:33.367382 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:36:33.368741 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:36:33.368850 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:36:33.372203 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:36:33.372276 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:36:33.381655 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:36:33.382636 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:36:33.382718 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:36:33.383647 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:36:33.383712 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:36:33.385591 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:36:33.385655 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:36:33.386044 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:36:33.386100 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:36:33.386796 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:36:33.398134 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:36:33.398299 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:36:33.401194 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:36:33.401365 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:36:33.402316 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:36:33.402361 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:36:33.402868 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:36:33.402903 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:36:33.403571 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:36:33.403621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:36:33.404582 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:36:33.404642 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:36:33.405680 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:36:33.405725 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:36:33.411709 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Sep 12 17:36:33.412077 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:36:33.412135 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:36:33.412521 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:36:33.412561 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:36:33.412912 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:36:33.412949 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:36:33.413289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:36:33.413324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:36:33.418801 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:36:33.419414 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:36:33.420507 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:36:33.431750 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:36:33.440968 systemd[1]: Switching root. Sep 12 17:36:33.493927 systemd-journald[184]: Journal stopped Sep 12 17:36:34.704420 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 12 17:36:34.704527 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:36:34.704553 kernel: SELinux: policy capability open_perms=1 Sep 12 17:36:34.704579 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:36:34.704605 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:36:34.704626 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:36:34.704656 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:36:34.704682 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:36:34.704703 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:36:34.704722 kernel: audit: type=1403 audit(1757698593.647:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:36:34.704743 systemd[1]: Successfully loaded SELinux policy in 46.491ms. Sep 12 17:36:34.704772 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.380ms. Sep 12 17:36:34.704799 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:36:34.704822 systemd[1]: Detected virtualization kvm. Sep 12 17:36:34.704844 systemd[1]: Detected architecture x86-64. Sep 12 17:36:34.704871 systemd[1]: Detected first boot. Sep 12 17:36:34.704897 systemd[1]: Hostname set to <ci-4081.3.6-9-b554e4f7b0>. Sep 12 17:36:34.704918 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:36:34.704938 zram_generator::config[1030]: No configuration found. Sep 12 17:36:34.704961 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:36:34.704979 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:36:34.705000 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:36:34.705020 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
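After the pivot, PID 1 loads the SELinux policy, detects first boot, and seeds the machine ID from the VM UUID. On a running system the corresponding state can be inspected with standard tools (a sketch; getenforce requires the policycoreutils tools to be present):

    cat /etc/machine-id    # seeded from the VM UUID on first boot
    systemctl --version    # systemd 255 with the feature flags logged above
    getenforce             # current SELinux mode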
Sep 12 17:36:34.705041 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:36:34.705063 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:36:34.705083 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:36:34.705105 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:36:34.705133 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:36:34.705156 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:36:34.705176 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:36:34.705194 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:36:34.705215 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:36:34.705238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:36:34.705260 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:36:34.705281 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:36:34.705301 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:36:34.705322 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:36:34.705345 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:36:34.705365 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:36:34.705386 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:36:34.705408 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:36:34.705428 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:36:34.705450 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:36:34.706705 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:36:34.706743 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:36:34.706767 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:36:34.706790 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:36:34.706813 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:36:34.706835 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:36:34.706858 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:36:34.706880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:36:34.706902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:36:34.706924 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:36:34.706953 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:36:34.706975 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:36:34.706998 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:36:34.707021 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:36:34.707043 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:36:34.707065 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:36:34.707088 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:36:34.707112 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:36:34.707139 systemd[1]: Reached target machines.target - Containers. Sep 12 17:36:34.707162 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:36:34.707182 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:36:34.707205 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:36:34.707227 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:36:34.707256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:36:34.707278 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:36:34.707300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:36:34.707325 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:36:34.707347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:36:34.707371 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:36:34.707394 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:36:34.707416 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:36:34.707439 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:36:34.707491 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:36:34.707515 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:36:34.707536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:36:34.707564 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:36:34.707587 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:36:34.707609 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:36:34.707631 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:36:34.707654 systemd[1]: Stopped verity-setup.service. Sep 12 17:36:34.707678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:34.707700 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:36:34.707723 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:36:34.707745 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:36:34.707773 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:36:34.707795 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:36:34.707817 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
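The modprobe@.service entries are instances of a systemd template unit that loads one kernel module per instance name; starting modprobe@dm_mod.service is roughly equivalent to calling modprobe directly:

    systemctl start modprobe@dm_mod.service   # what the log shows
    modprobe dm_mod                           # the underlying operation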
Sep 12 17:36:34.707839 kernel: ACPI: bus type drm_connector registered Sep 12 17:36:34.707865 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:36:34.707892 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:36:34.707915 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:36:34.707937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:36:34.707960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:36:34.707982 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:36:34.708008 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:36:34.708032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:36:34.708055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:36:34.708079 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:36:34.708100 kernel: fuse: init (API version 7.39) Sep 12 17:36:34.708118 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:36:34.708138 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:36:34.708160 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:36:34.708185 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:36:34.708208 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:36:34.708229 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:36:34.708251 kernel: loop: module loaded Sep 12 17:36:34.708273 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:36:34.708298 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:36:34.708321 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:36:34.708344 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:36:34.708409 systemd-journald[1100]: Collecting audit messages is disabled. Sep 12 17:36:34.708456 systemd-journald[1100]: Journal started Sep 12 17:36:34.709633 systemd-journald[1100]: Runtime Journal (/run/log/journal/d2bd1599263c429fbf630a1518c3619a) is 4.9M, max 39.3M, 34.4M free. Sep 12 17:36:34.709718 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:36:34.378635 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:36:34.398925 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:36:34.399363 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:36:34.713587 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:36:34.716517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:36:34.726576 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:36:34.726643 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:36:34.730651 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 12 17:36:34.739496 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:36:34.747581 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:36:34.747656 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:36:34.753505 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:36:34.755274 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:36:34.756383 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:36:34.756657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:36:34.757819 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:36:34.758385 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:36:34.758980 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:36:34.765262 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:36:34.780950 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:36:34.788711 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:36:34.793707 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:36:34.794157 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:36:34.837738 kernel: loop0: detected capacity change from 0 to 142488 Sep 12 17:36:34.839929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:36:34.860081 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:36:34.861887 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:36:34.867669 systemd-journald[1100]: Time spent on flushing to /var/log/journal/d2bd1599263c429fbf630a1518c3619a is 37.443ms for 995 entries. Sep 12 17:36:34.867669 systemd-journald[1100]: System Journal (/var/log/journal/d2bd1599263c429fbf630a1518c3619a) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:36:34.917936 systemd-journald[1100]: Received client request to flush runtime journal. Sep 12 17:36:34.918005 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:36:34.918059 kernel: loop1: detected capacity change from 0 to 229808 Sep 12 17:36:34.919730 systemd-tmpfiles[1133]: ACLs are not supported, ignoring. Sep 12 17:36:34.919750 systemd-tmpfiles[1133]: ACLs are not supported, ignoring. Sep 12 17:36:34.922981 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:36:34.949935 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:36:34.958866 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:36:34.971535 kernel: loop2: detected capacity change from 0 to 8 Sep 12 17:36:34.984910 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:36:34.999878 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:36:35.007509 kernel: loop3: detected capacity change from 0 to 140768 Sep 12 17:36:35.051509 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
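systemd-journal-flush.service migrates the runtime journal from /run/log/journal to the persistent /var/log/journal location, which is why the log reports both a Runtime Journal and a System Journal with separate size caps. The equivalent manual operations:

    journalctl --flush        # move the runtime journal to persistent storage
    journalctl --disk-usage   # show the sizes reported in the log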
Sep 12 17:36:35.065989 kernel: loop4: detected capacity change from 0 to 142488 Sep 12 17:36:35.066255 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:36:35.068179 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 17:36:35.087513 kernel: loop5: detected capacity change from 0 to 229808 Sep 12 17:36:35.102610 kernel: loop6: detected capacity change from 0 to 8 Sep 12 17:36:35.107514 kernel: loop7: detected capacity change from 0 to 140768 Sep 12 17:36:35.117154 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Sep 12 17:36:35.117176 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Sep 12 17:36:35.120609 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Sep 12 17:36:35.121119 (sd-merge)[1176]: Merged extensions into '/usr'. Sep 12 17:36:35.125922 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:36:35.133285 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:36:35.133301 systemd[1]: Reloading... Sep 12 17:36:35.277498 zram_generator::config[1201]: No configuration found. Sep 12 17:36:35.363818 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:36:35.477941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:36:35.528298 systemd[1]: Reloading finished in 394 ms. Sep 12 17:36:35.553363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:36:35.554179 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:36:35.567832 systemd[1]: Starting ensure-sysext.service... Sep 12 17:36:35.572353 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:36:35.579313 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:36:35.579332 systemd[1]: Reloading... Sep 12 17:36:35.655762 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:36:35.656177 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:36:35.660261 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:36:35.661718 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 12 17:36:35.662733 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 12 17:36:35.669623 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:36:35.670641 systemd-tmpfiles[1248]: Skipping /boot Sep 12 17:36:35.704572 zram_generator::config[1274]: No configuration found. Sep 12 17:36:35.702703 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. 
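The (sd-merge) lines show systemd-sysext overlaying the four extension images named above onto /usr, after which systemd reloads its unit set. The same merge state can be inspected and re-applied with systemd-sysext's own subcommands:

    systemd-sysext status    # list merged extension images per hierarchy
    systemd-sysext refresh   # unmerge and re-merge after images change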
Sep 12 17:36:35.702711 systemd-tmpfiles[1248]: Skipping /boot Sep 12 17:36:35.882001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:36:35.933267 systemd[1]: Reloading finished in 353 ms. Sep 12 17:36:35.951154 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:36:35.952366 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:36:35.969732 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:36:35.979857 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:36:35.983684 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:36:35.988509 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:36:35.997039 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:36:36.000128 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:36:36.009444 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:36.009653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:36:36.016841 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:36:36.020910 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:36:36.025774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:36:36.026306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:36:36.026423 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:36.029197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:36.029408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:36:36.030172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:36:36.040637 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:36:36.041022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:36.043846 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:36.044071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:36:36.046005 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:36:36.047032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 17:36:36.047183 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:36.053597 systemd[1]: Finished ensure-sysext.service. Sep 12 17:36:36.065868 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:36:36.102830 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:36:36.103010 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:36:36.105866 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:36:36.115983 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:36:36.116169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:36:36.127245 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:36:36.128074 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:36:36.128244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:36:36.130777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:36:36.131622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:36:36.134927 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:36:36.135871 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:36:36.139049 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:36:36.139161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:36:36.145845 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:36:36.146984 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:36:36.167105 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Sep 12 17:36:36.177540 augenrules[1361]: No rules Sep 12 17:36:36.180661 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:36:36.181427 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:36:36.215701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:36:36.227845 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:36:36.278387 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:36:36.279811 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:36:36.333666 systemd-resolved[1323]: Positive Trust Anchors: Sep 12 17:36:36.333695 systemd-resolved[1323]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:36:36.333734 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:36:36.344014 systemd-resolved[1323]: Using system hostname 'ci-4081.3.6-9-b554e4f7b0'. Sep 12 17:36:36.360759 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Sep 12 17:36:36.361398 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:36.361507 systemd-networkd[1374]: lo: Link UP Sep 12 17:36:36.361513 systemd-networkd[1374]: lo: Gained carrier Sep 12 17:36:36.361663 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:36:36.362362 systemd-networkd[1374]: Enumeration completed Sep 12 17:36:36.370857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:36:36.379762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:36:36.385295 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:36:36.387811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:36:36.387889 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:36:36.387917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:36:36.388171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:36:36.389406 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:36:36.390940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:36:36.394622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:36:36.395532 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:36:36.400058 systemd[1]: Reached target network.target - Network. Sep 12 17:36:36.401635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:36:36.414823 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:36:36.419961 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:36:36.420522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:36:36.420710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:36:36.421978 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:36:36.422427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
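systemd-resolved logs its built-in DNSSEC trust anchor (the root ". IN DS 20326 8 2 ..." record) and the negative trust anchors for private zones, then adopts the system hostname. Resolver state on a live system can be checked with resolvectl (a sketch; the query target is an arbitrary example):

    resolvectl status               # per-link DNS servers and DNSSEC mode
    resolvectl query flatcar.org    # exercise the configured resolver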
Sep 12 17:36:36.427605 kernel: ISO 9660 Extensions: RRIP_1991A Sep 12 17:36:36.434593 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 12 17:36:36.440350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:36:36.444673 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1377) Sep 12 17:36:36.503544 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:36:36.511541 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 12 17:36:36.519977 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:36:36.516663 systemd-networkd[1374]: eth1: Configuring with /run/systemd/network/10-8a:ff:dc:a7:f1:d1.network. Sep 12 17:36:36.517305 systemd-networkd[1374]: eth1: Link UP Sep 12 17:36:36.517309 systemd-networkd[1374]: eth1: Gained carrier Sep 12 17:36:36.522185 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:36.522292 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:36.549571 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:36:36.563078 systemd-networkd[1374]: eth0: Configuring with /run/systemd/network/10-a6:27:7c:a1:c2:9f.network. Sep 12 17:36:36.564277 systemd-networkd[1374]: eth0: Link UP Sep 12 17:36:36.564287 systemd-networkd[1374]: eth0: Gained carrier Sep 12 17:36:36.566216 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:36.569189 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:36.569901 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:36.617516 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:36:36.618323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:36:36.632962 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:36:36.646059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:36:36.674540 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:36:36.697533 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 12 17:36:36.702774 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 12 17:36:36.714445 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:36:36.714638 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 17:36:36.714664 kernel: [drm] features: -context_init Sep 12 17:36:36.714698 kernel: [drm] number of scanouts: 1 Sep 12 17:36:36.714721 kernel: [drm] number of cap sets: 0 Sep 12 17:36:36.730556 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Sep 12 17:36:36.740314 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 12 17:36:36.740386 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:36:36.747500 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 17:36:36.760913 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:36:36.761395 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
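Each NIC above is matched to a generated .network file keyed by its MAC address, e.g. /run/systemd/network/10-a6:27:7c:a1:c2:9f.network for eth0. The log does not show the file contents; a minimal DHCP unit of the kind plausibly generated here would look like:

    cat <<'EOF' > /run/systemd/network/10-a6:27:7c:a1:c2:9f.network
    [Match]
    MACAddress=a6:27:7c:a1:c2:9f

    [Network]
    DHCP=yes
    EOF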
Sep 12 17:36:36.775747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:36:36.783730 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:36:36.783931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:36:36.801834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:36:36.834588 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:36:36.855098 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:36:36.862857 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:36:36.880463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:36:36.889821 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:36:36.920729 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:36:36.922130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:36:36.922238 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:36:36.922405 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:36:36.922538 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:36:36.922781 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:36:36.922973 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:36:36.923048 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:36:36.923108 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:36:36.923131 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:36:36.923178 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:36:36.925543 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:36:36.928768 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:36:36.936278 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:36:36.939697 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:36:36.941683 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:36:36.943130 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:36:36.943713 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:36:36.944185 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:36:36.944221 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:36:36.947662 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:36:36.951081 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:36:36.958725 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:36:36.969690 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:36:36.979809 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 12 17:36:36.986680 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:36:36.989894 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:36:36.997757 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:36:37.000529 jq[1441]: false Sep 12 17:36:37.006708 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:36:37.009547 coreos-metadata[1437]: Sep 12 17:36:37.008 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:36:37.011737 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:36:37.018841 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:36:37.021856 coreos-metadata[1437]: Sep 12 17:36:37.021 INFO Fetch successful Sep 12 17:36:37.031891 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:36:37.033560 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:36:37.034851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:36:37.039713 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:36:37.042666 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:36:37.046546 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:36:37.050251 dbus-daemon[1438]: [system] SELinux support is enabled Sep 12 17:36:37.052728 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:36:37.069006 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:36:37.069274 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:36:37.069622 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:36:37.069766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:36:37.087552 jq[1449]: true Sep 12 17:36:37.090218 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:36:37.090296 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:36:37.092392 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:36:37.092515 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 12 17:36:37.092538 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:36:37.106548 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:36:37.106809 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 12 17:36:37.138016 tar[1465]: linux-amd64/LICENSE Sep 12 17:36:37.138016 tar[1465]: linux-amd64/helm Sep 12 17:36:37.138330 extend-filesystems[1442]: Found loop4 Sep 12 17:36:37.138330 extend-filesystems[1442]: Found loop5 Sep 12 17:36:37.138330 extend-filesystems[1442]: Found loop6 Sep 12 17:36:37.138330 extend-filesystems[1442]: Found loop7 Sep 12 17:36:37.163817 extend-filesystems[1442]: Found vda Sep 12 17:36:37.163817 extend-filesystems[1442]: Found vda1 Sep 12 17:36:37.163817 extend-filesystems[1442]: Found vda2 Sep 12 17:36:37.163817 extend-filesystems[1442]: Found vda3 Sep 12 17:36:37.163817 extend-filesystems[1442]: Found usr Sep 12 17:36:37.163817 extend-filesystems[1442]: Found vda4 Sep 12 17:36:37.163817 extend-filesystems[1442]: Found vda6 Sep 12 17:36:37.163817 extend-filesystems[1442]: Found vda7 Sep 12 17:36:37.163817 extend-filesystems[1442]: Found vda9 Sep 12 17:36:37.163817 extend-filesystems[1442]: Checking size of /dev/vda9 Sep 12 17:36:37.204124 update_engine[1448]: I20250912 17:36:37.142856 1448 main.cc:92] Flatcar Update Engine starting Sep 12 17:36:37.204124 update_engine[1448]: I20250912 17:36:37.157337 1448 update_check_scheduler.cc:74] Next update check in 6m9s Sep 12 17:36:37.148377 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:36:37.153185 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:36:37.212069 jq[1468]: true Sep 12 17:36:37.157166 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:36:37.157829 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:36:37.174406 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:36:37.237233 extend-filesystems[1442]: Resized partition /dev/vda9 Sep 12 17:36:37.248036 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:36:37.255056 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 12 17:36:37.264029 systemd-logind[1447]: New seat seat0. Sep 12 17:36:37.271921 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:36:37.274911 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:36:37.277792 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:36:37.301508 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1372) Sep 12 17:36:37.368730 bash[1499]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:36:37.370380 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:36:37.382879 systemd[1]: Starting sshkeys.service... Sep 12 17:36:37.422423 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 12 17:36:37.428183 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:36:37.446916 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:36:37.468672 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:36:37.468672 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 12 17:36:37.468672 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
Sep 12 17:36:37.474577 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Sep 12 17:36:37.474577 extend-filesystems[1442]: Found vdb Sep 12 17:36:37.471678 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:36:37.471880 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:36:37.519065 coreos-metadata[1505]: Sep 12 17:36:37.519 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:36:37.532382 coreos-metadata[1505]: Sep 12 17:36:37.532 INFO Fetch successful Sep 12 17:36:37.552513 unknown[1505]: wrote ssh authorized keys file for user: core Sep 12 17:36:37.581357 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:36:37.584386 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:36:37.598897 update-ssh-keys[1515]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:36:37.598889 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:36:37.602545 systemd[1]: Finished sshkeys.service. Sep 12 17:36:37.653382 containerd[1476]: time="2025-09-12T17:36:37.653274677Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:36:37.668892 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:36:37.680810 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:36:37.697543 containerd[1476]: time="2025-09-12T17:36:37.697248743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:37.700840 containerd[1476]: time="2025-09-12T17:36:37.700786622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.701762920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.701800739Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.701968294Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.701987790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.702047142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.702061247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.702249714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.702266053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.702286825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.702299347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:37.702502 containerd[1476]: time="2025-09-12T17:36:37.702396590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:37.703090 containerd[1476]: time="2025-09-12T17:36:37.703063438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:36:37.703290 containerd[1476]: time="2025-09-12T17:36:37.703272175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:36:37.704208 containerd[1476]: time="2025-09-12T17:36:37.704180731Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:36:37.704387 containerd[1476]: time="2025-09-12T17:36:37.704370898Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:36:37.704569 containerd[1476]: time="2025-09-12T17:36:37.704553620Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:36:37.706456 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:36:37.707353 containerd[1476]: time="2025-09-12T17:36:37.707299860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:36:37.707502 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:36:37.707904 containerd[1476]: time="2025-09-12T17:36:37.707458363Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:36:37.707904 containerd[1476]: time="2025-09-12T17:36:37.707662968Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:36:37.707904 containerd[1476]: time="2025-09-12T17:36:37.707682754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:36:37.707904 containerd[1476]: time="2025-09-12T17:36:37.707699983Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:36:37.707904 containerd[1476]: time="2025-09-12T17:36:37.707844566Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:36:37.708351 containerd[1476]: time="2025-09-12T17:36:37.708318231Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 12 17:36:37.708535 containerd[1476]: time="2025-09-12T17:36:37.708519119Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708587588Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708604076Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708619871Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708634456Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708646854Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708661117Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708675179Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708688659Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708700762Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708713217Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708733571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708746738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708758465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709160 containerd[1476]: time="2025-09-12T17:36:37.708791763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709610 containerd[1476]: time="2025-09-12T17:36:37.708803427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709610 containerd[1476]: time="2025-09-12T17:36:37.708834509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709610 containerd[1476]: time="2025-09-12T17:36:37.708848638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709610 containerd[1476]: time="2025-09-12T17:36:37.708861698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 12 17:36:37.709610 containerd[1476]: time="2025-09-12T17:36:37.708873649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709610 containerd[1476]: time="2025-09-12T17:36:37.708887496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709610 containerd[1476]: time="2025-09-12T17:36:37.708899667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709949 containerd[1476]: time="2025-09-12T17:36:37.709773414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709949 containerd[1476]: time="2025-09-12T17:36:37.709807356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709949 containerd[1476]: time="2025-09-12T17:36:37.709846836Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:36:37.709949 containerd[1476]: time="2025-09-12T17:36:37.709875997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709949 containerd[1476]: time="2025-09-12T17:36:37.709888097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.709949 containerd[1476]: time="2025-09-12T17:36:37.709898347Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:36:37.710207 containerd[1476]: time="2025-09-12T17:36:37.710100585Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:36:37.710207 containerd[1476]: time="2025-09-12T17:36:37.710124533Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:36:37.710207 containerd[1476]: time="2025-09-12T17:36:37.710135436Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:36:37.710880 containerd[1476]: time="2025-09-12T17:36:37.710299305Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:36:37.710880 containerd[1476]: time="2025-09-12T17:36:37.710314524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:36:37.710880 containerd[1476]: time="2025-09-12T17:36:37.710327570Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:36:37.710880 containerd[1476]: time="2025-09-12T17:36:37.710342104Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:36:37.710880 containerd[1476]: time="2025-09-12T17:36:37.710352432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:36:37.711014 containerd[1476]: time="2025-09-12T17:36:37.710651817Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:36:37.711014 containerd[1476]: time="2025-09-12T17:36:37.710711485Z" level=info msg="Connect containerd service" Sep 12 17:36:37.711014 containerd[1476]: time="2025-09-12T17:36:37.710749951Z" level=info msg="using legacy CRI server" Sep 12 17:36:37.711014 containerd[1476]: time="2025-09-12T17:36:37.710757023Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:36:37.711293 containerd[1476]: time="2025-09-12T17:36:37.711272710Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:36:37.712041 containerd[1476]: time="2025-09-12T17:36:37.712016308Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:36:37.712239 
containerd[1476]: time="2025-09-12T17:36:37.712205757Z" level=info msg="Start subscribing containerd event" Sep 12 17:36:37.712325 containerd[1476]: time="2025-09-12T17:36:37.712313185Z" level=info msg="Start recovering state" Sep 12 17:36:37.712537 containerd[1476]: time="2025-09-12T17:36:37.712522777Z" level=info msg="Start event monitor" Sep 12 17:36:37.712713 containerd[1476]: time="2025-09-12T17:36:37.712589745Z" level=info msg="Start snapshots syncer" Sep 12 17:36:37.712713 containerd[1476]: time="2025-09-12T17:36:37.712602226Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:36:37.712713 containerd[1476]: time="2025-09-12T17:36:37.712610092Z" level=info msg="Start streaming server" Sep 12 17:36:37.713189 containerd[1476]: time="2025-09-12T17:36:37.713171690Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:36:37.713330 containerd[1476]: time="2025-09-12T17:36:37.713285244Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:36:37.717150 containerd[1476]: time="2025-09-12T17:36:37.716227071Z" level=info msg="containerd successfully booted in 0.063990s" Sep 12 17:36:37.719153 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:36:37.722200 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:36:37.750877 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:36:37.758961 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:36:37.772004 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:36:37.772970 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:36:38.030789 tar[1465]: linux-amd64/README.md Sep 12 17:36:38.049924 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:36:38.296710 systemd-networkd[1374]: eth1: Gained IPv6LL Sep 12 17:36:38.297173 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:38.299563 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:36:38.302323 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:36:38.311769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:36:38.315908 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:36:38.347077 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:36:38.362605 systemd-networkd[1374]: eth0: Gained IPv6LL Sep 12 17:36:38.363138 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:39.328279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:36:39.329814 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:36:39.333584 systemd[1]: Startup finished in 1.003s (kernel) + 5.949s (initrd) + 5.731s (userspace) = 12.684s. 
Sep 12 17:36:39.339208 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:36:39.966702 kubelet[1560]: E0912 17:36:39.966623 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:36:39.969598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:36:39.969838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:36:39.970335 systemd[1]: kubelet.service: Consumed 1.274s CPU time. Sep 12 17:36:41.994411 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:36:42.000821 systemd[1]: Started sshd@0-159.223.198.129:22-147.75.109.163:60684.service - OpenSSH per-connection server daemon (147.75.109.163:60684). Sep 12 17:36:42.064663 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 60684 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:36:42.066894 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:42.076923 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:36:42.081804 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:36:42.084751 systemd-logind[1447]: New session 1 of user core. Sep 12 17:36:42.105025 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:36:42.110832 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:36:42.121688 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:36:42.248182 systemd[1576]: Queued start job for default target default.target. Sep 12 17:36:42.258806 systemd[1576]: Created slice app.slice - User Application Slice. Sep 12 17:36:42.259019 systemd[1576]: Reached target paths.target - Paths. Sep 12 17:36:42.259150 systemd[1576]: Reached target timers.target - Timers. Sep 12 17:36:42.261075 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:36:42.276179 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:36:42.276386 systemd[1576]: Reached target sockets.target - Sockets. Sep 12 17:36:42.276412 systemd[1576]: Reached target basic.target - Basic System. Sep 12 17:36:42.276498 systemd[1576]: Reached target default.target - Main User Target. Sep 12 17:36:42.276545 systemd[1576]: Startup finished in 146ms. Sep 12 17:36:42.276740 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:36:42.287896 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:36:42.366621 systemd[1]: Started sshd@1-159.223.198.129:22-147.75.109.163:60700.service - OpenSSH per-connection server daemon (147.75.109.163:60700). Sep 12 17:36:42.419450 sshd[1587]: Accepted publickey for core from 147.75.109.163 port 60700 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:36:42.421641 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:42.428883 systemd-logind[1447]: New session 2 of user core. Sep 12 17:36:42.441855 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 17:36:42.508229 sshd[1587]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:42.522544 systemd[1]: sshd@1-159.223.198.129:22-147.75.109.163:60700.service: Deactivated successfully. Sep 12 17:36:42.525193 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:36:42.527907 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:36:42.532045 systemd[1]: Started sshd@2-159.223.198.129:22-147.75.109.163:60714.service - OpenSSH per-connection server daemon (147.75.109.163:60714). Sep 12 17:36:42.534688 systemd-logind[1447]: Removed session 2. Sep 12 17:36:42.589136 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 60714 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:36:42.591402 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:42.597501 systemd-logind[1447]: New session 3 of user core. Sep 12 17:36:42.604784 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:36:42.662657 sshd[1594]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:42.676447 systemd[1]: sshd@2-159.223.198.129:22-147.75.109.163:60714.service: Deactivated successfully. Sep 12 17:36:42.679446 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:36:42.682893 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:36:42.690179 systemd[1]: Started sshd@3-159.223.198.129:22-147.75.109.163:60722.service - OpenSSH per-connection server daemon (147.75.109.163:60722). Sep 12 17:36:42.692792 systemd-logind[1447]: Removed session 3. Sep 12 17:36:42.731605 sshd[1602]: Accepted publickey for core from 147.75.109.163 port 60722 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:36:42.734371 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:42.740817 systemd-logind[1447]: New session 4 of user core. Sep 12 17:36:42.748884 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:36:42.816604 sshd[1602]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:42.834404 systemd[1]: sshd@3-159.223.198.129:22-147.75.109.163:60722.service: Deactivated successfully. Sep 12 17:36:42.837703 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:36:42.842724 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:36:42.848244 systemd[1]: Started sshd@4-159.223.198.129:22-147.75.109.163:60734.service - OpenSSH per-connection server daemon (147.75.109.163:60734). Sep 12 17:36:42.850372 systemd-logind[1447]: Removed session 4. Sep 12 17:36:42.890087 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 60734 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:36:42.891847 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:42.896811 systemd-logind[1447]: New session 5 of user core. Sep 12 17:36:42.905830 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:36:42.976182 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:36:42.977054 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:36:42.992299 sudo[1612]: pam_unix(sudo:session): session closed for user root Sep 12 17:36:42.996054 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:43.004422 systemd[1]: sshd@4-159.223.198.129:22-147.75.109.163:60734.service: Deactivated successfully. Sep 12 17:36:43.006391 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:36:43.008352 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:36:43.009823 systemd[1]: Started sshd@5-159.223.198.129:22-147.75.109.163:60742.service - OpenSSH per-connection server daemon (147.75.109.163:60742). Sep 12 17:36:43.011813 systemd-logind[1447]: Removed session 5. Sep 12 17:36:43.058768 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 60742 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:36:43.060566 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:43.065825 systemd-logind[1447]: New session 6 of user core. Sep 12 17:36:43.070641 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:36:43.129617 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:36:43.129968 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:36:43.135001 sudo[1621]: pam_unix(sudo:session): session closed for user root Sep 12 17:36:43.142812 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:36:43.143107 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:36:43.158861 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:36:43.162615 auditctl[1624]: No rules Sep 12 17:36:43.162999 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:36:43.163188 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:36:43.171135 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:36:43.199903 augenrules[1642]: No rules Sep 12 17:36:43.201676 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:36:43.203163 sudo[1620]: pam_unix(sudo:session): session closed for user root Sep 12 17:36:43.207110 sshd[1617]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:43.219643 systemd[1]: sshd@5-159.223.198.129:22-147.75.109.163:60742.service: Deactivated successfully. Sep 12 17:36:43.222917 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:36:43.223728 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:36:43.229956 systemd[1]: Started sshd@6-159.223.198.129:22-147.75.109.163:60744.service - OpenSSH per-connection server daemon (147.75.109.163:60744). Sep 12 17:36:43.231667 systemd-logind[1447]: Removed session 6. Sep 12 17:36:43.280254 sshd[1650]: Accepted publickey for core from 147.75.109.163 port 60744 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:36:43.282062 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:43.288127 systemd-logind[1447]: New session 7 of user core. 
Sep 12 17:36:43.301765 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:36:43.360004 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:36:43.360310 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:36:43.814819 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:36:43.824003 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:36:44.237447 dockerd[1670]: time="2025-09-12T17:36:44.236607692Z" level=info msg="Starting up" Sep 12 17:36:44.355580 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1166259136-merged.mount: Deactivated successfully. Sep 12 17:36:44.376897 dockerd[1670]: time="2025-09-12T17:36:44.376658220Z" level=info msg="Loading containers: start." Sep 12 17:36:44.490510 kernel: Initializing XFRM netlink socket Sep 12 17:36:44.517889 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:44.523038 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:44.529756 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:44.575458 systemd-networkd[1374]: docker0: Link UP Sep 12 17:36:44.576265 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Sep 12 17:36:44.587115 dockerd[1670]: time="2025-09-12T17:36:44.587053766Z" level=info msg="Loading containers: done." Sep 12 17:36:44.604552 dockerd[1670]: time="2025-09-12T17:36:44.603734638Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:36:44.604552 dockerd[1670]: time="2025-09-12T17:36:44.603856632Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:36:44.604552 dockerd[1670]: time="2025-09-12T17:36:44.603958550Z" level=info msg="Daemon has completed initialization" Sep 12 17:36:44.635405 dockerd[1670]: time="2025-09-12T17:36:44.635267108Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:36:44.635705 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:36:45.570568 containerd[1476]: time="2025-09-12T17:36:45.570524131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:36:46.187902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463036219.mount: Deactivated successfully. 
Sep 12 17:36:47.273515 containerd[1476]: time="2025-09-12T17:36:47.272177584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:47.274705 containerd[1476]: time="2025-09-12T17:36:47.274460614Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 17:36:47.276495 containerd[1476]: time="2025-09-12T17:36:47.275197409Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:47.278490 containerd[1476]: time="2025-09-12T17:36:47.278337542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:47.282099 containerd[1476]: time="2025-09-12T17:36:47.282062042Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.711494552s" Sep 12 17:36:47.282188 containerd[1476]: time="2025-09-12T17:36:47.282103629Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 17:36:47.282796 containerd[1476]: time="2025-09-12T17:36:47.282678568Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:36:48.684439 containerd[1476]: time="2025-09-12T17:36:48.683193212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:48.685005 containerd[1476]: time="2025-09-12T17:36:48.684965814Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 17:36:48.685775 containerd[1476]: time="2025-09-12T17:36:48.685748874Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:48.689094 containerd[1476]: time="2025-09-12T17:36:48.689058670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:48.689964 containerd[1476]: time="2025-09-12T17:36:48.689921610Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.407213837s" Sep 12 17:36:48.689964 containerd[1476]: time="2025-09-12T17:36:48.689955922Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 17:36:48.691597 
containerd[1476]: time="2025-09-12T17:36:48.691572762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:36:49.913199 containerd[1476]: time="2025-09-12T17:36:49.913096392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.914607 containerd[1476]: time="2025-09-12T17:36:49.914554103Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 17:36:49.915294 containerd[1476]: time="2025-09-12T17:36:49.915239061Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.917640 containerd[1476]: time="2025-09-12T17:36:49.917590772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:49.918833 containerd[1476]: time="2025-09-12T17:36:49.918714773Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.22702718s" Sep 12 17:36:49.918833 containerd[1476]: time="2025-09-12T17:36:49.918747994Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 17:36:49.919733 containerd[1476]: time="2025-09-12T17:36:49.919701909Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:36:50.220502 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:36:50.235847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:36:50.390997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:36:50.410012 (kubelet)[1890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:36:50.467502 kubelet[1890]: E0912 17:36:50.467172 1890 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:36:50.472610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:36:50.472770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:36:51.036636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248538284.mount: Deactivated successfully. 
Sep 12 17:36:51.578442 containerd[1476]: time="2025-09-12T17:36:51.578123539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:51.579322 containerd[1476]: time="2025-09-12T17:36:51.579196972Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 17:36:51.580119 containerd[1476]: time="2025-09-12T17:36:51.580061663Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:51.582325 containerd[1476]: time="2025-09-12T17:36:51.582259185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:51.583658 containerd[1476]: time="2025-09-12T17:36:51.583620207Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.663815434s" Sep 12 17:36:51.583658 containerd[1476]: time="2025-09-12T17:36:51.583656215Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 17:36:51.584238 containerd[1476]: time="2025-09-12T17:36:51.584201106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:36:51.933511 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Sep 12 17:36:52.118026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397313675.mount: Deactivated successfully. 
Sep 12 17:36:53.057065 containerd[1476]: time="2025-09-12T17:36:53.056992877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.059415 containerd[1476]: time="2025-09-12T17:36:53.059343100Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 17:36:53.060683 containerd[1476]: time="2025-09-12T17:36:53.060605720Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.064400 containerd[1476]: time="2025-09-12T17:36:53.064344796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.066501 containerd[1476]: time="2025-09-12T17:36:53.066095128Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.481856221s" Sep 12 17:36:53.066501 containerd[1476]: time="2025-09-12T17:36:53.066139359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 17:36:53.068297 containerd[1476]: time="2025-09-12T17:36:53.068026701Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:36:53.601517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3512501187.mount: Deactivated successfully. 
Sep 12 17:36:53.605129 containerd[1476]: time="2025-09-12T17:36:53.605085798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.606448 containerd[1476]: time="2025-09-12T17:36:53.606419546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:36:53.607382 containerd[1476]: time="2025-09-12T17:36:53.607352747Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.609673 containerd[1476]: time="2025-09-12T17:36:53.609594170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:53.611001 containerd[1476]: time="2025-09-12T17:36:53.610448084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 542.389686ms" Sep 12 17:36:53.611001 containerd[1476]: time="2025-09-12T17:36:53.610504266Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:36:53.611156 containerd[1476]: time="2025-09-12T17:36:53.611010602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:36:54.198267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968552764.mount: Deactivated successfully. Sep 12 17:36:55.000720 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Sep 12 17:36:56.025360 containerd[1476]: time="2025-09-12T17:36:56.025304169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:56.027066 containerd[1476]: time="2025-09-12T17:36:56.027016945Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 17:36:56.027584 containerd[1476]: time="2025-09-12T17:36:56.027540893Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:56.031005 containerd[1476]: time="2025-09-12T17:36:56.030935667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:56.032862 containerd[1476]: time="2025-09-12T17:36:56.032534644Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.421495558s" Sep 12 17:36:56.032862 containerd[1476]: time="2025-09-12T17:36:56.032582693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 17:37:00.723168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:37:00.734711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:00.903748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:00.914113 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:37:00.976509 kubelet[2045]: E0912 17:37:00.976009 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:37:00.979903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:37:00.980263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:37:01.120804 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:01.128811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:01.166858 systemd[1]: Reloading requested from client PID 2061 ('systemctl') (unit session-7.scope)... Sep 12 17:37:01.167020 systemd[1]: Reloading... Sep 12 17:37:01.305497 zram_generator::config[2103]: No configuration found. Sep 12 17:37:01.445968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:37:01.557823 systemd[1]: Reloading finished in 390 ms. Sep 12 17:37:01.631175 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:37:01.631354 systemd[1]: kubelet.service: Failed with result 'signal'. 
Sep 12 17:37:01.631856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:01.639881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:01.802456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:01.815759 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:37:01.890603 kubelet[2154]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:01.890603 kubelet[2154]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:37:01.890603 kubelet[2154]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:01.891268 kubelet[2154]: I0912 17:37:01.890675 2154 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:37:02.136577 kubelet[2154]: I0912 17:37:02.135290 2154 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:37:02.136577 kubelet[2154]: I0912 17:37:02.135335 2154 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:37:02.136577 kubelet[2154]: I0912 17:37:02.135637 2154 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:37:02.166242 kubelet[2154]: I0912 17:37:02.166197 2154 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:37:02.168285 kubelet[2154]: E0912 17:37:02.168241 2154 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://159.223.198.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:37:02.183725 kubelet[2154]: E0912 17:37:02.183675 2154 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:37:02.184194 kubelet[2154]: I0912 17:37:02.183928 2154 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:37:02.189044 kubelet[2154]: I0912 17:37:02.189007 2154 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:37:02.192955 kubelet[2154]: I0912 17:37:02.192850 2154 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:37:02.196519 kubelet[2154]: I0912 17:37:02.193134 2154 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-9-b554e4f7b0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:37:02.196519 kubelet[2154]: I0912 17:37:02.196259 2154 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:37:02.196519 kubelet[2154]: I0912 17:37:02.196274 2154 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:37:02.196519 kubelet[2154]: I0912 17:37:02.196434 2154 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:02.199402 kubelet[2154]: I0912 17:37:02.199361 2154 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:37:02.199587 kubelet[2154]: I0912 17:37:02.199574 2154 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:37:02.199885 kubelet[2154]: I0912 17:37:02.199678 2154 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:37:02.199885 kubelet[2154]: I0912 17:37:02.199698 2154 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:37:02.214288 kubelet[2154]: E0912 17:37:02.214224 2154 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://159.223.198.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-9-b554e4f7b0&limit=500&resourceVersion=0\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:37:02.217987 kubelet[2154]: I0912 17:37:02.217944 2154 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:37:02.219172 kubelet[2154]: I0912 17:37:02.219019 2154 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:37:02.220358 kubelet[2154]: E0912 17:37:02.219865 2154 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://159.223.198.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:37:02.220637 kubelet[2154]: W0912 17:37:02.220612 2154 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:37:02.225280 kubelet[2154]: I0912 17:37:02.225248 2154 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:37:02.225593 kubelet[2154]: I0912 17:37:02.225570 2154 server.go:1289] "Started kubelet" Sep 12 17:37:02.228444 kubelet[2154]: I0912 17:37:02.228401 2154 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:37:02.238114 kubelet[2154]: I0912 17:37:02.237927 2154 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:37:02.241172 kubelet[2154]: I0912 17:37:02.241133 2154 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:37:02.248011 kubelet[2154]: E0912 17:37:02.236546 2154 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.223.198.129:6443/api/v1/namespaces/default/events\": dial tcp 159.223.198.129:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-9-b554e4f7b0.186499958ff12680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-9-b554e4f7b0,UID:ci-4081.3.6-9-b554e4f7b0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-9-b554e4f7b0,},FirstTimestamp:2025-09-12 17:37:02.225458816 +0000 UTC m=+0.400620342,LastTimestamp:2025-09-12 17:37:02.225458816 +0000 UTC m=+0.400620342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-9-b554e4f7b0,}" Sep 12 17:37:02.248430 kubelet[2154]: I0912 17:37:02.248350 2154 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:37:02.248908 kubelet[2154]: I0912 17:37:02.248876 2154 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:37:02.249620 kubelet[2154]: I0912 17:37:02.249593 2154 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:37:02.254940 kubelet[2154]: I0912 17:37:02.254897 2154 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:37:02.255916 kubelet[2154]: E0912 17:37:02.255624 2154 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" Sep 12 17:37:02.260588 kubelet[2154]: I0912 17:37:02.260458 2154 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:37:02.261342 kubelet[2154]: I0912 17:37:02.260787 2154 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:37:02.262741 kubelet[2154]: E0912 17:37:02.262696 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://159.223.198.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-9-b554e4f7b0?timeout=10s\": dial tcp 159.223.198.129:6443: connect: connection refused" interval="200ms" Sep 12 17:37:02.263704 kubelet[2154]: E0912 17:37:02.263666 2154 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://159.223.198.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:37:02.264346 kubelet[2154]: I0912 17:37:02.264315 2154 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:37:02.264616 kubelet[2154]: I0912 17:37:02.264589 2154 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:37:02.266298 kubelet[2154]: E0912 17:37:02.266270 2154 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:37:02.270405 kubelet[2154]: I0912 17:37:02.270360 2154 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:37:02.288113 kubelet[2154]: I0912 17:37:02.287859 2154 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:37:02.290425 kubelet[2154]: I0912 17:37:02.289955 2154 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:37:02.290425 kubelet[2154]: I0912 17:37:02.289999 2154 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:37:02.290425 kubelet[2154]: I0912 17:37:02.290034 2154 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:37:02.290425 kubelet[2154]: I0912 17:37:02.290047 2154 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:37:02.290425 kubelet[2154]: E0912 17:37:02.290137 2154 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:37:02.303153 kubelet[2154]: E0912 17:37:02.303003 2154 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://159.223.198.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:37:02.305553 kubelet[2154]: I0912 17:37:02.305517 2154 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:37:02.305700 kubelet[2154]: I0912 17:37:02.305687 2154 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:37:02.305818 kubelet[2154]: I0912 17:37:02.305805 2154 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:02.307458 kubelet[2154]: I0912 17:37:02.307432 2154 policy_none.go:49] "None policy: Start" Sep 12 17:37:02.307901 kubelet[2154]: I0912 17:37:02.307636 2154 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:37:02.307901 kubelet[2154]: I0912 17:37:02.307659 2154 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:37:02.314144 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 12 17:37:02.329374 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:37:02.335013 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:37:02.349945 kubelet[2154]: E0912 17:37:02.349899 2154 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:37:02.350502 kubelet[2154]: I0912 17:37:02.350147 2154 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:37:02.350502 kubelet[2154]: I0912 17:37:02.350166 2154 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:37:02.354501 kubelet[2154]: I0912 17:37:02.353668 2154 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:37:02.354725 kubelet[2154]: E0912 17:37:02.354670 2154 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:37:02.354789 kubelet[2154]: E0912 17:37:02.354755 2154 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-9-b554e4f7b0\" not found" Sep 12 17:37:02.407833 systemd[1]: Created slice kubepods-burstable-pod1e048530c28b1585cb9b82d601a1cdb3.slice - libcontainer container kubepods-burstable-pod1e048530c28b1585cb9b82d601a1cdb3.slice. Sep 12 17:37:02.424188 kubelet[2154]: E0912 17:37:02.424107 2154 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.428238 systemd[1]: Created slice kubepods-burstable-pod6fdf899d6734ef16b62be5232cd1dc9a.slice - libcontainer container kubepods-burstable-pod6fdf899d6734ef16b62be5232cd1dc9a.slice. Sep 12 17:37:02.433424 kubelet[2154]: E0912 17:37:02.433371 2154 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.437028 systemd[1]: Created slice kubepods-burstable-pod9d2a4bacc03fb567b6c639bc8f86099b.slice - libcontainer container kubepods-burstable-pod9d2a4bacc03fb567b6c639bc8f86099b.slice. 
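The "Created slice" lines show the systemd cgroup driver's layout: one slice per QoS class under kubepods.slice, then one slice per pod named kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID escaped to underscores (visible in the kube-proxy pod's slice later in the log). A tiny sketch of that naming convention, assuming the escaping rule seen in these lines:

    // slices.go - maps a pod UID and QoS class to the systemd slice names
    // visible in the "Created slice" log lines above (illustrative sketch).
    package main

    import (
    	"fmt"
    	"strings"
    )

    func podSlice(qos, uid string) string {
    	escaped := strings.ReplaceAll(uid, "-", "_") // systemd-safe UID
    	if qos == "" {
    		// Guaranteed pods sit directly under kubepods.slice.
    		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
    	}
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
    	fmt.Println(podSlice("burstable", "1e048530c28b1585cb9b82d601a1cdb3"))
    	fmt.Println(podSlice("besteffort", "d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d"))
    }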
Sep 12 17:37:02.440386 kubelet[2154]: E0912 17:37:02.440258 2154 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.452704 kubelet[2154]: I0912 17:37:02.452206 2154 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.452704 kubelet[2154]: E0912 17:37:02.452616 2154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.223.198.129:6443/api/v1/nodes\": dial tcp 159.223.198.129:6443: connect: connection refused" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.463684 kubelet[2154]: E0912 17:37:02.463611 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.198.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-9-b554e4f7b0?timeout=10s\": dial tcp 159.223.198.129:6443: connect: connection refused" interval="400ms" Sep 12 17:37:02.562332 kubelet[2154]: I0912 17:37:02.562268 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e048530c28b1585cb9b82d601a1cdb3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-9-b554e4f7b0\" (UID: \"1e048530c28b1585cb9b82d601a1cdb3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.562865 kubelet[2154]: I0912 17:37:02.562589 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e048530c28b1585cb9b82d601a1cdb3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-9-b554e4f7b0\" (UID: \"1e048530c28b1585cb9b82d601a1cdb3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.562865 kubelet[2154]: I0912 17:37:02.562650 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.562865 kubelet[2154]: I0912 17:37:02.562703 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d2a4bacc03fb567b6c639bc8f86099b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-9-b554e4f7b0\" (UID: \"9d2a4bacc03fb567b6c639bc8f86099b\") " pod="kube-system/kube-scheduler-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.562865 kubelet[2154]: I0912 17:37:02.562738 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.562865 kubelet[2154]: I0912 17:37:02.562786 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.563228 kubelet[2154]: I0912 17:37:02.562824 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.563375 kubelet[2154]: I0912 17:37:02.562849 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.563375 kubelet[2154]: I0912 17:37:02.563338 2154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e048530c28b1585cb9b82d601a1cdb3-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-9-b554e4f7b0\" (UID: \"1e048530c28b1585cb9b82d601a1cdb3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.654344 kubelet[2154]: I0912 17:37:02.654301 2154 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.654846 kubelet[2154]: E0912 17:37:02.654771 2154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.223.198.129:6443/api/v1/nodes\": dial tcp 159.223.198.129:6443: connect: connection refused" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:02.725430 kubelet[2154]: E0912 17:37:02.725252 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:02.726880 containerd[1476]: time="2025-09-12T17:37:02.726563928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-9-b554e4f7b0,Uid:1e048530c28b1585cb9b82d601a1cdb3,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:02.728949 systemd-resolved[1323]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Sep 12 17:37:02.734127 kubelet[2154]: E0912 17:37:02.734058 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:02.738312 containerd[1476]: time="2025-09-12T17:37:02.738246571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-9-b554e4f7b0,Uid:6fdf899d6734ef16b62be5232cd1dc9a,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:02.741143 kubelet[2154]: E0912 17:37:02.741099 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:02.741999 containerd[1476]: time="2025-09-12T17:37:02.741927576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-9-b554e4f7b0,Uid:9d2a4bacc03fb567b6c639bc8f86099b,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:02.865062 kubelet[2154]: E0912 17:37:02.865005 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.198.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-9-b554e4f7b0?timeout=10s\": dial tcp 159.223.198.129:6443: connect: connection refused" interval="800ms" Sep 12 17:37:03.021822 kubelet[2154]: E0912 17:37:03.021676 2154 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://159.223.198.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-9-b554e4f7b0&limit=500&resourceVersion=0\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:37:03.056198 kubelet[2154]: I0912 17:37:03.056091 2154 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:03.056828 kubelet[2154]: E0912 17:37:03.056774 2154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.223.198.129:6443/api/v1/nodes\": dial tcp 159.223.198.129:6443: connect: connection refused" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:03.197438 kubelet[2154]: E0912 17:37:03.196535 2154 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://159.223.198.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:37:03.208011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2577223145.mount: Deactivated successfully. 
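While the API server at 159.223.198.129:6443 refuses connections, the node-lease controller's retry interval doubles between attempts: the log shows 200ms, 400ms, 800ms, and later 1.6s. A sketch of that doubling backoff; the ceiling used here is an assumption for illustration, not a value taken from the log:

    // backoff.go - the doubling retry interval visible in the
    // "Failed to ensure lease exists, will retry" lines (sketch).
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	interval := 200 * time.Millisecond
    	const ceiling = 7 * time.Second // assumed cap, not from the log
    	for attempt := 1; attempt <= 5; attempt++ {
    		fmt.Printf("attempt %d: will retry in %s\n", attempt, interval)
    		interval *= 2
    		if interval > ceiling {
    			interval = ceiling
    		}
    	}
    }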
Sep 12 17:37:03.212242 containerd[1476]: time="2025-09-12T17:37:03.212174333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:03.213420 containerd[1476]: time="2025-09-12T17:37:03.213386661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:03.214255 containerd[1476]: time="2025-09-12T17:37:03.214202109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:37:03.214713 containerd[1476]: time="2025-09-12T17:37:03.214623609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:37:03.215094 containerd[1476]: time="2025-09-12T17:37:03.214961993Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:03.215745 containerd[1476]: time="2025-09-12T17:37:03.215701262Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:03.216301 containerd[1476]: time="2025-09-12T17:37:03.216263883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:37:03.219886 containerd[1476]: time="2025-09-12T17:37:03.219757035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:37:03.222033 containerd[1476]: time="2025-09-12T17:37:03.221780493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.741104ms" Sep 12 17:37:03.226335 containerd[1476]: time="2025-09-12T17:37:03.226078671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.421255ms" Sep 12 17:37:03.227921 containerd[1476]: time="2025-09-12T17:37:03.227704737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 489.157972ms" Sep 12 17:37:03.392654 containerd[1476]: time="2025-09-12T17:37:03.391792409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:03.392654 containerd[1476]: time="2025-09-12T17:37:03.391853349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:03.393548 containerd[1476]: time="2025-09-12T17:37:03.393397194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:03.393548 containerd[1476]: time="2025-09-12T17:37:03.393384713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:03.395254 containerd[1476]: time="2025-09-12T17:37:03.394708352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:03.395254 containerd[1476]: time="2025-09-12T17:37:03.394740791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:03.395254 containerd[1476]: time="2025-09-12T17:37:03.394828413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:03.395254 containerd[1476]: time="2025-09-12T17:37:03.394867996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:03.395541 containerd[1476]: time="2025-09-12T17:37:03.395314780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:03.395585 containerd[1476]: time="2025-09-12T17:37:03.395560368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:03.396144 containerd[1476]: time="2025-09-12T17:37:03.396109486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:03.396649 containerd[1476]: time="2025-09-12T17:37:03.396584252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:03.436677 systemd[1]: Started cri-containerd-28773bf9a9ccc2bf87e5222618991636529d96720744c9c82ab4ab531b2401a0.scope - libcontainer container 28773bf9a9ccc2bf87e5222618991636529d96720744c9c82ab4ab531b2401a0. Sep 12 17:37:03.437943 systemd[1]: Started cri-containerd-efc5e3108af1fdfbece5f1725e55a11869fa1084d1aaab659c0c415f57665b5b.scope - libcontainer container efc5e3108af1fdfbece5f1725e55a11869fa1084d1aaab659c0c415f57665b5b. Sep 12 17:37:03.453670 systemd[1]: Started cri-containerd-a9ec904c055feb5760d5da10b8a8971185a89fec90c8d0f95f7789d4e5a06207.scope - libcontainer container a9ec904c055feb5760d5da10b8a8971185a89fec90c8d0f95f7789d4e5a06207. 
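Each sandbox the runc shim starts appears to systemd as a transient scope named cri-containerd-<container-id>.scope, as in the "Started cri-containerd-..." lines above. A trivial sketch of that mapping, using one of the sandbox IDs from the log:

    // scope.go - container ID to transient systemd scope name (sketch).
    package main

    import "fmt"

    func scopeName(containerID string) string {
    	return fmt.Sprintf("cri-containerd-%s.scope", containerID)
    }

    func main() {
    	id := "a9ec904c055feb5760d5da10b8a8971185a89fec90c8d0f95f7789d4e5a06207"
    	fmt.Println(scopeName(id))
    }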
Sep 12 17:37:03.532750 containerd[1476]: time="2025-09-12T17:37:03.532702272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-9-b554e4f7b0,Uid:6fdf899d6734ef16b62be5232cd1dc9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"28773bf9a9ccc2bf87e5222618991636529d96720744c9c82ab4ab531b2401a0\"" Sep 12 17:37:03.536945 kubelet[2154]: E0912 17:37:03.536911 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:03.542041 containerd[1476]: time="2025-09-12T17:37:03.542002884Z" level=info msg="CreateContainer within sandbox \"28773bf9a9ccc2bf87e5222618991636529d96720744c9c82ab4ab531b2401a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:37:03.542459 containerd[1476]: time="2025-09-12T17:37:03.541396505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-9-b554e4f7b0,Uid:1e048530c28b1585cb9b82d601a1cdb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"efc5e3108af1fdfbece5f1725e55a11869fa1084d1aaab659c0c415f57665b5b\"" Sep 12 17:37:03.543172 kubelet[2154]: E0912 17:37:03.543150 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:03.546426 containerd[1476]: time="2025-09-12T17:37:03.546394351Z" level=info msg="CreateContainer within sandbox \"efc5e3108af1fdfbece5f1725e55a11869fa1084d1aaab659c0c415f57665b5b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:37:03.563305 containerd[1476]: time="2025-09-12T17:37:03.563250172Z" level=info msg="CreateContainer within sandbox \"28773bf9a9ccc2bf87e5222618991636529d96720744c9c82ab4ab531b2401a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b61dd4aa94d0a2d15dacceca0e81537f1605a82eee3ad2515e38d33af094c7e\"" Sep 12 17:37:03.564650 containerd[1476]: time="2025-09-12T17:37:03.564612110Z" level=info msg="StartContainer for \"6b61dd4aa94d0a2d15dacceca0e81537f1605a82eee3ad2515e38d33af094c7e\"" Sep 12 17:37:03.566985 containerd[1476]: time="2025-09-12T17:37:03.566910735Z" level=info msg="CreateContainer within sandbox \"efc5e3108af1fdfbece5f1725e55a11869fa1084d1aaab659c0c415f57665b5b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eedce8181a3e89308f5e5437a827984b6992b54f5f8678b8e4763401e53b81fb\"" Sep 12 17:37:03.567695 containerd[1476]: time="2025-09-12T17:37:03.567568434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-9-b554e4f7b0,Uid:9d2a4bacc03fb567b6c639bc8f86099b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9ec904c055feb5760d5da10b8a8971185a89fec90c8d0f95f7789d4e5a06207\"" Sep 12 17:37:03.567968 containerd[1476]: time="2025-09-12T17:37:03.567949141Z" level=info msg="StartContainer for \"eedce8181a3e89308f5e5437a827984b6992b54f5f8678b8e4763401e53b81fb\"" Sep 12 17:37:03.569804 kubelet[2154]: E0912 17:37:03.569760 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:03.573098 containerd[1476]: time="2025-09-12T17:37:03.573000280Z" level=info msg="CreateContainer within sandbox \"a9ec904c055feb5760d5da10b8a8971185a89fec90c8d0f95f7789d4e5a06207\" 
for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:37:03.582843 containerd[1476]: time="2025-09-12T17:37:03.582779377Z" level=info msg="CreateContainer within sandbox \"a9ec904c055feb5760d5da10b8a8971185a89fec90c8d0f95f7789d4e5a06207\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3f40d12513ebba24e03866e2fb27ce223f1895a1abec3af440d52db3fd7a60a\"" Sep 12 17:37:03.583930 containerd[1476]: time="2025-09-12T17:37:03.583813091Z" level=info msg="StartContainer for \"e3f40d12513ebba24e03866e2fb27ce223f1895a1abec3af440d52db3fd7a60a\"" Sep 12 17:37:03.592359 kubelet[2154]: E0912 17:37:03.592233 2154 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://159.223.198.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:37:03.608722 systemd[1]: Started cri-containerd-6b61dd4aa94d0a2d15dacceca0e81537f1605a82eee3ad2515e38d33af094c7e.scope - libcontainer container 6b61dd4aa94d0a2d15dacceca0e81537f1605a82eee3ad2515e38d33af094c7e. Sep 12 17:37:03.624352 systemd[1]: Started cri-containerd-eedce8181a3e89308f5e5437a827984b6992b54f5f8678b8e4763401e53b81fb.scope - libcontainer container eedce8181a3e89308f5e5437a827984b6992b54f5f8678b8e4763401e53b81fb. Sep 12 17:37:03.657764 systemd[1]: Started cri-containerd-e3f40d12513ebba24e03866e2fb27ce223f1895a1abec3af440d52db3fd7a60a.scope - libcontainer container e3f40d12513ebba24e03866e2fb27ce223f1895a1abec3af440d52db3fd7a60a. Sep 12 17:37:03.666608 kubelet[2154]: E0912 17:37:03.666556 2154 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.198.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-9-b554e4f7b0?timeout=10s\": dial tcp 159.223.198.129:6443: connect: connection refused" interval="1.6s" Sep 12 17:37:03.705124 containerd[1476]: time="2025-09-12T17:37:03.704887212Z" level=info msg="StartContainer for \"6b61dd4aa94d0a2d15dacceca0e81537f1605a82eee3ad2515e38d33af094c7e\" returns successfully" Sep 12 17:37:03.749547 containerd[1476]: time="2025-09-12T17:37:03.749271954Z" level=info msg="StartContainer for \"eedce8181a3e89308f5e5437a827984b6992b54f5f8678b8e4763401e53b81fb\" returns successfully" Sep 12 17:37:03.765130 containerd[1476]: time="2025-09-12T17:37:03.764982698Z" level=info msg="StartContainer for \"e3f40d12513ebba24e03866e2fb27ce223f1895a1abec3af440d52db3fd7a60a\" returns successfully" Sep 12 17:37:03.825236 kubelet[2154]: E0912 17:37:03.825171 2154 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://159.223.198.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.198.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:37:03.858549 kubelet[2154]: I0912 17:37:03.858503 2154 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:03.858905 kubelet[2154]: E0912 17:37:03.858828 2154 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.223.198.129:6443/api/v1/nodes\": dial tcp 159.223.198.129:6443: connect: connection refused" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:04.314004 kubelet[2154]: E0912 17:37:04.313737 2154 kubelet.go:3305] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:04.314004 kubelet[2154]: E0912 17:37:04.313869 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:04.318550 kubelet[2154]: E0912 17:37:04.318309 2154 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:04.318550 kubelet[2154]: E0912 17:37:04.318442 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:04.320908 kubelet[2154]: E0912 17:37:04.320827 2154 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:04.321286 kubelet[2154]: E0912 17:37:04.321212 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:05.325361 kubelet[2154]: E0912 17:37:05.325069 2154 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:05.325361 kubelet[2154]: E0912 17:37:05.325239 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:05.326422 kubelet[2154]: E0912 17:37:05.326205 2154 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:05.326422 kubelet[2154]: E0912 17:37:05.326357 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:05.460379 kubelet[2154]: I0912 17:37:05.459976 2154 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.327503 kubelet[2154]: E0912 17:37:06.327277 2154 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.327503 kubelet[2154]: E0912 17:37:06.327419 2154 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:06.484970 kubelet[2154]: E0912 17:37:06.484880 2154 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-9-b554e4f7b0\" not found" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.616721 kubelet[2154]: I0912 17:37:06.616250 2154 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.616721 kubelet[2154]: E0912 17:37:06.616288 2154 kubelet_node_status.go:548] "Error updating node status, will retry" 
err="error getting node \"ci-4081.3.6-9-b554e4f7b0\": node \"ci-4081.3.6-9-b554e4f7b0\" not found" Sep 12 17:37:06.657734 kubelet[2154]: I0912 17:37:06.657687 2154 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.683752 kubelet[2154]: E0912 17:37:06.683705 2154 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-9-b554e4f7b0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.683752 kubelet[2154]: I0912 17:37:06.683745 2154 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.695501 kubelet[2154]: E0912 17:37:06.695000 2154 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.695501 kubelet[2154]: I0912 17:37:06.695054 2154 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:06.700734 kubelet[2154]: E0912 17:37:06.700685 2154 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-9-b554e4f7b0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:07.218130 kubelet[2154]: I0912 17:37:07.217705 2154 apiserver.go:52] "Watching apiserver" Sep 12 17:37:07.261355 kubelet[2154]: I0912 17:37:07.261284 2154 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:37:08.781145 systemd[1]: Reloading requested from client PID 2441 ('systemctl') (unit session-7.scope)... Sep 12 17:37:08.781166 systemd[1]: Reloading... Sep 12 17:37:08.891648 zram_generator::config[2486]: No configuration found. Sep 12 17:37:09.020519 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:37:09.118921 systemd[1]: Reloading finished in 337 ms. Sep 12 17:37:09.176510 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:09.191282 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:37:09.193002 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:09.200774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:37:09.345183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:37:09.361977 (kubelet)[2530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:37:09.431603 kubelet[2530]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:09.431603 kubelet[2530]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:37:09.431603 kubelet[2530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:37:09.431603 kubelet[2530]: I0912 17:37:09.429903 2530 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:37:09.444714 kubelet[2530]: I0912 17:37:09.444666 2530 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:37:09.444930 kubelet[2530]: I0912 17:37:09.444914 2530 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:37:09.445374 kubelet[2530]: I0912 17:37:09.445351 2530 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:37:09.447442 kubelet[2530]: I0912 17:37:09.447420 2530 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:37:09.453020 kubelet[2530]: I0912 17:37:09.452990 2530 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:37:09.457787 kubelet[2530]: E0912 17:37:09.457745 2530 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:37:09.457944 kubelet[2530]: I0912 17:37:09.457933 2530 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:37:09.461106 kubelet[2530]: I0912 17:37:09.461074 2530 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:37:09.461547 kubelet[2530]: I0912 17:37:09.461373 2530 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:37:09.461636 kubelet[2530]: I0912 17:37:09.461398 2530 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-9-b554e4f7b0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:37:09.461723 kubelet[2530]: I0912 17:37:09.461644 2530 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:37:09.461723 kubelet[2530]: I0912 17:37:09.461656 2530 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:37:09.462619 kubelet[2530]: I0912 17:37:09.462596 2530 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:09.462815 kubelet[2530]: I0912 17:37:09.462802 2530 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:37:09.462849 kubelet[2530]: I0912 17:37:09.462821 2530 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:37:09.462849 kubelet[2530]: I0912 17:37:09.462845 2530 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:37:09.462898 kubelet[2530]: I0912 17:37:09.462861 2530 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:37:09.467961 kubelet[2530]: I0912 17:37:09.467896 2530 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:37:09.468771 kubelet[2530]: I0912 17:37:09.468640 2530 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:37:09.472501 kubelet[2530]: I0912 17:37:09.471942 2530 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:37:09.472501 kubelet[2530]: I0912 17:37:09.471984 2530 server.go:1289] "Started kubelet" Sep 12 17:37:09.475767 kubelet[2530]: I0912 17:37:09.475733 2530 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:37:09.495533 kubelet[2530]: I0912 
17:37:09.495431 2530 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:37:09.498684 kubelet[2530]: I0912 17:37:09.498667 2530 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:37:09.504663 kubelet[2530]: I0912 17:37:09.504328 2530 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:37:09.504663 kubelet[2530]: I0912 17:37:09.504519 2530 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:37:09.504951 kubelet[2530]: I0912 17:37:09.504937 2530 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:37:09.506494 kubelet[2530]: I0912 17:37:09.505233 2530 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:37:09.508022 kubelet[2530]: I0912 17:37:09.507989 2530 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:37:09.508137 kubelet[2530]: I0912 17:37:09.508106 2530 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:37:09.508418 kubelet[2530]: I0912 17:37:09.508400 2530 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:37:09.508918 kubelet[2530]: I0912 17:37:09.508897 2530 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:37:09.511579 kubelet[2530]: I0912 17:37:09.509875 2530 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:37:09.511958 kubelet[2530]: E0912 17:37:09.510088 2530 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:37:09.512310 kubelet[2530]: I0912 17:37:09.511526 2530 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:37:09.515129 kubelet[2530]: I0912 17:37:09.515095 2530 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:37:09.515129 kubelet[2530]: I0912 17:37:09.515126 2530 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:37:09.515248 kubelet[2530]: I0912 17:37:09.515147 2530 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
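The dynamic_serving_content controller above watches the serving pair /var/lib/kubelet/pki/kubelet.crt and kubelet.key and reloads it on change. A minimal sketch of the load step using the standard library; the paths are the ones from the log, and the program only reports whether the pair parses:

    // certs.go - loads the kubelet serving certificate pair the
    // dynamic_serving_content controller watches (sketch).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"os"
    )

    func main() {
    	certFile := "/var/lib/kubelet/pki/kubelet.crt"
    	keyFile := "/var/lib/kubelet/pki/kubelet.key"
    	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
    		fmt.Fprintf(os.Stderr, "serving cert pair not usable: %v\n", err)
    		os.Exit(1)
    	}
    	fmt.Println("serving cert pair loaded")
    }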
Sep 12 17:37:09.515352 kubelet[2530]: I0912 17:37:09.515319 2530 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:37:09.515512 kubelet[2530]: E0912 17:37:09.515492 2530 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:37:09.574204 kubelet[2530]: I0912 17:37:09.574164 2530 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:37:09.574204 kubelet[2530]: I0912 17:37:09.574185 2530 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:37:09.574204 kubelet[2530]: I0912 17:37:09.574207 2530 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:37:09.574395 kubelet[2530]: I0912 17:37:09.574353 2530 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:37:09.574395 kubelet[2530]: I0912 17:37:09.574362 2530 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:37:09.574395 kubelet[2530]: I0912 17:37:09.574378 2530 policy_none.go:49] "None policy: Start" Sep 12 17:37:09.574395 kubelet[2530]: I0912 17:37:09.574388 2530 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:37:09.574395 kubelet[2530]: I0912 17:37:09.574398 2530 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:37:09.574758 kubelet[2530]: I0912 17:37:09.574743 2530 state_mem.go:75] "Updated machine memory state" Sep 12 17:37:09.579770 kubelet[2530]: E0912 17:37:09.579732 2530 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:37:09.581853 kubelet[2530]: I0912 17:37:09.580820 2530 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:37:09.581853 kubelet[2530]: I0912 17:37:09.580838 2530 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:37:09.581853 kubelet[2530]: I0912 17:37:09.581150 2530 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:37:09.587435 kubelet[2530]: E0912 17:37:09.587389 2530 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:37:09.616736 kubelet[2530]: I0912 17:37:09.616685 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.617530 kubelet[2530]: I0912 17:37:09.617394 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.617870 kubelet[2530]: I0912 17:37:09.617845 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.626880 kubelet[2530]: I0912 17:37:09.626832 2530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:37:09.627162 kubelet[2530]: I0912 17:37:09.627094 2530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:37:09.627224 kubelet[2530]: I0912 17:37:09.627211 2530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:37:09.687949 kubelet[2530]: I0912 17:37:09.687580 2530 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.697382 kubelet[2530]: I0912 17:37:09.697293 2530 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.697382 kubelet[2530]: I0912 17:37:09.697378 2530 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.808871 kubelet[2530]: I0912 17:37:09.808781 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e048530c28b1585cb9b82d601a1cdb3-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-9-b554e4f7b0\" (UID: \"1e048530c28b1585cb9b82d601a1cdb3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.808871 kubelet[2530]: I0912 17:37:09.808846 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e048530c28b1585cb9b82d601a1cdb3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-9-b554e4f7b0\" (UID: \"1e048530c28b1585cb9b82d601a1cdb3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.808871 kubelet[2530]: I0912 17:37:09.808870 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.809111 kubelet[2530]: I0912 17:37:09.808941 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.809111 kubelet[2530]: I0912 17:37:09.808959 2530 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.809111 kubelet[2530]: I0912 17:37:09.809007 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.809111 kubelet[2530]: I0912 17:37:09.809024 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d2a4bacc03fb567b6c639bc8f86099b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-9-b554e4f7b0\" (UID: \"9d2a4bacc03fb567b6c639bc8f86099b\") " pod="kube-system/kube-scheduler-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.809302 kubelet[2530]: I0912 17:37:09.809114 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e048530c28b1585cb9b82d601a1cdb3-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-9-b554e4f7b0\" (UID: \"1e048530c28b1585cb9b82d601a1cdb3\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.809302 kubelet[2530]: I0912 17:37:09.809131 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fdf899d6734ef16b62be5232cd1dc9a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-9-b554e4f7b0\" (UID: \"6fdf899d6734ef16b62be5232cd1dc9a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:09.929067 kubelet[2530]: E0912 17:37:09.928092 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:09.930056 kubelet[2530]: E0912 17:37:09.930001 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:09.930702 kubelet[2530]: E0912 17:37:09.930598 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:10.473802 kubelet[2530]: I0912 17:37:10.473763 2530 apiserver.go:52] "Watching apiserver" Sep 12 17:37:10.508581 kubelet[2530]: I0912 17:37:10.508527 2530 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:37:10.554550 kubelet[2530]: E0912 17:37:10.554028 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:10.555499 kubelet[2530]: I0912 17:37:10.555464 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:10.557507 kubelet[2530]: 
I0912 17:37:10.557315 2530 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:10.570635 kubelet[2530]: I0912 17:37:10.570606 2530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:37:10.570837 kubelet[2530]: E0912 17:37:10.570822 2530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-9-b554e4f7b0\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:10.571093 kubelet[2530]: E0912 17:37:10.571075 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:10.572938 kubelet[2530]: I0912 17:37:10.572679 2530 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:37:10.572938 kubelet[2530]: E0912 17:37:10.572725 2530 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-9-b554e4f7b0\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:10.572938 kubelet[2530]: E0912 17:37:10.572866 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:10.600009 kubelet[2530]: I0912 17:37:10.599953 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-9-b554e4f7b0" podStartSLOduration=1.59993501 podStartE2EDuration="1.59993501s" podCreationTimestamp="2025-09-12 17:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:10.588082126 +0000 UTC m=+1.217760322" watchObservedRunningTime="2025-09-12 17:37:10.59993501 +0000 UTC m=+1.229613259" Sep 12 17:37:10.612660 kubelet[2530]: I0912 17:37:10.612426 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-9-b554e4f7b0" podStartSLOduration=1.612382191 podStartE2EDuration="1.612382191s" podCreationTimestamp="2025-09-12 17:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:10.600480975 +0000 UTC m=+1.230159149" watchObservedRunningTime="2025-09-12 17:37:10.612382191 +0000 UTC m=+1.242060378" Sep 12 17:37:10.612660 kubelet[2530]: I0912 17:37:10.612583 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-9-b554e4f7b0" podStartSLOduration=1.612576252 podStartE2EDuration="1.612576252s" podCreationTimestamp="2025-09-12 17:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:10.610180429 +0000 UTC m=+1.239858624" watchObservedRunningTime="2025-09-12 17:37:10.612576252 +0000 UTC m=+1.242254444" Sep 12 17:37:11.556898 kubelet[2530]: E0912 17:37:11.556740 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Sep 12 17:37:11.556898 kubelet[2530]: E0912 17:37:11.556754 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:12.803537 kubelet[2530]: E0912 17:37:12.802366 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:13.910299 kubelet[2530]: I0912 17:37:13.910220 2530 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:37:13.912668 containerd[1476]: time="2025-09-12T17:37:13.912615029Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:37:13.914382 kubelet[2530]: I0912 17:37:13.913419 2530 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:37:14.875200 systemd[1]: Created slice kubepods-besteffort-podd7d2c7c2_9fb6_4961_8941_9f3988dc5d0d.slice - libcontainer container kubepods-besteffort-podd7d2c7c2_9fb6_4961_8941_9f3988dc5d0d.slice. Sep 12 17:37:14.944795 kubelet[2530]: I0912 17:37:14.944753 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d-xtables-lock\") pod \"kube-proxy-pf6jk\" (UID: \"d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d\") " pod="kube-system/kube-proxy-pf6jk" Sep 12 17:37:14.946001 kubelet[2530]: I0912 17:37:14.945765 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d-kube-proxy\") pod \"kube-proxy-pf6jk\" (UID: \"d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d\") " pod="kube-system/kube-proxy-pf6jk" Sep 12 17:37:14.946001 kubelet[2530]: I0912 17:37:14.945807 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d-lib-modules\") pod \"kube-proxy-pf6jk\" (UID: \"d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d\") " pod="kube-system/kube-proxy-pf6jk" Sep 12 17:37:14.946001 kubelet[2530]: I0912 17:37:14.945829 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2sjx\" (UniqueName: \"kubernetes.io/projected/d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d-kube-api-access-n2sjx\") pod \"kube-proxy-pf6jk\" (UID: \"d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d\") " pod="kube-system/kube-proxy-pf6jk" Sep 12 17:37:15.029077 systemd-timesyncd[1338]: Contacted time server 74.208.25.46:123 (2.flatcar.pool.ntp.org). Sep 12 17:37:15.029171 systemd-timesyncd[1338]: Initial clock synchronization to Fri 2025-09-12 17:37:14.824500 UTC. Sep 12 17:37:15.146071 systemd[1]: Created slice kubepods-besteffort-podaacdd2ab_240a_49f5_ac30_ba7cb5cf2dd3.slice - libcontainer container kubepods-besteffort-podaacdd2ab_240a_49f5_ac30_ba7cb5cf2dd3.slice. 
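[Editor's note] The recurring dns.go:153 errors above are kubelet complaining that the droplet's resolv.conf carries more nameserver entries than the three Kubernetes permits in a pod's resolv.conf; kubelet keeps the first three entries verbatim and warns about the rest, which is why the applied line still repeats 67.207.67.2. A minimal sketch of that truncation (not kubelet's actual implementation):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Kubernetes allows at most three nameservers in a pod's resolv.conf;
    // kubelet keeps the first three entries and warns about the overflow.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // No deduplication happens before the cut, which is why the
            // applied line in the log still repeats 67.207.67.2.
            fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }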
Sep 12 17:37:15.187261 kubelet[2530]: E0912 17:37:15.186839 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:15.188239 containerd[1476]: time="2025-09-12T17:37:15.187822488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pf6jk,Uid:d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:15.217869 containerd[1476]: time="2025-09-12T17:37:15.217742255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:15.217869 containerd[1476]: time="2025-09-12T17:37:15.217803318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:15.217869 containerd[1476]: time="2025-09-12T17:37:15.217823754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:15.218437 containerd[1476]: time="2025-09-12T17:37:15.218218992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:15.246575 kubelet[2530]: I0912 17:37:15.246430 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aacdd2ab-240a-49f5-ac30-ba7cb5cf2dd3-var-lib-calico\") pod \"tigera-operator-755d956888-n2m7d\" (UID: \"aacdd2ab-240a-49f5-ac30-ba7cb5cf2dd3\") " pod="tigera-operator/tigera-operator-755d956888-n2m7d" Sep 12 17:37:15.246575 kubelet[2530]: I0912 17:37:15.246497 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vj6\" (UniqueName: \"kubernetes.io/projected/aacdd2ab-240a-49f5-ac30-ba7cb5cf2dd3-kube-api-access-s8vj6\") pod \"tigera-operator-755d956888-n2m7d\" (UID: \"aacdd2ab-240a-49f5-ac30-ba7cb5cf2dd3\") " pod="tigera-operator/tigera-operator-755d956888-n2m7d" Sep 12 17:37:15.249750 systemd[1]: Started cri-containerd-37c9a56ad853587b76429b1e11ec49471526d8f46b315343a596ecf93ba7bd3c.scope - libcontainer container 37c9a56ad853587b76429b1e11ec49471526d8f46b315343a596ecf93ba7bd3c. 
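[Editor's note] Each operationExecutor.VerifyControllerAttachedVolume entry above corresponds to one volume in the kube-proxy-pf6jk pod spec. A sketch of those volumes as client Go types; the host paths (/run/xtables.lock, /lib/modules) are the usual kubeadm defaults and are assumptions here, since the log only shows the volume names:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        fileOrCreate := corev1.HostPathFileOrCreate
        // Mirrors the VerifyControllerAttachedVolume entries for kube-proxy-pf6jk:
        // two host paths, one ConfigMap projection (the kube-api-access-n2sjx
        // token volume is injected separately as a projected volume).
        vols := []corev1.Volume{
            {Name: "xtables-lock", VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock", Type: &fileOrCreate},
            }},
            {Name: "lib-modules", VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
            }},
            {Name: "kube-proxy", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
                },
            }},
        }
        for _, v := range vols {
            fmt.Println(v.Name)
        }
    }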
Sep 12 17:37:15.276160 containerd[1476]: time="2025-09-12T17:37:15.276116507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pf6jk,Uid:d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"37c9a56ad853587b76429b1e11ec49471526d8f46b315343a596ecf93ba7bd3c\"" Sep 12 17:37:15.277781 kubelet[2530]: E0912 17:37:15.277749 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:15.286606 containerd[1476]: time="2025-09-12T17:37:15.286302549Z" level=info msg="CreateContainer within sandbox \"37c9a56ad853587b76429b1e11ec49471526d8f46b315343a596ecf93ba7bd3c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:37:15.303826 containerd[1476]: time="2025-09-12T17:37:15.303768573Z" level=info msg="CreateContainer within sandbox \"37c9a56ad853587b76429b1e11ec49471526d8f46b315343a596ecf93ba7bd3c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14de333fb633b27db37f174dcad34c809e0b96bac0e32a9912791f4d166c9eb8\"" Sep 12 17:37:15.305386 containerd[1476]: time="2025-09-12T17:37:15.305353375Z" level=info msg="StartContainer for \"14de333fb633b27db37f174dcad34c809e0b96bac0e32a9912791f4d166c9eb8\"" Sep 12 17:37:15.346761 systemd[1]: Started cri-containerd-14de333fb633b27db37f174dcad34c809e0b96bac0e32a9912791f4d166c9eb8.scope - libcontainer container 14de333fb633b27db37f174dcad34c809e0b96bac0e32a9912791f4d166c9eb8. Sep 12 17:37:15.382362 containerd[1476]: time="2025-09-12T17:37:15.382315730Z" level=info msg="StartContainer for \"14de333fb633b27db37f174dcad34c809e0b96bac0e32a9912791f4d166c9eb8\" returns successfully" Sep 12 17:37:15.450154 containerd[1476]: time="2025-09-12T17:37:15.449798790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-n2m7d,Uid:aacdd2ab-240a-49f5-ac30-ba7cb5cf2dd3,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:37:15.481126 containerd[1476]: time="2025-09-12T17:37:15.474837954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:15.481126 containerd[1476]: time="2025-09-12T17:37:15.474932918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:15.481126 containerd[1476]: time="2025-09-12T17:37:15.474948696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:15.481126 containerd[1476]: time="2025-09-12T17:37:15.475107203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:15.503709 systemd[1]: Started cri-containerd-870f51f993f4d257507305268f007ddb62d41929b9ec4bf2b72f7430b88a9b04.scope - libcontainer container 870f51f993f4d257507305268f007ddb62d41929b9ec4bf2b72f7430b88a9b04. 
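[Editor's note] The sandbox id returned by RunPodSandbox is what CreateContainer and StartContainer are then issued against; the three log entries above are containerd answering exactly those CRI calls from kubelet. A hedged sketch of the same sequence against containerd's CRI socket using the cri-api client (the kube-proxy image tag is a placeholder, since the log never prints it):

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: returns the sandbox id (37c9a56a... above).
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-pf6jk",
                Namespace: "kube-system",
                Uid:       "d7d2c7c2-9fb6-4961-8941-9f3988dc5d0d",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within that sandbox: returns the container id.
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                // Placeholder tag: the log does not show the image version.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:vX.Y.Z"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer: the "StartContainer ... returns successfully" entry.
        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
            log.Fatal(err)
        }
        log.Printf("started %s in sandbox %s", cc.ContainerId, sb.PodSandboxId)
    }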
Sep 12 17:37:15.569063 kubelet[2530]: E0912 17:37:15.569032 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:15.577217 containerd[1476]: time="2025-09-12T17:37:15.577135585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-n2m7d,Uid:aacdd2ab-240a-49f5-ac30-ba7cb5cf2dd3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"870f51f993f4d257507305268f007ddb62d41929b9ec4bf2b72f7430b88a9b04\"" Sep 12 17:37:15.580705 containerd[1476]: time="2025-09-12T17:37:15.580663291Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:37:17.483248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197559800.mount: Deactivated successfully. Sep 12 17:37:17.541382 kubelet[2530]: E0912 17:37:17.541343 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:17.561870 kubelet[2530]: I0912 17:37:17.561255 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pf6jk" podStartSLOduration=3.561231059 podStartE2EDuration="3.561231059s" podCreationTimestamp="2025-09-12 17:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:15.591130524 +0000 UTC m=+6.220808723" watchObservedRunningTime="2025-09-12 17:37:17.561231059 +0000 UTC m=+8.190909248" Sep 12 17:37:17.573749 kubelet[2530]: E0912 17:37:17.573705 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:17.677582 kubelet[2530]: E0912 17:37:17.677549 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:18.576696 kubelet[2530]: E0912 17:37:18.576360 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:18.828529 containerd[1476]: time="2025-09-12T17:37:18.828363758Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:18.829875 containerd[1476]: time="2025-09-12T17:37:18.829588550Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:37:18.830395 containerd[1476]: time="2025-09-12T17:37:18.830368704Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:18.832715 containerd[1476]: time="2025-09-12T17:37:18.832686425Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:18.833706 containerd[1476]: time="2025-09-12T17:37:18.833675231Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id 
\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.252963039s" Sep 12 17:37:18.833916 containerd[1476]: time="2025-09-12T17:37:18.833806422Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:37:18.839028 containerd[1476]: time="2025-09-12T17:37:18.838927130Z" level=info msg="CreateContainer within sandbox \"870f51f993f4d257507305268f007ddb62d41929b9ec4bf2b72f7430b88a9b04\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:37:18.853956 containerd[1476]: time="2025-09-12T17:37:18.853821761Z" level=info msg="CreateContainer within sandbox \"870f51f993f4d257507305268f007ddb62d41929b9ec4bf2b72f7430b88a9b04\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bc1985ab27bd79e80e60286b32035b8d1f60733ca782270259585768f582a5d4\"" Sep 12 17:37:18.854738 containerd[1476]: time="2025-09-12T17:37:18.854653935Z" level=info msg="StartContainer for \"bc1985ab27bd79e80e60286b32035b8d1f60733ca782270259585768f582a5d4\"" Sep 12 17:37:18.901710 systemd[1]: Started cri-containerd-bc1985ab27bd79e80e60286b32035b8d1f60733ca782270259585768f582a5d4.scope - libcontainer container bc1985ab27bd79e80e60286b32035b8d1f60733ca782270259585768f582a5d4. Sep 12 17:37:18.929724 containerd[1476]: time="2025-09-12T17:37:18.929671789Z" level=info msg="StartContainer for \"bc1985ab27bd79e80e60286b32035b8d1f60733ca782270259585768f582a5d4\" returns successfully" Sep 12 17:37:22.651557 update_engine[1448]: I20250912 17:37:22.650712 1448 update_attempter.cc:509] Updating boot flags... Sep 12 17:37:22.706638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2902) Sep 12 17:37:22.829901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2903) Sep 12 17:37:22.879503 kubelet[2530]: E0912 17:37:22.878393 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:22.919593 kubelet[2530]: I0912 17:37:22.919299 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-n2m7d" podStartSLOduration=4.663864985 podStartE2EDuration="7.919271604s" podCreationTimestamp="2025-09-12 17:37:15 +0000 UTC" firstStartedPulling="2025-09-12 17:37:15.579702926 +0000 UTC m=+6.209381116" lastFinishedPulling="2025-09-12 17:37:18.835109559 +0000 UTC m=+9.464787735" observedRunningTime="2025-09-12 17:37:19.594064451 +0000 UTC m=+10.223742642" watchObservedRunningTime="2025-09-12 17:37:22.919271604 +0000 UTC m=+13.548949800" Sep 12 17:37:23.599426 kubelet[2530]: E0912 17:37:23.598873 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:23.716132 sudo[1653]: pam_unix(sudo:session): session closed for user root Sep 12 17:37:23.723941 sshd[1650]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:23.729332 systemd[1]: sshd@6-159.223.198.129:22-147.75.109.163:60744.service: Deactivated successfully. 
Sep 12 17:37:23.733571 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:37:23.734537 systemd[1]: session-7.scope: Consumed 7.297s CPU time, 144.6M memory peak, 0B memory swap peak. Sep 12 17:37:23.736563 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:37:23.739132 systemd-logind[1447]: Removed session 7. Sep 12 17:37:27.680291 systemd[1]: Created slice kubepods-besteffort-pod68fface4_585d_4567_a2f9_d2ddc879bab6.slice - libcontainer container kubepods-besteffort-pod68fface4_585d_4567_a2f9_d2ddc879bab6.slice. Sep 12 17:37:27.740961 kubelet[2530]: I0912 17:37:27.740664 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4jb\" (UniqueName: \"kubernetes.io/projected/68fface4-585d-4567-a2f9-d2ddc879bab6-kube-api-access-4n4jb\") pod \"calico-typha-f99d69f9d-76nl9\" (UID: \"68fface4-585d-4567-a2f9-d2ddc879bab6\") " pod="calico-system/calico-typha-f99d69f9d-76nl9" Sep 12 17:37:27.740961 kubelet[2530]: I0912 17:37:27.740769 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68fface4-585d-4567-a2f9-d2ddc879bab6-tigera-ca-bundle\") pod \"calico-typha-f99d69f9d-76nl9\" (UID: \"68fface4-585d-4567-a2f9-d2ddc879bab6\") " pod="calico-system/calico-typha-f99d69f9d-76nl9" Sep 12 17:37:27.740961 kubelet[2530]: I0912 17:37:27.740803 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/68fface4-585d-4567-a2f9-d2ddc879bab6-typha-certs\") pod \"calico-typha-f99d69f9d-76nl9\" (UID: \"68fface4-585d-4567-a2f9-d2ddc879bab6\") " pod="calico-system/calico-typha-f99d69f9d-76nl9" Sep 12 17:37:27.999960 systemd[1]: Created slice kubepods-besteffort-poddf553e0a_7c5f_43ce_81b0_a8bc33866f8b.slice - libcontainer container kubepods-besteffort-poddf553e0a_7c5f_43ce_81b0_a8bc33866f8b.slice. 
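[Editor's note] The systemd "Created slice" entries encode the pod UID directly in the cgroup name: under kubelet's systemd cgroup driver a BestEffort pod lands in kubepods-besteffort-pod<uid>.slice, with the UID's dashes swapped for underscores so the name stays a valid systemd unit. A small helper reproducing the leaf name seen above for calico-typha's pod:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice derives the systemd slice name for a pod under the given QoS
    // parent, as in the "Created slice" log entries: dashes in the pod UID
    // become underscores.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("besteffort", "68fface4-585d-4567-a2f9-d2ddc879bab6"))
        // kubepods-besteffort-pod68fface4_585d_4567_a2f9_d2ddc879bab6.slice
    }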
Sep 12 17:37:28.003397 kubelet[2530]: E0912 17:37:28.003364 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:28.005328 containerd[1476]: time="2025-09-12T17:37:28.005162417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f99d69f9d-76nl9,Uid:68fface4-585d-4567-a2f9-d2ddc879bab6,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:28.044886 kubelet[2530]: I0912 17:37:28.044501 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-lib-modules\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.044886 kubelet[2530]: I0912 17:37:28.044544 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-node-certs\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.044886 kubelet[2530]: I0912 17:37:28.044564 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-policysync\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.044886 kubelet[2530]: I0912 17:37:28.044597 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-var-lib-calico\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.044886 kubelet[2530]: I0912 17:37:28.044615 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brvl7\" (UniqueName: \"kubernetes.io/projected/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-kube-api-access-brvl7\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.045261 kubelet[2530]: I0912 17:37:28.044633 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-cni-log-dir\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.045261 kubelet[2530]: I0912 17:37:28.044652 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-xtables-lock\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.045261 kubelet[2530]: I0912 17:37:28.044669 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-flexvol-driver-host\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " 
pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.045261 kubelet[2530]: I0912 17:37:28.044689 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-var-run-calico\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.045261 kubelet[2530]: I0912 17:37:28.044707 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-cni-net-dir\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.045500 kubelet[2530]: I0912 17:37:28.044724 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-cni-bin-dir\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.045500 kubelet[2530]: I0912 17:37:28.044740 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df553e0a-7c5f-43ce-81b0-a8bc33866f8b-tigera-ca-bundle\") pod \"calico-node-f6jlw\" (UID: \"df553e0a-7c5f-43ce-81b0-a8bc33866f8b\") " pod="calico-system/calico-node-f6jlw" Sep 12 17:37:28.056252 containerd[1476]: time="2025-09-12T17:37:28.056098411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:28.056252 containerd[1476]: time="2025-09-12T17:37:28.056172274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:28.056252 containerd[1476]: time="2025-09-12T17:37:28.056208397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:28.056649 containerd[1476]: time="2025-09-12T17:37:28.056340502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:28.098689 systemd[1]: Started cri-containerd-355864089b7b89f965dba8f97428a5852bfabd46fb31e89bf4f270f069d78cd2.scope - libcontainer container 355864089b7b89f965dba8f97428a5852bfabd46fb31e89bf4f270f069d78cd2. 
Sep 12 17:37:28.157670 containerd[1476]: time="2025-09-12T17:37:28.157633940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f99d69f9d-76nl9,Uid:68fface4-585d-4567-a2f9-d2ddc879bab6,Namespace:calico-system,Attempt:0,} returns sandbox id \"355864089b7b89f965dba8f97428a5852bfabd46fb31e89bf4f270f069d78cd2\"" Sep 12 17:37:28.160040 kubelet[2530]: E0912 17:37:28.160008 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.160040 kubelet[2530]: W0912 17:37:28.160033 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.161289 kubelet[2530]: E0912 17:37:28.161259 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.162527 kubelet[2530]: E0912 17:37:28.162377 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:28.163911 containerd[1476]: time="2025-09-12T17:37:28.163879544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:37:28.179855 kubelet[2530]: E0912 17:37:28.179690 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.179855 kubelet[2530]: W0912 17:37:28.179712 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.179855 kubelet[2530]: E0912 17:37:28.179736 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.306929 containerd[1476]: time="2025-09-12T17:37:28.306809712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f6jlw,Uid:df553e0a-7c5f-43ce-81b0-a8bc33866f8b,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:28.328379 kubelet[2530]: E0912 17:37:28.328084 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pxnxg" podUID="89a00481-6e24-49f9-825b-7014149c8b95" Sep 12 17:37:28.331254 kubelet[2530]: E0912 17:37:28.331204 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.331254 kubelet[2530]: W0912 17:37:28.331231 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.331254 kubelet[2530]: E0912 17:37:28.331254 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.333194 kubelet[2530]: E0912 17:37:28.332556 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.333194 kubelet[2530]: W0912 17:37:28.332577 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.333194 kubelet[2530]: E0912 17:37:28.332600 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.333194 kubelet[2530]: E0912 17:37:28.332874 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.333194 kubelet[2530]: W0912 17:37:28.332884 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.333194 kubelet[2530]: E0912 17:37:28.332897 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.333194 kubelet[2530]: E0912 17:37:28.333171 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.333665 kubelet[2530]: W0912 17:37:28.333288 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.333665 kubelet[2530]: E0912 17:37:28.333306 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.335630 kubelet[2530]: E0912 17:37:28.334663 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.335630 kubelet[2530]: W0912 17:37:28.334678 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.335630 kubelet[2530]: E0912 17:37:28.334691 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.335917 kubelet[2530]: E0912 17:37:28.335901 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.335917 kubelet[2530]: W0912 17:37:28.335917 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.336276 kubelet[2530]: E0912 17:37:28.335930 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.336748 kubelet[2530]: E0912 17:37:28.336456 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.336748 kubelet[2530]: W0912 17:37:28.336480 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.336748 kubelet[2530]: E0912 17:37:28.336493 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.337040 kubelet[2530]: E0912 17:37:28.336911 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.337040 kubelet[2530]: W0912 17:37:28.336923 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.337040 kubelet[2530]: E0912 17:37:28.336934 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.337610 kubelet[2530]: E0912 17:37:28.337406 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.337610 kubelet[2530]: W0912 17:37:28.337417 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.337883 kubelet[2530]: E0912 17:37:28.337816 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.338335 kubelet[2530]: E0912 17:37:28.338123 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.338335 kubelet[2530]: W0912 17:37:28.338138 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.338335 kubelet[2530]: E0912 17:37:28.338148 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.338507 kubelet[2530]: E0912 17:37:28.338349 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.338507 kubelet[2530]: W0912 17:37:28.338357 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.338507 kubelet[2530]: E0912 17:37:28.338369 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.340081 kubelet[2530]: E0912 17:37:28.339684 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.340081 kubelet[2530]: W0912 17:37:28.339699 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.340081 kubelet[2530]: E0912 17:37:28.339710 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.340950 kubelet[2530]: E0912 17:37:28.340628 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.340950 kubelet[2530]: W0912 17:37:28.340642 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.340950 kubelet[2530]: E0912 17:37:28.340653 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.340950 kubelet[2530]: E0912 17:37:28.340918 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.340950 kubelet[2530]: W0912 17:37:28.340927 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.340950 kubelet[2530]: E0912 17:37:28.340937 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.344296 kubelet[2530]: E0912 17:37:28.342565 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.344296 kubelet[2530]: W0912 17:37:28.342582 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.344296 kubelet[2530]: E0912 17:37:28.342593 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.344645 kubelet[2530]: E0912 17:37:28.344624 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.344645 kubelet[2530]: W0912 17:37:28.344640 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.344721 kubelet[2530]: E0912 17:37:28.344653 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.345289 kubelet[2530]: E0912 17:37:28.345183 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.345289 kubelet[2530]: W0912 17:37:28.345197 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.345289 kubelet[2530]: E0912 17:37:28.345208 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.345928 kubelet[2530]: E0912 17:37:28.345717 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.345928 kubelet[2530]: W0912 17:37:28.345731 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.345928 kubelet[2530]: E0912 17:37:28.345743 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.348492 kubelet[2530]: E0912 17:37:28.346945 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.348492 kubelet[2530]: W0912 17:37:28.346964 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.348492 kubelet[2530]: E0912 17:37:28.346975 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.348767 kubelet[2530]: E0912 17:37:28.348750 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.348767 kubelet[2530]: W0912 17:37:28.348765 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.348858 kubelet[2530]: E0912 17:37:28.348778 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.349549 kubelet[2530]: E0912 17:37:28.349531 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.349549 kubelet[2530]: W0912 17:37:28.349544 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.349817 kubelet[2530]: E0912 17:37:28.349556 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.349817 kubelet[2530]: I0912 17:37:28.349583 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/89a00481-6e24-49f9-825b-7014149c8b95-varrun\") pod \"csi-node-driver-pxnxg\" (UID: \"89a00481-6e24-49f9-825b-7014149c8b95\") " pod="calico-system/csi-node-driver-pxnxg" Sep 12 17:37:28.351556 kubelet[2530]: E0912 17:37:28.350532 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.351556 kubelet[2530]: W0912 17:37:28.350549 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.351556 kubelet[2530]: E0912 17:37:28.350561 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.351556 kubelet[2530]: I0912 17:37:28.350593 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89a00481-6e24-49f9-825b-7014149c8b95-socket-dir\") pod \"csi-node-driver-pxnxg\" (UID: \"89a00481-6e24-49f9-825b-7014149c8b95\") " pod="calico-system/csi-node-driver-pxnxg" Sep 12 17:37:28.351862 kubelet[2530]: E0912 17:37:28.351846 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.351862 kubelet[2530]: W0912 17:37:28.351860 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.351939 kubelet[2530]: E0912 17:37:28.351871 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.351970 kubelet[2530]: I0912 17:37:28.351961 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89a00481-6e24-49f9-825b-7014149c8b95-registration-dir\") pod \"csi-node-driver-pxnxg\" (UID: \"89a00481-6e24-49f9-825b-7014149c8b95\") " pod="calico-system/csi-node-driver-pxnxg" Sep 12 17:37:28.352238 kubelet[2530]: E0912 17:37:28.352161 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.352238 kubelet[2530]: W0912 17:37:28.352172 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.352238 kubelet[2530]: E0912 17:37:28.352183 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.352623 kubelet[2530]: E0912 17:37:28.352549 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.352623 kubelet[2530]: W0912 17:37:28.352559 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.352623 kubelet[2530]: E0912 17:37:28.352577 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.353920 kubelet[2530]: E0912 17:37:28.353795 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.353920 kubelet[2530]: W0912 17:37:28.353810 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.353920 kubelet[2530]: E0912 17:37:28.353821 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.353920 kubelet[2530]: I0912 17:37:28.353845 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89a00481-6e24-49f9-825b-7014149c8b95-kubelet-dir\") pod \"csi-node-driver-pxnxg\" (UID: \"89a00481-6e24-49f9-825b-7014149c8b95\") " pod="calico-system/csi-node-driver-pxnxg" Sep 12 17:37:28.355634 kubelet[2530]: E0912 17:37:28.354280 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.355634 kubelet[2530]: W0912 17:37:28.354293 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.355634 kubelet[2530]: E0912 17:37:28.354303 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.355954 kubelet[2530]: E0912 17:37:28.355830 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.355954 kubelet[2530]: W0912 17:37:28.355843 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.355954 kubelet[2530]: E0912 17:37:28.355855 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.356804 kubelet[2530]: E0912 17:37:28.356673 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.356804 kubelet[2530]: W0912 17:37:28.356685 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.356804 kubelet[2530]: E0912 17:37:28.356695 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.356804 kubelet[2530]: I0912 17:37:28.356726 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8fk8\" (UniqueName: \"kubernetes.io/projected/89a00481-6e24-49f9-825b-7014149c8b95-kube-api-access-l8fk8\") pod \"csi-node-driver-pxnxg\" (UID: \"89a00481-6e24-49f9-825b-7014149c8b95\") " pod="calico-system/csi-node-driver-pxnxg" Sep 12 17:37:28.357067 kubelet[2530]: E0912 17:37:28.356960 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.357067 kubelet[2530]: W0912 17:37:28.356969 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.357067 kubelet[2530]: E0912 17:37:28.356979 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.358408 kubelet[2530]: E0912 17:37:28.357944 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.358408 kubelet[2530]: W0912 17:37:28.357959 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.358408 kubelet[2530]: E0912 17:37:28.358005 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.358876 kubelet[2530]: E0912 17:37:28.358814 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.358876 kubelet[2530]: W0912 17:37:28.358824 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.358876 kubelet[2530]: E0912 17:37:28.358837 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.361006 kubelet[2530]: E0912 17:37:28.360717 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.361006 kubelet[2530]: W0912 17:37:28.360736 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.361006 kubelet[2530]: E0912 17:37:28.360749 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.361006 kubelet[2530]: E0912 17:37:28.360964 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.361006 kubelet[2530]: W0912 17:37:28.360973 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.361006 kubelet[2530]: E0912 17:37:28.360984 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.361445 kubelet[2530]: E0912 17:37:28.361208 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.361445 kubelet[2530]: W0912 17:37:28.361217 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.361445 kubelet[2530]: E0912 17:37:28.361227 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.364650 containerd[1476]: time="2025-09-12T17:37:28.363731192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:28.364650 containerd[1476]: time="2025-09-12T17:37:28.363803318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:28.364650 containerd[1476]: time="2025-09-12T17:37:28.363819113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:28.364873 kubelet[2530]: I0912 17:37:28.364604 2530 status_manager.go:895] "Failed to get status for pod" podUID="89a00481-6e24-49f9-825b-7014149c8b95" pod="calico-system/csi-node-driver-pxnxg" err="pods \"csi-node-driver-pxnxg\" is forbidden: User \"system:node:ci-4081.3.6-9-b554e4f7b0\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-9-b554e4f7b0' and this object" Sep 12 17:37:28.368349 containerd[1476]: time="2025-09-12T17:37:28.365945857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:28.396697 systemd[1]: Started cri-containerd-d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d.scope - libcontainer container d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d. Sep 12 17:37:28.461170 kubelet[2530]: E0912 17:37:28.460628 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.461170 kubelet[2530]: W0912 17:37:28.460651 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.461170 kubelet[2530]: E0912 17:37:28.460674 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.463916 kubelet[2530]: E0912 17:37:28.461395 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.463916 kubelet[2530]: W0912 17:37:28.461410 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.463916 kubelet[2530]: E0912 17:37:28.461426 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.463916 kubelet[2530]: E0912 17:37:28.461998 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.463916 kubelet[2530]: W0912 17:37:28.462011 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.463916 kubelet[2530]: E0912 17:37:28.462024 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.463916 kubelet[2530]: E0912 17:37:28.462547 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.463916 kubelet[2530]: W0912 17:37:28.462560 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.463916 kubelet[2530]: E0912 17:37:28.462573 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.463916 kubelet[2530]: E0912 17:37:28.463873 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.464350 kubelet[2530]: W0912 17:37:28.463884 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.464350 kubelet[2530]: E0912 17:37:28.463896 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.464350 kubelet[2530]: E0912 17:37:28.464220 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.464350 kubelet[2530]: W0912 17:37:28.464235 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.464350 kubelet[2530]: E0912 17:37:28.464259 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.465013 kubelet[2530]: E0912 17:37:28.464990 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.465013 kubelet[2530]: W0912 17:37:28.465005 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.465013 kubelet[2530]: E0912 17:37:28.465016 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.466867 kubelet[2530]: E0912 17:37:28.465993 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.466867 kubelet[2530]: W0912 17:37:28.466009 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.466867 kubelet[2530]: E0912 17:37:28.466021 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.470521 kubelet[2530]: E0912 17:37:28.468858 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.470521 kubelet[2530]: W0912 17:37:28.468876 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.470521 kubelet[2530]: E0912 17:37:28.469008 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.470521 kubelet[2530]: E0912 17:37:28.469560 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.470521 kubelet[2530]: W0912 17:37:28.469574 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.470521 kubelet[2530]: E0912 17:37:28.469592 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.470991 kubelet[2530]: E0912 17:37:28.470630 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.470991 kubelet[2530]: W0912 17:37:28.470644 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.470991 kubelet[2530]: E0912 17:37:28.470657 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.470991 kubelet[2530]: E0912 17:37:28.470873 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.470991 kubelet[2530]: W0912 17:37:28.470881 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.470991 kubelet[2530]: E0912 17:37:28.470891 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.472109 kubelet[2530]: E0912 17:37:28.471660 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.472109 kubelet[2530]: W0912 17:37:28.471676 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.472109 kubelet[2530]: E0912 17:37:28.471688 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.472438 kubelet[2530]: E0912 17:37:28.472420 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.472438 kubelet[2530]: W0912 17:37:28.472436 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.472612 kubelet[2530]: E0912 17:37:28.472447 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.472878 kubelet[2530]: E0912 17:37:28.472857 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.472878 kubelet[2530]: W0912 17:37:28.472871 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.473011 kubelet[2530]: E0912 17:37:28.472882 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.473386 kubelet[2530]: E0912 17:37:28.473236 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.473386 kubelet[2530]: W0912 17:37:28.473249 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.473386 kubelet[2530]: E0912 17:37:28.473260 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.477035 kubelet[2530]: E0912 17:37:28.473710 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.477035 kubelet[2530]: W0912 17:37:28.473724 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.477035 kubelet[2530]: E0912 17:37:28.473738 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.477035 kubelet[2530]: E0912 17:37:28.474161 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.477035 kubelet[2530]: W0912 17:37:28.474172 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.477035 kubelet[2530]: E0912 17:37:28.474186 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.477035 kubelet[2530]: E0912 17:37:28.474794 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.477035 kubelet[2530]: W0912 17:37:28.474805 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.477035 kubelet[2530]: E0912 17:37:28.474817 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.477035 kubelet[2530]: E0912 17:37:28.475331 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.477959 kubelet[2530]: W0912 17:37:28.475344 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.477959 kubelet[2530]: E0912 17:37:28.475452 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.477959 kubelet[2530]: E0912 17:37:28.475901 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.477959 kubelet[2530]: W0912 17:37:28.475913 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.477959 kubelet[2530]: E0912 17:37:28.475924 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.477959 kubelet[2530]: E0912 17:37:28.476567 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.477959 kubelet[2530]: W0912 17:37:28.476580 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.477959 kubelet[2530]: E0912 17:37:28.476596 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.477959 kubelet[2530]: E0912 17:37:28.477169 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.477959 kubelet[2530]: W0912 17:37:28.477180 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.479210 kubelet[2530]: E0912 17:37:28.477249 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.479210 kubelet[2530]: E0912 17:37:28.477656 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.479210 kubelet[2530]: W0912 17:37:28.477666 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.479210 kubelet[2530]: E0912 17:37:28.477677 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:37:28.479210 kubelet[2530]: E0912 17:37:28.478210 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.479210 kubelet[2530]: W0912 17:37:28.478220 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.479210 kubelet[2530]: E0912 17:37:28.478231 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.508842 kubelet[2530]: E0912 17:37:28.508800 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:28.508842 kubelet[2530]: W0912 17:37:28.508829 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:28.508842 kubelet[2530]: E0912 17:37:28.508858 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:37:28.522745 containerd[1476]: time="2025-09-12T17:37:28.522690642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f6jlw,Uid:df553e0a-7c5f-43ce-81b0-a8bc33866f8b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d\"" Sep 12 17:37:29.992822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093198352.mount: Deactivated successfully. 
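The repeating kubelet triplet above is the FlexVolume dynamic-plugin probe: for each vendor~driver directory under the plugin path, the kubelet executes the driver binary with the argument `init` and expects a JSON status object on stdout. Calico's `uds` driver has not been installed yet, so the exec fails, stdout is empty, and unmarshalling zero bytes yields exactly "unexpected end of JSON input". A minimal sketch of that call path follows; `DriverStatus` here is a simplified stand-in for the kubelet's internal driver-call types, and the path is taken from the log itself.

```go
// flexprobe.go - a minimal sketch of the kubelet's FlexVolume "init" probe.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a simplified assumption about the reply shape, not the
// kubelet's actual type.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	// Run "<driver> init", as the probe does. With the binary absent, err is
	// non-nil (the kubelet's exec wrapper surfaces this as the
	// "executable file not found in $PATH" warning above) and out stays empty.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	// Unmarshalling the empty stdout fails with exactly the logged error:
	// "unexpected end of JSON input".
	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output:", err)
	}
}
```

Run on a machine without the driver installed, the two printed errors correspond to the W driver-call.go:149 and E driver-call.go:262 entries above.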
Sep 12 17:37:30.517108 kubelet[2530]: E0912 17:37:30.516250 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pxnxg" podUID="89a00481-6e24-49f9-825b-7014149c8b95" Sep 12 17:37:31.003153 containerd[1476]: time="2025-09-12T17:37:31.003081897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:31.004333 containerd[1476]: time="2025-09-12T17:37:31.003504800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:37:31.005502 containerd[1476]: time="2025-09-12T17:37:31.004742706Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:31.008315 containerd[1476]: time="2025-09-12T17:37:31.007936298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:31.008866 containerd[1476]: time="2025-09-12T17:37:31.008834948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.844708941s" Sep 12 17:37:31.008957 containerd[1476]: time="2025-09-12T17:37:31.008869950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:37:31.011217 containerd[1476]: time="2025-09-12T17:37:31.010983113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:37:31.036762 containerd[1476]: time="2025-09-12T17:37:31.036718759Z" level=info msg="CreateContainer within sandbox \"355864089b7b89f965dba8f97428a5852bfabd46fb31e89bf4f270f069d78cd2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:37:31.061846 containerd[1476]: time="2025-09-12T17:37:31.061715255Z" level=info msg="CreateContainer within sandbox \"355864089b7b89f965dba8f97428a5852bfabd46fb31e89bf4f270f069d78cd2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fbc39f8c9ee0d3b6d916899521d4c3e251a65bd76822614b6b2534c71f225e15\"" Sep 12 17:37:31.064329 containerd[1476]: time="2025-09-12T17:37:31.064282484Z" level=info msg="StartContainer for \"fbc39f8c9ee0d3b6d916899521d4c3e251a65bd76822614b6b2534c71f225e15\"" Sep 12 17:37:31.125727 systemd[1]: Started cri-containerd-fbc39f8c9ee0d3b6d916899521d4c3e251a65bd76822614b6b2534c71f225e15.scope - libcontainer container fbc39f8c9ee0d3b6d916899521d4c3e251a65bd76822614b6b2534c71f225e15. 
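The containerd entries in this stretch are logfmt-style records (`time=... level=... msg=...`), with inner quotes backslash-escaped because each record is serialized as one journal string. A small sketch for splitting such a record into key/value fields when reading these logs; the regexp is a simplification, and the sample line is abbreviated from the `Pulled image` entry above.

```go
// logparse.go - splits a containerd logfmt-style record into key=value
// fields. Simplified: values are bare tokens or double-quoted strings in
// which \" escapes may appear.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	line := `time="2025-09-12T17:37:31.008834948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" in 2.844708941s"`

	kv := regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)
	for _, m := range kv.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%s => %s\n", m[1], m[2])
	}
}
```

This prints the `time`, `level`, and `msg` fields as separate pairs, which makes the long run-together lines above considerably easier to grep.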
Sep 12 17:37:31.180932 containerd[1476]: time="2025-09-12T17:37:31.180677863Z" level=info msg="StartContainer for \"fbc39f8c9ee0d3b6d916899521d4c3e251a65bd76822614b6b2534c71f225e15\" returns successfully"
Sep 12 17:37:31.618593 kubelet[2530]: E0912 17:37:31.618527 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:37:31.643768 kubelet[2530]: I0912 17:37:31.641099 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f99d69f9d-76nl9" podStartSLOduration=1.793978435 podStartE2EDuration="4.64108181s" podCreationTimestamp="2025-09-12 17:37:27 +0000 UTC" firstStartedPulling="2025-09-12 17:37:28.16322522 +0000 UTC m=+18.792903393" lastFinishedPulling="2025-09-12 17:37:31.010328593 +0000 UTC m=+21.640006768" observedRunningTime="2025-09-12 17:37:31.640915569 +0000 UTC m=+22.270593756" watchObservedRunningTime="2025-09-12 17:37:31.64108181 +0000 UTC m=+22.270760006"
Sep 12 17:37:31.671188 kubelet[2530]: E0912 17:37:31.671155 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:37:31.671601 kubelet[2530]: W0912 17:37:31.671342 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:37:31.671601 kubelet[2530]: E0912 17:37:31.671372 2530 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same FlexVolume probe failure repeats unchanged through Sep 12 17:37:31.702406; the duplicate repetitions are elided.]
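The `pod_startup_latency_tracker` entry above for calico-typha-f99d69f9d-76nl9 is internally consistent: `podStartE2EDuration` is `watchObservedRunningTime` minus `podCreationTimestamp`, and `podStartSLOduration` is that figure minus the image-pull window (`lastFinishedPulling` − `firstStartedPulling`). The sketch below re-derives both numbers from the logged timestamps (monotonic `m=+...` suffixes stripped); it reconstructs the arithmetic and is not the kubelet's code.

```go
// slo.go - reproduces the arithmetic behind the pod_startup_latency_tracker
// entry above, using the timestamps copied from the log.
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	// Layout matches Go's default time.Time formatting used in the log.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-12 17:37:27 +0000 UTC")
	firstPull := mustParse("2025-09-12 17:37:28.16322522 +0000 UTC")
	lastPull := mustParse("2025-09-12 17:37:31.010328593 +0000 UTC")
	watchRunning := mustParse("2025-09-12 17:37:31.64108181 +0000 UTC")

	e2e := watchRunning.Sub(created)     // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E with the pull window excluded

	fmt.Println("E2E:", e2e) // 4.64108181s, matching the logged value
	fmt.Println("SLO:", slo) // 1.793978437s vs logged 1.793978435
}
```

The derived values match the logged `podStartE2EDuration` exactly and `podStartSLOduration` up to nanosecond-level rounding, which supports reading the SLO figure as "startup latency excluding image pulls".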
Sep 12 17:37:32.332928 containerd[1476]: time="2025-09-12T17:37:32.332301607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:32.334132 containerd[1476]: time="2025-09-12T17:37:32.334084307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:37:32.334958 containerd[1476]: time="2025-09-12T17:37:32.334920908Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:32.338598 containerd[1476]: time="2025-09-12T17:37:32.338231834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:32.338983 containerd[1476]: time="2025-09-12T17:37:32.338947220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.327880337s" Sep 12 17:37:32.339127 containerd[1476]: time="2025-09-12T17:37:32.339104769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:37:32.344869 containerd[1476]: time="2025-09-12T17:37:32.344817613Z" level=info msg="CreateContainer within sandbox \"d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:37:32.386318 containerd[1476]: time="2025-09-12T17:37:32.386149240Z" level=info msg="CreateContainer within sandbox \"d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93\"" Sep 12 17:37:32.387086 containerd[1476]: time="2025-09-12T17:37:32.387046340Z" level=info msg="StartContainer for \"402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93\"" Sep 12 17:37:32.427222 systemd[1]: run-containerd-runc-k8s.io-402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93-runc.4atvUs.mount: Deactivated successfully. Sep 12 17:37:32.438296 systemd[1]: Started cri-containerd-402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93.scope - libcontainer container 402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93. Sep 12 17:37:32.482179 containerd[1476]: time="2025-09-12T17:37:32.482134904Z" level=info msg="StartContainer for \"402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93\" returns successfully" Sep 12 17:37:32.498566 systemd[1]: cri-containerd-402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93.scope: Deactivated successfully.
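The short-lived `flexvol-driver` container above runs from the `pod2daemon-flexvol` image pulled just before it; in a stock Calico install its job is to copy the `uds` FlexVolume driver into the `nodeagent~uds` directory the probes were failing on, after which it exits (hence the immediate scope `Deactivated successfully`), and indeed no further probe failures appear in this log after it runs. For contrast with the failing probe sketched earlier, here is a minimal sketch of the driver side of the `init` handshake; the reply shape follows the FlexVolume convention, and a real driver such as Calico's `uds` implements more commands than this.

```go
// uds-init.go - a minimal sketch of a FlexVolume driver answering "init":
// print a JSON status on stdout and exit 0. Only the reply shape the
// kubelet failed to read above is shown here.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		reply := map[string]interface{}{
			"status": "Success",
			// attach=false tells the kubelet this driver has no separate
			// attach/detach phase.
			"capabilities": map[string]bool{"attach": false},
		}
		b, _ := json.Marshal(reply)
		fmt.Println(string(b))
		return
	}
	// Commands this sketch does not implement get the conventional reply.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}
```

Had a binary like this been present at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the probe's `json.Unmarshal` would have succeeded and the E/W/E triplets above would not have been logged.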
Sep 12 17:37:32.515895 kubelet[2530]: E0912 17:37:32.515837 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pxnxg" podUID="89a00481-6e24-49f9-825b-7014149c8b95" Sep 12 17:37:32.574583 containerd[1476]: time="2025-09-12T17:37:32.564839070Z" level=info msg="shim disconnected" id=402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93 namespace=k8s.io Sep 12 17:37:32.574853 containerd[1476]: time="2025-09-12T17:37:32.574601905Z" level=warning msg="cleaning up after shim disconnected" id=402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93 namespace=k8s.io Sep 12 17:37:32.574853 containerd[1476]: time="2025-09-12T17:37:32.574624379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:37:32.640096 kubelet[2530]: I0912 17:37:32.639946 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:37:32.640563 kubelet[2530]: E0912 17:37:32.640297 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:32.643271 containerd[1476]: time="2025-09-12T17:37:32.643039331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:37:33.025612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-402adf9c13ecec6add3647a1ca0da96445bbc254a499d8a03fe4682b02ff1b93-rootfs.mount: Deactivated successfully. Sep 12 17:37:34.516905 kubelet[2530]: E0912 17:37:34.516276 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pxnxg" podUID="89a00481-6e24-49f9-825b-7014149c8b95" Sep 12 17:37:35.634059 containerd[1476]: time="2025-09-12T17:37:35.633979952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:35.635300 containerd[1476]: time="2025-09-12T17:37:35.635261434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:37:35.636397 containerd[1476]: time="2025-09-12T17:37:35.636051062Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:35.638749 containerd[1476]: time="2025-09-12T17:37:35.638711676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:35.639928 containerd[1476]: time="2025-09-12T17:37:35.639900375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.996812835s" Sep 12 17:37:35.640096 containerd[1476]: time="2025-09-12T17:37:35.639980398Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:37:35.644504 containerd[1476]: time="2025-09-12T17:37:35.644232090Z" level=info msg="CreateContainer within sandbox \"d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:37:35.655521 containerd[1476]: time="2025-09-12T17:37:35.655460922Z" level=info msg="CreateContainer within sandbox \"d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2\"" Sep 12 17:37:35.656748 containerd[1476]: time="2025-09-12T17:37:35.656657121Z" level=info msg="StartContainer for \"3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2\"" Sep 12 17:37:35.721774 systemd[1]: Started cri-containerd-3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2.scope - libcontainer container 3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2. Sep 12 17:37:35.756202 containerd[1476]: time="2025-09-12T17:37:35.755341327Z" level=info msg="StartContainer for \"3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2\" returns successfully" Sep 12 17:37:36.375049 systemd[1]: cri-containerd-3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2.scope: Deactivated successfully. Sep 12 17:37:36.418126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2-rootfs.mount: Deactivated successfully. Sep 12 17:37:36.442240 containerd[1476]: time="2025-09-12T17:37:36.442169392Z" level=info msg="shim disconnected" id=3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2 namespace=k8s.io Sep 12 17:37:36.442598 containerd[1476]: time="2025-09-12T17:37:36.442467869Z" level=warning msg="cleaning up after shim disconnected" id=3f9ac9b87adfa7a00e1f3995daaf709629c80f731fcba31e0e2c33412945d0d2 namespace=k8s.io Sep 12 17:37:36.442598 containerd[1476]: time="2025-09-12T17:37:36.442505085Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:37:36.453107 kubelet[2530]: I0912 17:37:36.453065 2530 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:37:36.513372 systemd[1]: Created slice kubepods-burstable-pod1d0f02c4_567d_4f58_a627_4b05e28f6a7c.slice - libcontainer container kubepods-burstable-pod1d0f02c4_567d_4f58_a627_4b05e28f6a7c.slice. 
Sep 12 17:37:36.528836 kubelet[2530]: I0912 17:37:36.528803 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hw7n\" (UniqueName: \"kubernetes.io/projected/1d0f02c4-567d-4f58-a627-4b05e28f6a7c-kube-api-access-4hw7n\") pod \"coredns-674b8bbfcf-qwwfs\" (UID: \"1d0f02c4-567d-4f58-a627-4b05e28f6a7c\") " pod="kube-system/coredns-674b8bbfcf-qwwfs" Sep 12 17:37:36.529569 kubelet[2530]: I0912 17:37:36.529351 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5p5\" (UniqueName: \"kubernetes.io/projected/beb3804a-cae2-4b48-8eca-eff5a936c3a3-kube-api-access-qq5p5\") pod \"calico-kube-controllers-7b985f5889-5kj7c\" (UID: \"beb3804a-cae2-4b48-8eca-eff5a936c3a3\") " pod="calico-system/calico-kube-controllers-7b985f5889-5kj7c" Sep 12 17:37:36.529569 kubelet[2530]: I0912 17:37:36.529436 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beb3804a-cae2-4b48-8eca-eff5a936c3a3-tigera-ca-bundle\") pod \"calico-kube-controllers-7b985f5889-5kj7c\" (UID: \"beb3804a-cae2-4b48-8eca-eff5a936c3a3\") " pod="calico-system/calico-kube-controllers-7b985f5889-5kj7c" Sep 12 17:37:36.529569 kubelet[2530]: I0912 17:37:36.529465 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d0f02c4-567d-4f58-a627-4b05e28f6a7c-config-volume\") pod \"coredns-674b8bbfcf-qwwfs\" (UID: \"1d0f02c4-567d-4f58-a627-4b05e28f6a7c\") " pod="kube-system/coredns-674b8bbfcf-qwwfs" Sep 12 17:37:36.529569 kubelet[2530]: I0912 17:37:36.529521 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda-calico-apiserver-certs\") pod \"calico-apiserver-5d65df657f-s7j4v\" (UID: \"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda\") " pod="calico-apiserver/calico-apiserver-5d65df657f-s7j4v" Sep 12 17:37:36.531574 kubelet[2530]: I0912 17:37:36.530322 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a1f0215-1e66-4b28-8961-7d0751164ee8-whisker-ca-bundle\") pod \"whisker-78dd6cf6dc-2j2r9\" (UID: \"1a1f0215-1e66-4b28-8961-7d0751164ee8\") " pod="calico-system/whisker-78dd6cf6dc-2j2r9" Sep 12 17:37:36.531574 kubelet[2530]: I0912 17:37:36.531424 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkmx\" (UniqueName: \"kubernetes.io/projected/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda-kube-api-access-bnkmx\") pod \"calico-apiserver-5d65df657f-s7j4v\" (UID: \"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda\") " pod="calico-apiserver/calico-apiserver-5d65df657f-s7j4v" Sep 12 17:37:36.531574 kubelet[2530]: I0912 17:37:36.531462 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a1f0215-1e66-4b28-8961-7d0751164ee8-whisker-backend-key-pair\") pod \"whisker-78dd6cf6dc-2j2r9\" (UID: \"1a1f0215-1e66-4b28-8961-7d0751164ee8\") " pod="calico-system/whisker-78dd6cf6dc-2j2r9" Sep 12 17:37:36.531574 kubelet[2530]: I0912 17:37:36.531513 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gr4zv\" (UniqueName: \"kubernetes.io/projected/1a1f0215-1e66-4b28-8961-7d0751164ee8-kube-api-access-gr4zv\") pod \"whisker-78dd6cf6dc-2j2r9\" (UID: \"1a1f0215-1e66-4b28-8961-7d0751164ee8\") " pod="calico-system/whisker-78dd6cf6dc-2j2r9" Sep 12 17:37:36.534048 systemd[1]: Created slice kubepods-besteffort-pod1a1f0215_1e66_4b28_8961_7d0751164ee8.slice - libcontainer container kubepods-besteffort-pod1a1f0215_1e66_4b28_8961_7d0751164ee8.slice. Sep 12 17:37:36.550841 systemd[1]: Created slice kubepods-burstable-podb83028d3_7d28_465e_b31c_bc68b3a14e07.slice - libcontainer container kubepods-burstable-podb83028d3_7d28_465e_b31c_bc68b3a14e07.slice. Sep 12 17:37:36.575316 systemd[1]: Created slice kubepods-besteffort-podbeb3804a_cae2_4b48_8eca_eff5a936c3a3.slice - libcontainer container kubepods-besteffort-podbeb3804a_cae2_4b48_8eca_eff5a936c3a3.slice. Sep 12 17:37:36.593274 systemd[1]: Created slice kubepods-besteffort-podef8fc9fb_bcd9_488a_b1f1_c6d7bb78dfda.slice - libcontainer container kubepods-besteffort-podef8fc9fb_bcd9_488a_b1f1_c6d7bb78dfda.slice. Sep 12 17:37:36.605024 systemd[1]: Created slice kubepods-besteffort-podff40a270_b2d1_4d12_8ac7_f6f4cae38a71.slice - libcontainer container kubepods-besteffort-podff40a270_b2d1_4d12_8ac7_f6f4cae38a71.slice. Sep 12 17:37:36.614849 systemd[1]: Created slice kubepods-besteffort-pod992872bf_a2b9_4bf7_a206_e135f15fe831.slice - libcontainer container kubepods-besteffort-pod992872bf_a2b9_4bf7_a206_e135f15fe831.slice. Sep 12 17:37:36.626245 systemd[1]: Created slice kubepods-besteffort-pod967bd530_b6b8_4b02_a727_a8f9ab0d7150.slice - libcontainer container kubepods-besteffort-pod967bd530_b6b8_4b02_a727_a8f9ab0d7150.slice. Sep 12 17:37:36.632165 kubelet[2530]: I0912 17:37:36.631726 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967bd530-b6b8-4b02-a727-a8f9ab0d7150-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-9q7df\" (UID: \"967bd530-b6b8-4b02-a727-a8f9ab0d7150\") " pod="calico-system/goldmane-54d579b49d-9q7df" Sep 12 17:37:36.632436 kubelet[2530]: I0912 17:37:36.632324 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b83028d3-7d28-465e-b31c-bc68b3a14e07-config-volume\") pod \"coredns-674b8bbfcf-jkml4\" (UID: \"b83028d3-7d28-465e-b31c-bc68b3a14e07\") " pod="kube-system/coredns-674b8bbfcf-jkml4" Sep 12 17:37:36.632436 kubelet[2530]: I0912 17:37:36.632410 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/967bd530-b6b8-4b02-a727-a8f9ab0d7150-goldmane-key-pair\") pod \"goldmane-54d579b49d-9q7df\" (UID: \"967bd530-b6b8-4b02-a727-a8f9ab0d7150\") " pod="calico-system/goldmane-54d579b49d-9q7df" Sep 12 17:37:36.633803 kubelet[2530]: I0912 17:37:36.633414 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqrb\" (UniqueName: \"kubernetes.io/projected/967bd530-b6b8-4b02-a727-a8f9ab0d7150-kube-api-access-clqrb\") pod \"goldmane-54d579b49d-9q7df\" (UID: \"967bd530-b6b8-4b02-a727-a8f9ab0d7150\") " pod="calico-system/goldmane-54d579b49d-9q7df" Sep 12 17:37:36.633803 kubelet[2530]: I0912 17:37:36.633688 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz8zn\" (UniqueName: 
\"kubernetes.io/projected/b83028d3-7d28-465e-b31c-bc68b3a14e07-kube-api-access-bz8zn\") pod \"coredns-674b8bbfcf-jkml4\" (UID: \"b83028d3-7d28-465e-b31c-bc68b3a14e07\") " pod="kube-system/coredns-674b8bbfcf-jkml4" Sep 12 17:37:36.634087 kubelet[2530]: I0912 17:37:36.633952 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71-calico-apiserver-certs\") pod \"calico-apiserver-5d65df657f-lctzq\" (UID: \"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71\") " pod="calico-apiserver/calico-apiserver-5d65df657f-lctzq" Sep 12 17:37:36.634087 kubelet[2530]: I0912 17:37:36.633980 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967bd530-b6b8-4b02-a727-a8f9ab0d7150-config\") pod \"goldmane-54d579b49d-9q7df\" (UID: \"967bd530-b6b8-4b02-a727-a8f9ab0d7150\") " pod="calico-system/goldmane-54d579b49d-9q7df" Sep 12 17:37:36.634087 kubelet[2530]: I0912 17:37:36.634014 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxf9f\" (UniqueName: \"kubernetes.io/projected/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71-kube-api-access-kxf9f\") pod \"calico-apiserver-5d65df657f-lctzq\" (UID: \"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71\") " pod="calico-apiserver/calico-apiserver-5d65df657f-lctzq" Sep 12 17:37:36.634087 kubelet[2530]: I0912 17:37:36.634042 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5tn6\" (UniqueName: \"kubernetes.io/projected/992872bf-a2b9-4bf7-a206-e135f15fe831-kube-api-access-m5tn6\") pod \"calico-apiserver-5479fdfc7-qt26m\" (UID: \"992872bf-a2b9-4bf7-a206-e135f15fe831\") " pod="calico-apiserver/calico-apiserver-5479fdfc7-qt26m" Sep 12 17:37:36.634379 kubelet[2530]: I0912 17:37:36.634076 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/992872bf-a2b9-4bf7-a206-e135f15fe831-calico-apiserver-certs\") pod \"calico-apiserver-5479fdfc7-qt26m\" (UID: \"992872bf-a2b9-4bf7-a206-e135f15fe831\") " pod="calico-apiserver/calico-apiserver-5479fdfc7-qt26m" Sep 12 17:37:36.644802 systemd[1]: Created slice kubepods-besteffort-pod89a00481_6e24_49f9_825b_7014149c8b95.slice - libcontainer container kubepods-besteffort-pod89a00481_6e24_49f9_825b_7014149c8b95.slice. 
Sep 12 17:37:36.655149 containerd[1476]: time="2025-09-12T17:37:36.654635940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pxnxg,Uid:89a00481-6e24-49f9-825b-7014149c8b95,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:36.676046 containerd[1476]: time="2025-09-12T17:37:36.675777218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:37:36.822383 kubelet[2530]: E0912 17:37:36.822345 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:36.826296 containerd[1476]: time="2025-09-12T17:37:36.826245810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qwwfs,Uid:1d0f02c4-567d-4f58-a627-4b05e28f6a7c,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:36.848530 containerd[1476]: time="2025-09-12T17:37:36.846587274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78dd6cf6dc-2j2r9,Uid:1a1f0215-1e66-4b28-8961-7d0751164ee8,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:36.873070 kubelet[2530]: E0912 17:37:36.872743 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:36.873327 containerd[1476]: time="2025-09-12T17:37:36.873284730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkml4,Uid:b83028d3-7d28-465e-b31c-bc68b3a14e07,Namespace:kube-system,Attempt:0,}" Sep 12 17:37:36.885261 containerd[1476]: time="2025-09-12T17:37:36.885137944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b985f5889-5kj7c,Uid:beb3804a-cae2-4b48-8eca-eff5a936c3a3,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:36.908055 containerd[1476]: time="2025-09-12T17:37:36.904762509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65df657f-s7j4v,Uid:ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:37:36.915949 containerd[1476]: time="2025-09-12T17:37:36.915802929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65df657f-lctzq,Uid:ff40a270-b2d1-4d12-8ac7-f6f4cae38a71,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:37:36.925131 containerd[1476]: time="2025-09-12T17:37:36.925091928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5479fdfc7-qt26m,Uid:992872bf-a2b9-4bf7-a206-e135f15fe831,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:37:36.933794 containerd[1476]: time="2025-09-12T17:37:36.933738972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9q7df,Uid:967bd530-b6b8-4b02-a727-a8f9ab0d7150,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:37.194158 containerd[1476]: time="2025-09-12T17:37:37.194019108Z" level=error msg="Failed to destroy network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.226357 containerd[1476]: time="2025-09-12T17:37:37.226074900Z" level=error msg="encountered an error cleaning up failed sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.226876 containerd[1476]: time="2025-09-12T17:37:37.226836118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qwwfs,Uid:1d0f02c4-567d-4f58-a627-4b05e28f6a7c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.239901 containerd[1476]: time="2025-09-12T17:37:37.239748520Z" level=error msg="Failed to destroy network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.240455 containerd[1476]: time="2025-09-12T17:37:37.240382339Z" level=error msg="encountered an error cleaning up failed sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.240670 containerd[1476]: time="2025-09-12T17:37:37.240589268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78dd6cf6dc-2j2r9,Uid:1a1f0215-1e66-4b28-8961-7d0751164ee8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.245534 kubelet[2530]: E0912 17:37:37.243991 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.245534 kubelet[2530]: E0912 17:37:37.244104 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78dd6cf6dc-2j2r9" Sep 12 17:37:37.245534 kubelet[2530]: E0912 17:37:37.244140 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78dd6cf6dc-2j2r9" Sep 12 17:37:37.245771 kubelet[2530]: E0912 17:37:37.244216 
2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78dd6cf6dc-2j2r9_calico-system(1a1f0215-1e66-4b28-8961-7d0751164ee8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78dd6cf6dc-2j2r9_calico-system(1a1f0215-1e66-4b28-8961-7d0751164ee8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78dd6cf6dc-2j2r9" podUID="1a1f0215-1e66-4b28-8961-7d0751164ee8" Sep 12 17:37:37.245771 kubelet[2530]: E0912 17:37:37.245314 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.245771 kubelet[2530]: E0912 17:37:37.245379 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qwwfs" Sep 12 17:37:37.245913 kubelet[2530]: E0912 17:37:37.245402 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qwwfs" Sep 12 17:37:37.245913 kubelet[2530]: E0912 17:37:37.245451 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qwwfs_kube-system(1d0f02c4-567d-4f58-a627-4b05e28f6a7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qwwfs_kube-system(1d0f02c4-567d-4f58-a627-4b05e28f6a7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qwwfs" podUID="1d0f02c4-567d-4f58-a627-4b05e28f6a7c" Sep 12 17:37:37.298281 containerd[1476]: time="2025-09-12T17:37:37.298190751Z" level=error msg="Failed to destroy network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.298819 containerd[1476]: time="2025-09-12T17:37:37.298772972Z" level=error msg="encountered an error cleaning up failed sandbox 
\"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.299581 containerd[1476]: time="2025-09-12T17:37:37.298852403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkml4,Uid:b83028d3-7d28-465e-b31c-bc68b3a14e07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.299644 kubelet[2530]: E0912 17:37:37.299131 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.299644 kubelet[2530]: E0912 17:37:37.299200 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkml4" Sep 12 17:37:37.299644 kubelet[2530]: E0912 17:37:37.299236 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkml4" Sep 12 17:37:37.300705 kubelet[2530]: E0912 17:37:37.299302 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jkml4_kube-system(b83028d3-7d28-465e-b31c-bc68b3a14e07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jkml4_kube-system(b83028d3-7d28-465e-b31c-bc68b3a14e07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jkml4" podUID="b83028d3-7d28-465e-b31c-bc68b3a14e07" Sep 12 17:37:37.312237 containerd[1476]: time="2025-09-12T17:37:37.312107202Z" level=error msg="Failed to destroy network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.312906 containerd[1476]: time="2025-09-12T17:37:37.312772664Z" level=error 
msg="encountered an error cleaning up failed sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.312906 containerd[1476]: time="2025-09-12T17:37:37.312841266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pxnxg,Uid:89a00481-6e24-49f9-825b-7014149c8b95,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.328790 kubelet[2530]: E0912 17:37:37.328735 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.328942 kubelet[2530]: E0912 17:37:37.328818 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pxnxg" Sep 12 17:37:37.328942 kubelet[2530]: E0912 17:37:37.328856 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pxnxg" Sep 12 17:37:37.329005 kubelet[2530]: E0912 17:37:37.328922 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pxnxg_calico-system(89a00481-6e24-49f9-825b-7014149c8b95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pxnxg_calico-system(89a00481-6e24-49f9-825b-7014149c8b95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pxnxg" podUID="89a00481-6e24-49f9-825b-7014149c8b95" Sep 12 17:37:37.344268 containerd[1476]: time="2025-09-12T17:37:37.343948023Z" level=error msg="Failed to destroy network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.344731 containerd[1476]: 
time="2025-09-12T17:37:37.344441550Z" level=error msg="encountered an error cleaning up failed sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.344731 containerd[1476]: time="2025-09-12T17:37:37.344532378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b985f5889-5kj7c,Uid:beb3804a-cae2-4b48-8eca-eff5a936c3a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.344877 kubelet[2530]: E0912 17:37:37.344784 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.344877 kubelet[2530]: E0912 17:37:37.344856 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b985f5889-5kj7c" Sep 12 17:37:37.344961 kubelet[2530]: E0912 17:37:37.344892 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b985f5889-5kj7c" Sep 12 17:37:37.344992 kubelet[2530]: E0912 17:37:37.344966 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b985f5889-5kj7c_calico-system(beb3804a-cae2-4b48-8eca-eff5a936c3a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b985f5889-5kj7c_calico-system(beb3804a-cae2-4b48-8eca-eff5a936c3a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b985f5889-5kj7c" podUID="beb3804a-cae2-4b48-8eca-eff5a936c3a3" Sep 12 17:37:37.359432 containerd[1476]: time="2025-09-12T17:37:37.359003177Z" level=error msg="Failed to destroy network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.359891 containerd[1476]: time="2025-09-12T17:37:37.359852753Z" level=error msg="Failed to destroy network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.360359 containerd[1476]: time="2025-09-12T17:37:37.360291289Z" level=error msg="encountered an error cleaning up failed sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.360523 containerd[1476]: time="2025-09-12T17:37:37.360390597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65df657f-s7j4v,Uid:ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.361195 kubelet[2530]: E0912 17:37:37.360902 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.361195 kubelet[2530]: E0912 17:37:37.361159 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65df657f-s7j4v" Sep 12 17:37:37.361619 kubelet[2530]: E0912 17:37:37.361194 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65df657f-s7j4v" Sep 12 17:37:37.364391 kubelet[2530]: E0912 17:37:37.361628 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65df657f-s7j4v_calico-apiserver(ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65df657f-s7j4v_calico-apiserver(ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65df657f-s7j4v" podUID="ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda" Sep 12 17:37:37.364566 containerd[1476]: time="2025-09-12T17:37:37.364502546Z" level=error msg="encountered an error cleaning up failed sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.377793 containerd[1476]: time="2025-09-12T17:37:37.364589096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65df657f-lctzq,Uid:ff40a270-b2d1-4d12-8ac7-f6f4cae38a71,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.378500 kubelet[2530]: E0912 17:37:37.378025 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.378500 kubelet[2530]: E0912 17:37:37.378162 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65df657f-lctzq" Sep 12 17:37:37.378500 kubelet[2530]: E0912 17:37:37.378198 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d65df657f-lctzq" Sep 12 17:37:37.379110 kubelet[2530]: E0912 17:37:37.378341 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d65df657f-lctzq_calico-apiserver(ff40a270-b2d1-4d12-8ac7-f6f4cae38a71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d65df657f-lctzq_calico-apiserver(ff40a270-b2d1-4d12-8ac7-f6f4cae38a71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65df657f-lctzq" 
podUID="ff40a270-b2d1-4d12-8ac7-f6f4cae38a71" Sep 12 17:37:37.387118 containerd[1476]: time="2025-09-12T17:37:37.387072604Z" level=error msg="Failed to destroy network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.387649 containerd[1476]: time="2025-09-12T17:37:37.387619028Z" level=error msg="encountered an error cleaning up failed sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.387785 containerd[1476]: time="2025-09-12T17:37:37.387764539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5479fdfc7-qt26m,Uid:992872bf-a2b9-4bf7-a206-e135f15fe831,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.388179 kubelet[2530]: E0912 17:37:37.388128 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.388247 kubelet[2530]: E0912 17:37:37.388210 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5479fdfc7-qt26m" Sep 12 17:37:37.388282 kubelet[2530]: E0912 17:37:37.388251 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5479fdfc7-qt26m" Sep 12 17:37:37.388360 kubelet[2530]: E0912 17:37:37.388326 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5479fdfc7-qt26m_calico-apiserver(992872bf-a2b9-4bf7-a206-e135f15fe831)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5479fdfc7-qt26m_calico-apiserver(992872bf-a2b9-4bf7-a206-e135f15fe831)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5479fdfc7-qt26m" podUID="992872bf-a2b9-4bf7-a206-e135f15fe831" Sep 12 17:37:37.417531 containerd[1476]: time="2025-09-12T17:37:37.417452877Z" level=error msg="Failed to destroy network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.417963 containerd[1476]: time="2025-09-12T17:37:37.417923821Z" level=error msg="encountered an error cleaning up failed sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.418032 containerd[1476]: time="2025-09-12T17:37:37.418004815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9q7df,Uid:967bd530-b6b8-4b02-a727-a8f9ab0d7150,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.418353 kubelet[2530]: E0912 17:37:37.418305 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.418419 kubelet[2530]: E0912 17:37:37.418387 2530 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-9q7df" Sep 12 17:37:37.418449 kubelet[2530]: E0912 17:37:37.418420 2530 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-9q7df" Sep 12 17:37:37.418581 kubelet[2530]: E0912 17:37:37.418529 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-9q7df_calico-system(967bd530-b6b8-4b02-a727-a8f9ab0d7150)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-9q7df_calico-system(967bd530-b6b8-4b02-a727-a8f9ab0d7150)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-9q7df" podUID="967bd530-b6b8-4b02-a727-a8f9ab0d7150" Sep 12 17:37:37.671356 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504-shm.mount: Deactivated successfully. Sep 12 17:37:37.681837 kubelet[2530]: I0912 17:37:37.681811 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:37:37.689512 kubelet[2530]: I0912 17:37:37.688659 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:37:37.689675 containerd[1476]: time="2025-09-12T17:37:37.689507553Z" level=info msg="StopPodSandbox for \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\"" Sep 12 17:37:37.694239 containerd[1476]: time="2025-09-12T17:37:37.693947698Z" level=info msg="StopPodSandbox for \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\"" Sep 12 17:37:37.695621 containerd[1476]: time="2025-09-12T17:37:37.695290778Z" level=info msg="Ensure that sandbox 59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768 in task-service has been cleanup successfully" Sep 12 17:37:37.695621 containerd[1476]: time="2025-09-12T17:37:37.695306140Z" level=info msg="Ensure that sandbox ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0 in task-service has been cleanup successfully" Sep 12 17:37:37.705095 kubelet[2530]: I0912 17:37:37.705061 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:37:37.709514 containerd[1476]: time="2025-09-12T17:37:37.709403162Z" level=info msg="StopPodSandbox for \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\"" Sep 12 17:37:37.712363 containerd[1476]: time="2025-09-12T17:37:37.710629486Z" level=info msg="Ensure that sandbox dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87 in task-service has been cleanup successfully" Sep 12 17:37:37.727789 kubelet[2530]: I0912 17:37:37.727760 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:37:37.729411 containerd[1476]: time="2025-09-12T17:37:37.729046147Z" level=info msg="StopPodSandbox for \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\"" Sep 12 17:37:37.730190 containerd[1476]: time="2025-09-12T17:37:37.730020714Z" level=info msg="Ensure that sandbox 9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897 in task-service has been cleanup successfully" Sep 12 17:37:37.738764 kubelet[2530]: I0912 17:37:37.738074 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:37:37.739814 containerd[1476]: time="2025-09-12T17:37:37.739761411Z" level=info msg="StopPodSandbox for \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\"" Sep 12 17:37:37.740641 containerd[1476]: time="2025-09-12T17:37:37.740586325Z" level=info msg="Ensure that sandbox c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6 in task-service has been cleanup successfully" 
Sep 12 17:37:37.747211 kubelet[2530]: I0912 17:37:37.747181 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:37:37.754587 containerd[1476]: time="2025-09-12T17:37:37.754544646Z" level=info msg="StopPodSandbox for \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\"" Sep 12 17:37:37.754818 containerd[1476]: time="2025-09-12T17:37:37.754789220Z" level=info msg="Ensure that sandbox 117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214 in task-service has been cleanup successfully" Sep 12 17:37:37.769966 kubelet[2530]: I0912 17:37:37.769920 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:37:37.775552 kubelet[2530]: I0912 17:37:37.774635 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:37:37.775730 containerd[1476]: time="2025-09-12T17:37:37.775280931Z" level=info msg="StopPodSandbox for \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\"" Sep 12 17:37:37.775730 containerd[1476]: time="2025-09-12T17:37:37.775548730Z" level=info msg="Ensure that sandbox bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8 in task-service has been cleanup successfully" Sep 12 17:37:37.792886 containerd[1476]: time="2025-09-12T17:37:37.792837550Z" level=info msg="StopPodSandbox for \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\"" Sep 12 17:37:37.793044 containerd[1476]: time="2025-09-12T17:37:37.793031179Z" level=info msg="Ensure that sandbox 1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1 in task-service has been cleanup successfully" Sep 12 17:37:37.794249 kubelet[2530]: I0912 17:37:37.793529 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:37:37.804723 containerd[1476]: time="2025-09-12T17:37:37.804658368Z" level=info msg="StopPodSandbox for \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\"" Sep 12 17:37:37.805106 containerd[1476]: time="2025-09-12T17:37:37.805065599Z" level=info msg="Ensure that sandbox 4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504 in task-service has been cleanup successfully" Sep 12 17:37:37.821076 containerd[1476]: time="2025-09-12T17:37:37.821028188Z" level=error msg="StopPodSandbox for \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\" failed" error="failed to destroy network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.821828 kubelet[2530]: E0912 17:37:37.821766 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:37:37.822140 
kubelet[2530]: E0912 17:37:37.821996 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768"} Sep 12 17:37:37.822290 kubelet[2530]: E0912 17:37:37.822208 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d0f02c4-567d-4f58-a627-4b05e28f6a7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:37.822543 kubelet[2530]: E0912 17:37:37.822239 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d0f02c4-567d-4f58-a627-4b05e28f6a7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qwwfs" podUID="1d0f02c4-567d-4f58-a627-4b05e28f6a7c" Sep 12 17:37:37.864465 containerd[1476]: time="2025-09-12T17:37:37.863455514Z" level=error msg="StopPodSandbox for \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\" failed" error="failed to destroy network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.864637 kubelet[2530]: E0912 17:37:37.864090 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:37:37.864637 kubelet[2530]: E0912 17:37:37.864139 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0"} Sep 12 17:37:37.864637 kubelet[2530]: E0912 17:37:37.864179 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b83028d3-7d28-465e-b31c-bc68b3a14e07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:37.864637 kubelet[2530]: E0912 17:37:37.864203 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b83028d3-7d28-465e-b31c-bc68b3a14e07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jkml4" podUID="b83028d3-7d28-465e-b31c-bc68b3a14e07" Sep 12 17:37:37.871078 containerd[1476]: time="2025-09-12T17:37:37.870729116Z" level=error msg="StopPodSandbox for \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\" failed" error="failed to destroy network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.872075 kubelet[2530]: E0912 17:37:37.871530 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:37:37.872075 kubelet[2530]: E0912 17:37:37.872002 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897"} Sep 12 17:37:37.872516 kubelet[2530]: E0912 17:37:37.872214 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:37.872516 kubelet[2530]: E0912 17:37:37.872244 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65df657f-lctzq" podUID="ff40a270-b2d1-4d12-8ac7-f6f4cae38a71" Sep 12 17:37:37.912948 containerd[1476]: time="2025-09-12T17:37:37.912699811Z" level=error msg="StopPodSandbox for \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\" failed" error="failed to destroy network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.913124 kubelet[2530]: E0912 17:37:37.912941 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:37:37.913124 kubelet[2530]: E0912 17:37:37.912992 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6"} Sep 12 17:37:37.913124 kubelet[2530]: E0912 17:37:37.913025 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:37.913124 kubelet[2530]: E0912 17:37:37.913051 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d65df657f-s7j4v" podUID="ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda" Sep 12 17:37:37.919498 containerd[1476]: time="2025-09-12T17:37:37.918467316Z" level=error msg="StopPodSandbox for \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\" failed" error="failed to destroy network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.919665 kubelet[2530]: E0912 17:37:37.918756 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:37:37.919665 kubelet[2530]: E0912 17:37:37.918810 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87"} Sep 12 17:37:37.919665 kubelet[2530]: E0912 17:37:37.918851 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"967bd530-b6b8-4b02-a727-a8f9ab0d7150\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Sep 12 17:37:37.919665 kubelet[2530]: E0912 17:37:37.918873 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"967bd530-b6b8-4b02-a727-a8f9ab0d7150\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-9q7df" podUID="967bd530-b6b8-4b02-a727-a8f9ab0d7150" Sep 12 17:37:37.927849 containerd[1476]: time="2025-09-12T17:37:37.927711806Z" level=error msg="StopPodSandbox for \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\" failed" error="failed to destroy network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.928493 kubelet[2530]: E0912 17:37:37.928396 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:37:37.928748 kubelet[2530]: E0912 17:37:37.928679 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1"} Sep 12 17:37:37.928878 kubelet[2530]: E0912 17:37:37.928862 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"992872bf-a2b9-4bf7-a206-e135f15fe831\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:37.929330 kubelet[2530]: E0912 17:37:37.929066 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"992872bf-a2b9-4bf7-a206-e135f15fe831\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5479fdfc7-qt26m" podUID="992872bf-a2b9-4bf7-a206-e135f15fe831" Sep 12 17:37:37.941192 containerd[1476]: time="2025-09-12T17:37:37.941139996Z" level=error msg="StopPodSandbox for \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\" failed" error="failed to destroy network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.942065 kubelet[2530]: E0912 17:37:37.941637 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:37:37.942065 kubelet[2530]: E0912 17:37:37.941693 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214"} Sep 12 17:37:37.942065 kubelet[2530]: E0912 17:37:37.941726 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"beb3804a-cae2-4b48-8eca-eff5a936c3a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:37.942065 kubelet[2530]: E0912 17:37:37.941756 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"beb3804a-cae2-4b48-8eca-eff5a936c3a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b985f5889-5kj7c" podUID="beb3804a-cae2-4b48-8eca-eff5a936c3a3" Sep 12 17:37:37.942317 containerd[1476]: time="2025-09-12T17:37:37.941661552Z" level=error msg="StopPodSandbox for \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\" failed" error="failed to destroy network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.942444 kubelet[2530]: E0912 17:37:37.941864 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:37:37.942444 kubelet[2530]: E0912 17:37:37.941933 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8"} Sep 12 17:37:37.942444 kubelet[2530]: E0912 17:37:37.942008 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a1f0215-1e66-4b28-8961-7d0751164ee8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:37.942444 kubelet[2530]: E0912 17:37:37.942041 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a1f0215-1e66-4b28-8961-7d0751164ee8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78dd6cf6dc-2j2r9" podUID="1a1f0215-1e66-4b28-8961-7d0751164ee8" Sep 12 17:37:37.946916 containerd[1476]: time="2025-09-12T17:37:37.946311698Z" level=error msg="StopPodSandbox for \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\" failed" error="failed to destroy network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:37:37.947098 kubelet[2530]: E0912 17:37:37.946692 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:37:37.947098 kubelet[2530]: E0912 17:37:37.946761 2530 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504"} Sep 12 17:37:37.947098 kubelet[2530]: E0912 17:37:37.946808 2530 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89a00481-6e24-49f9-825b-7014149c8b95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:37:37.947098 kubelet[2530]: E0912 17:37:37.946852 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89a00481-6e24-49f9-825b-7014149c8b95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pxnxg" podUID="89a00481-6e24-49f9-825b-7014149c8b95" Sep 12 17:37:42.799774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556805802.mount: Deactivated successfully. 
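The burst of StopPodSandbox failures above (coredns, both calico-apiserver deployments, goldmane, calico-kube-controllers, whisker, csi-node-driver) shares one root cause spelled out in the error text itself: the Calico CNI plugin cannot determine the node's name because calico/node has not yet started and written /var/lib/calico/nodename. A minimal Go sketch of that check follows; it is illustrative only, not Calico's actual source, and only assumes the file layout named in the error message.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is written by the calico/node container when it starts and
// bind-mounts /var/lib/calico/ onto the host.
const nodenameFile = "/var/lib/calico/nodename"

func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the failure mode flooding the log above: until calico/node
		// runs, every CNI ADD/DEL on the host fails with this message.
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}

Consistent with this reading, the calico-node container starts at 17:37:43 below, and the same sandbox teardowns begin succeeding at 17:37:44.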
Sep 12 17:37:42.898870 containerd[1476]: time="2025-09-12T17:37:42.897581633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:37:42.940607 containerd[1476]: time="2025-09-12T17:37:42.940556500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:42.973654 containerd[1476]: time="2025-09-12T17:37:42.973595115Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:42.974839 containerd[1476]: time="2025-09-12T17:37:42.974798588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.298969565s" Sep 12 17:37:42.975080 containerd[1476]: time="2025-09-12T17:37:42.975051027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:37:42.975279 containerd[1476]: time="2025-09-12T17:37:42.975250044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:43.118104 containerd[1476]: time="2025-09-12T17:37:43.118049832Z" level=info msg="CreateContainer within sandbox \"d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:37:43.163652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373249431.mount: Deactivated successfully. Sep 12 17:37:43.188805 containerd[1476]: time="2025-09-12T17:37:43.188615935Z" level=info msg="CreateContainer within sandbox \"d8f3a5be3410ad85a38e8defbd0c23be4cd968a0c9b662434d1963a5b7ef920d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"739bc38db7a3491579eed62b09ef4d7a38a156255f1768c44857d2a38ad200de\"" Sep 12 17:37:43.190001 containerd[1476]: time="2025-09-12T17:37:43.189489344Z" level=info msg="StartContainer for \"739bc38db7a3491579eed62b09ef4d7a38a156255f1768c44857d2a38ad200de\"" Sep 12 17:37:43.424911 systemd[1]: Started cri-containerd-739bc38db7a3491579eed62b09ef4d7a38a156255f1768c44857d2a38ad200de.scope - libcontainer container 739bc38db7a3491579eed62b09ef4d7a38a156255f1768c44857d2a38ad200de. Sep 12 17:37:43.522937 containerd[1476]: time="2025-09-12T17:37:43.522884336Z" level=info msg="StartContainer for \"739bc38db7a3491579eed62b09ef4d7a38a156255f1768c44857d2a38ad200de\" returns successfully" Sep 12 17:37:43.642654 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:37:43.645044 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 12 17:37:43.988910 containerd[1476]: time="2025-09-12T17:37:43.988627284Z" level=info msg="StopPodSandbox for \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\"" Sep 12 17:37:44.133979 kubelet[2530]: I0912 17:37:44.129877 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f6jlw" podStartSLOduration=2.633250757 podStartE2EDuration="17.109645121s" podCreationTimestamp="2025-09-12 17:37:27 +0000 UTC" firstStartedPulling="2025-09-12 17:37:28.52525378 +0000 UTC m=+19.154931956" lastFinishedPulling="2025-09-12 17:37:43.001648141 +0000 UTC m=+33.631326320" observedRunningTime="2025-09-12 17:37:44.002151328 +0000 UTC m=+34.631829527" watchObservedRunningTime="2025-09-12 17:37:44.109645121 +0000 UTC m=+34.739323313" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.111 [INFO][3806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.115 [INFO][3806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" iface="eth0" netns="/var/run/netns/cni-3080a318-8b0d-5329-539b-bb286770c034" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.118 [INFO][3806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" iface="eth0" netns="/var/run/netns/cni-3080a318-8b0d-5329-539b-bb286770c034" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.119 [INFO][3806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" iface="eth0" netns="/var/run/netns/cni-3080a318-8b0d-5329-539b-bb286770c034" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.119 [INFO][3806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.119 [INFO][3806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.279 [INFO][3818] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.284 [INFO][3818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.284 [INFO][3818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.302 [WARNING][3818] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.302 [INFO][3818] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.303 [INFO][3818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:44.309803 containerd[1476]: 2025-09-12 17:37:44.306 [INFO][3806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:37:44.310786 containerd[1476]: time="2025-09-12T17:37:44.310670864Z" level=info msg="TearDown network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\" successfully" Sep 12 17:37:44.310786 containerd[1476]: time="2025-09-12T17:37:44.310703587Z" level=info msg="StopPodSandbox for \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\" returns successfully" Sep 12 17:37:44.316896 systemd[1]: run-netns-cni\x2d3080a318\x2d8b0d\x2d5329\x2d539b\x2dbb286770c034.mount: Deactivated successfully. Sep 12 17:37:44.438497 kubelet[2530]: I0912 17:37:44.438289 2530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a1f0215-1e66-4b28-8961-7d0751164ee8-whisker-ca-bundle\") pod \"1a1f0215-1e66-4b28-8961-7d0751164ee8\" (UID: \"1a1f0215-1e66-4b28-8961-7d0751164ee8\") " Sep 12 17:37:44.438497 kubelet[2530]: I0912 17:37:44.438344 2530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr4zv\" (UniqueName: \"kubernetes.io/projected/1a1f0215-1e66-4b28-8961-7d0751164ee8-kube-api-access-gr4zv\") pod \"1a1f0215-1e66-4b28-8961-7d0751164ee8\" (UID: \"1a1f0215-1e66-4b28-8961-7d0751164ee8\") " Sep 12 17:37:44.438497 kubelet[2530]: I0912 17:37:44.438369 2530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a1f0215-1e66-4b28-8961-7d0751164ee8-whisker-backend-key-pair\") pod \"1a1f0215-1e66-4b28-8961-7d0751164ee8\" (UID: \"1a1f0215-1e66-4b28-8961-7d0751164ee8\") " Sep 12 17:37:44.447365 kubelet[2530]: I0912 17:37:44.447056 2530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1f0215-1e66-4b28-8961-7d0751164ee8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1a1f0215-1e66-4b28-8961-7d0751164ee8" (UID: "1a1f0215-1e66-4b28-8961-7d0751164ee8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:37:44.454220 kubelet[2530]: I0912 17:37:44.454174 2530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1f0215-1e66-4b28-8961-7d0751164ee8-kube-api-access-gr4zv" (OuterVolumeSpecName: "kube-api-access-gr4zv") pod "1a1f0215-1e66-4b28-8961-7d0751164ee8" (UID: "1a1f0215-1e66-4b28-8961-7d0751164ee8"). InnerVolumeSpecName "kube-api-access-gr4zv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:37:44.456215 systemd[1]: var-lib-kubelet-pods-1a1f0215\x2d1e66\x2d4b28\x2d8961\x2d7d0751164ee8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgr4zv.mount: Deactivated successfully. Sep 12 17:37:44.461785 kubelet[2530]: I0912 17:37:44.461707 2530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a1f0215-1e66-4b28-8961-7d0751164ee8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1a1f0215-1e66-4b28-8961-7d0751164ee8" (UID: "1a1f0215-1e66-4b28-8961-7d0751164ee8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:37:44.462266 systemd[1]: var-lib-kubelet-pods-1a1f0215\x2d1e66\x2d4b28\x2d8961\x2d7d0751164ee8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:37:44.538806 kubelet[2530]: I0912 17:37:44.538713 2530 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a1f0215-1e66-4b28-8961-7d0751164ee8-whisker-ca-bundle\") on node \"ci-4081.3.6-9-b554e4f7b0\" DevicePath \"\"" Sep 12 17:37:44.538806 kubelet[2530]: I0912 17:37:44.538761 2530 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gr4zv\" (UniqueName: \"kubernetes.io/projected/1a1f0215-1e66-4b28-8961-7d0751164ee8-kube-api-access-gr4zv\") on node \"ci-4081.3.6-9-b554e4f7b0\" DevicePath \"\"" Sep 12 17:37:44.538806 kubelet[2530]: I0912 17:37:44.538773 2530 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1a1f0215-1e66-4b28-8961-7d0751164ee8-whisker-backend-key-pair\") on node \"ci-4081.3.6-9-b554e4f7b0\" DevicePath \"\"" Sep 12 17:37:44.843245 systemd[1]: Removed slice kubepods-besteffort-pod1a1f0215_1e66_4b28_8961_7d0751164ee8.slice - libcontainer container kubepods-besteffort-pod1a1f0215_1e66_4b28_8961_7d0751164ee8.slice. Sep 12 17:37:44.900081 systemd[1]: run-containerd-runc-k8s.io-739bc38db7a3491579eed62b09ef4d7a38a156255f1768c44857d2a38ad200de-runc.t9vqnz.mount: Deactivated successfully. Sep 12 17:37:44.982644 kubelet[2530]: I0912 17:37:44.982597 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:37:44.984490 kubelet[2530]: E0912 17:37:44.983140 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:45.107991 systemd[1]: Created slice kubepods-besteffort-pod98212ec9_bf4f_454f_a0f2_83ce5e458a92.slice - libcontainer container kubepods-besteffort-pod98212ec9_bf4f_454f_a0f2_83ce5e458a92.slice. 
Sep 12 17:37:45.152497 kubelet[2530]: I0912 17:37:45.150741 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gzf7\" (UniqueName: \"kubernetes.io/projected/98212ec9-bf4f-454f-a0f2-83ce5e458a92-kube-api-access-6gzf7\") pod \"whisker-54dd68b9bb-d9lpp\" (UID: \"98212ec9-bf4f-454f-a0f2-83ce5e458a92\") " pod="calico-system/whisker-54dd68b9bb-d9lpp" Sep 12 17:37:45.153374 kubelet[2530]: I0912 17:37:45.153343 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98212ec9-bf4f-454f-a0f2-83ce5e458a92-whisker-ca-bundle\") pod \"whisker-54dd68b9bb-d9lpp\" (UID: \"98212ec9-bf4f-454f-a0f2-83ce5e458a92\") " pod="calico-system/whisker-54dd68b9bb-d9lpp" Sep 12 17:37:45.153541 kubelet[2530]: I0912 17:37:45.153394 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/98212ec9-bf4f-454f-a0f2-83ce5e458a92-whisker-backend-key-pair\") pod \"whisker-54dd68b9bb-d9lpp\" (UID: \"98212ec9-bf4f-454f-a0f2-83ce5e458a92\") " pod="calico-system/whisker-54dd68b9bb-d9lpp" Sep 12 17:37:45.413918 containerd[1476]: time="2025-09-12T17:37:45.413698931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dd68b9bb-d9lpp,Uid:98212ec9-bf4f-454f-a0f2-83ce5e458a92,Namespace:calico-system,Attempt:0,}" Sep 12 17:37:45.526070 kubelet[2530]: I0912 17:37:45.526003 2530 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a1f0215-1e66-4b28-8961-7d0751164ee8" path="/var/lib/kubelet/pods/1a1f0215-1e66-4b28-8961-7d0751164ee8/volumes" Sep 12 17:37:45.733711 systemd-networkd[1374]: cali630777863c8: Link UP Sep 12 17:37:45.737817 systemd-networkd[1374]: cali630777863c8: Gained carrier Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.539 [INFO][3904] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.564 [INFO][3904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0 whisker-54dd68b9bb- calico-system 98212ec9-bf4f-454f-a0f2-83ce5e458a92 931 0 2025-09-12 17:37:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54dd68b9bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 whisker-54dd68b9bb-d9lpp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali630777863c8 [] [] }} ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Namespace="calico-system" Pod="whisker-54dd68b9bb-d9lpp" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.565 [INFO][3904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Namespace="calico-system" Pod="whisker-54dd68b9bb-d9lpp" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.630 [INFO][3949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" 
HandleID="k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.633 [INFO][3949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" HandleID="k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"whisker-54dd68b9bb-d9lpp", "timestamp":"2025-09-12 17:37:45.63042848 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.633 [INFO][3949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.633 [INFO][3949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.633 [INFO][3949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.654 [INFO][3949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.668 [INFO][3949] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.680 [INFO][3949] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.682 [INFO][3949] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.688 [INFO][3949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.688 [INFO][3949] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.691 [INFO][3949] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051 Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.702 [INFO][3949] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.710 [INFO][3949] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.129/26] block=192.168.96.128/26 handle="k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.711 
[INFO][3949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.129/26] handle="k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.711 [INFO][3949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:45.761451 containerd[1476]: 2025-09-12 17:37:45.711 [INFO][3949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.129/26] IPv6=[] ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" HandleID="k8s-pod-network.a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" Sep 12 17:37:45.763689 containerd[1476]: 2025-09-12 17:37:45.717 [INFO][3904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Namespace="calico-system" Pod="whisker-54dd68b9bb-d9lpp" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0", GenerateName:"whisker-54dd68b9bb-", Namespace:"calico-system", SelfLink:"", UID:"98212ec9-bf4f-454f-a0f2-83ce5e458a92", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54dd68b9bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"whisker-54dd68b9bb-d9lpp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali630777863c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:45.763689 containerd[1476]: 2025-09-12 17:37:45.717 [INFO][3904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.129/32] ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Namespace="calico-system" Pod="whisker-54dd68b9bb-d9lpp" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" Sep 12 17:37:45.763689 containerd[1476]: 2025-09-12 17:37:45.717 [INFO][3904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali630777863c8 ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Namespace="calico-system" Pod="whisker-54dd68b9bb-d9lpp" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" Sep 12 17:37:45.763689 containerd[1476]: 2025-09-12 17:37:45.730 [INFO][3904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Namespace="calico-system" Pod="whisker-54dd68b9bb-d9lpp" 
WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" Sep 12 17:37:45.763689 containerd[1476]: 2025-09-12 17:37:45.736 [INFO][3904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Namespace="calico-system" Pod="whisker-54dd68b9bb-d9lpp" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0", GenerateName:"whisker-54dd68b9bb-", Namespace:"calico-system", SelfLink:"", UID:"98212ec9-bf4f-454f-a0f2-83ce5e458a92", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54dd68b9bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051", Pod:"whisker-54dd68b9bb-d9lpp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali630777863c8", MAC:"16:f6:6d:a4:0b:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:45.763689 containerd[1476]: 2025-09-12 17:37:45.751 [INFO][3904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051" Namespace="calico-system" Pod="whisker-54dd68b9bb-d9lpp" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--54dd68b9bb--d9lpp-eth0" Sep 12 17:37:45.816156 containerd[1476]: time="2025-09-12T17:37:45.816006392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:45.816156 containerd[1476]: time="2025-09-12T17:37:45.816088376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:45.816156 containerd[1476]: time="2025-09-12T17:37:45.816109199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:45.820500 containerd[1476]: time="2025-09-12T17:37:45.818179084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:45.831751 kubelet[2530]: E0912 17:37:45.831717 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:45.851719 systemd[1]: Started cri-containerd-a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051.scope - libcontainer container a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051. Sep 12 17:37:45.952372 containerd[1476]: time="2025-09-12T17:37:45.952331196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54dd68b9bb-d9lpp,Uid:98212ec9-bf4f-454f-a0f2-83ce5e458a92,Namespace:calico-system,Attempt:0,} returns sandbox id \"a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051\"" Sep 12 17:37:45.966403 containerd[1476]: time="2025-09-12T17:37:45.966360189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:37:46.117506 kernel: bpftool[4034]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:37:46.405704 systemd-networkd[1374]: vxlan.calico: Link UP Sep 12 17:37:46.405712 systemd-networkd[1374]: vxlan.calico: Gained carrier Sep 12 17:37:47.442779 containerd[1476]: time="2025-09-12T17:37:47.442731206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:47.444029 containerd[1476]: time="2025-09-12T17:37:47.443986349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:37:47.444742 containerd[1476]: time="2025-09-12T17:37:47.444712636Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:47.446970 containerd[1476]: time="2025-09-12T17:37:47.446618766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:47.447553 containerd[1476]: time="2025-09-12T17:37:47.447522319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.48112029s" Sep 12 17:37:47.447553 containerd[1476]: time="2025-09-12T17:37:47.447555036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:37:47.452153 containerd[1476]: time="2025-09-12T17:37:47.452038876Z" level=info msg="CreateContainer within sandbox \"a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:37:47.462617 containerd[1476]: time="2025-09-12T17:37:47.462451959Z" level=info msg="CreateContainer within sandbox \"a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a99bbc908d07d0a07d877ecb8d1fa685f51b58a6608539aab404afe9a786885c\"" Sep 12 17:37:47.463643 containerd[1476]: 
time="2025-09-12T17:37:47.463543234Z" level=info msg="StartContainer for \"a99bbc908d07d0a07d877ecb8d1fa685f51b58a6608539aab404afe9a786885c\"" Sep 12 17:37:47.480817 systemd-networkd[1374]: cali630777863c8: Gained IPv6LL Sep 12 17:37:47.543729 systemd[1]: Started cri-containerd-a99bbc908d07d0a07d877ecb8d1fa685f51b58a6608539aab404afe9a786885c.scope - libcontainer container a99bbc908d07d0a07d877ecb8d1fa685f51b58a6608539aab404afe9a786885c. Sep 12 17:37:47.597540 containerd[1476]: time="2025-09-12T17:37:47.597146589Z" level=info msg="StartContainer for \"a99bbc908d07d0a07d877ecb8d1fa685f51b58a6608539aab404afe9a786885c\" returns successfully" Sep 12 17:37:47.600868 containerd[1476]: time="2025-09-12T17:37:47.600821267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:37:48.056672 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Sep 12 17:37:49.521853 containerd[1476]: time="2025-09-12T17:37:49.520545541Z" level=info msg="StopPodSandbox for \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\"" Sep 12 17:37:49.521853 containerd[1476]: time="2025-09-12T17:37:49.520588635Z" level=info msg="StopPodSandbox for \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\"" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.665 [INFO][4190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.666 [INFO][4190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" iface="eth0" netns="/var/run/netns/cni-c80aa0cc-d2b4-e906-0c5e-027fe612f725" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.666 [INFO][4190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" iface="eth0" netns="/var/run/netns/cni-c80aa0cc-d2b4-e906-0c5e-027fe612f725" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.667 [INFO][4190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" iface="eth0" netns="/var/run/netns/cni-c80aa0cc-d2b4-e906-0c5e-027fe612f725" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.667 [INFO][4190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.669 [INFO][4190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.749 [INFO][4201] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.751 [INFO][4201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.751 [INFO][4201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.762 [WARNING][4201] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.762 [INFO][4201] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.770 [INFO][4201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:49.785034 containerd[1476]: 2025-09-12 17:37:49.777 [INFO][4190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:37:49.791696 containerd[1476]: time="2025-09-12T17:37:49.788181292Z" level=info msg="TearDown network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\" successfully" Sep 12 17:37:49.792114 containerd[1476]: time="2025-09-12T17:37:49.791941631Z" level=info msg="StopPodSandbox for \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\" returns successfully" Sep 12 17:37:49.793616 systemd[1]: run-netns-cni\x2dc80aa0cc\x2dd2b4\x2de906\x2d0c5e\x2d027fe612f725.mount: Deactivated successfully. Sep 12 17:37:49.796616 containerd[1476]: time="2025-09-12T17:37:49.796177895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pxnxg,Uid:89a00481-6e24-49f9-825b-7014149c8b95,Namespace:calico-system,Attempt:1,}" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.724 [INFO][4189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.726 [INFO][4189] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" iface="eth0" netns="/var/run/netns/cni-47c87770-ad93-b095-73ce-ef0abd835ea3" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.727 [INFO][4189] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" iface="eth0" netns="/var/run/netns/cni-47c87770-ad93-b095-73ce-ef0abd835ea3" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.728 [INFO][4189] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" iface="eth0" netns="/var/run/netns/cni-47c87770-ad93-b095-73ce-ef0abd835ea3" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.728 [INFO][4189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.730 [INFO][4189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.907 [INFO][4207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.907 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.907 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.919 [WARNING][4207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.919 [INFO][4207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.922 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:49.935387 containerd[1476]: 2025-09-12 17:37:49.929 [INFO][4189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:37:49.940388 containerd[1476]: time="2025-09-12T17:37:49.937879601Z" level=info msg="TearDown network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\" successfully" Sep 12 17:37:49.940388 containerd[1476]: time="2025-09-12T17:37:49.937933459Z" level=info msg="StopPodSandbox for \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\" returns successfully" Sep 12 17:37:49.942975 systemd[1]: run-netns-cni\x2d47c87770\x2dad93\x2db095\x2d73ce\x2def0abd835ea3.mount: Deactivated successfully. 
Sep 12 17:37:49.944222 containerd[1476]: time="2025-09-12T17:37:49.944177053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65df657f-lctzq,Uid:ff40a270-b2d1-4d12-8ac7-f6f4cae38a71,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:37:50.221902 systemd-networkd[1374]: cali76b1bd58033: Link UP Sep 12 17:37:50.222244 systemd-networkd[1374]: cali76b1bd58033: Gained carrier Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.005 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0 csi-node-driver- calico-system 89a00481-6e24-49f9-825b-7014149c8b95 960 0 2025-09-12 17:37:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 csi-node-driver-pxnxg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali76b1bd58033 [] [] }} ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Namespace="calico-system" Pod="csi-node-driver-pxnxg" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.006 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Namespace="calico-system" Pod="csi-node-driver-pxnxg" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.108 [INFO][4239] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" HandleID="k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.108 [INFO][4239] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" HandleID="k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"csi-node-driver-pxnxg", "timestamp":"2025-09-12 17:37:50.108150276 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.109 [INFO][4239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.109 [INFO][4239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.109 [INFO][4239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.128 [INFO][4239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.137 [INFO][4239] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.144 [INFO][4239] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.148 [INFO][4239] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.154 [INFO][4239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.155 [INFO][4239] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.160 [INFO][4239] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.189 [INFO][4239] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.204 [INFO][4239] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.130/26] block=192.168.96.128/26 handle="k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.205 [INFO][4239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.130/26] handle="k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.206 [INFO][4239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
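The allocation trace above shows Calico's block-affinity model: this host holds the affine /26 block 192.168.96.128/26 and claims workload IPs from it in order (the whisker pod received 192.168.96.129 earlier; csi-node-driver is about to get .130). The short Go sketch below only enumerates addresses in that block with net/netip; the ordinal bookkeeping the real allocator performs against the datastore is omitted.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The host's affine IPAM block from the log; .128 is the block base.
	block := netip.MustParsePrefix("192.168.96.128/26")
	for addr, n := block.Addr(), 0; block.Contains(addr) && n < 4; addr, n = addr.Next(), n+1 {
		fmt.Println(addr) // 192.168.96.128, .129, .130, .131
	}
}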
Sep 12 17:37:50.283290 containerd[1476]: 2025-09-12 17:37:50.206 [INFO][4239] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.130/26] IPv6=[] ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" HandleID="k8s-pod-network.c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:50.284430 containerd[1476]: 2025-09-12 17:37:50.213 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Namespace="calico-system" Pod="csi-node-driver-pxnxg" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89a00481-6e24-49f9-825b-7014149c8b95", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"csi-node-driver-pxnxg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76b1bd58033", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:50.284430 containerd[1476]: 2025-09-12 17:37:50.213 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.130/32] ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Namespace="calico-system" Pod="csi-node-driver-pxnxg" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:50.284430 containerd[1476]: 2025-09-12 17:37:50.213 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76b1bd58033 ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Namespace="calico-system" Pod="csi-node-driver-pxnxg" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:50.284430 containerd[1476]: 2025-09-12 17:37:50.221 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Namespace="calico-system" Pod="csi-node-driver-pxnxg" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:50.284430 containerd[1476]: 2025-09-12 17:37:50.225 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Namespace="calico-system" Pod="csi-node-driver-pxnxg" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89a00481-6e24-49f9-825b-7014149c8b95", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d", Pod:"csi-node-driver-pxnxg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76b1bd58033", MAC:"46:07:24:a2:a4:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:50.284430 containerd[1476]: 2025-09-12 17:37:50.267 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d" Namespace="calico-system" Pod="csi-node-driver-pxnxg" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:37:50.362878 systemd-networkd[1374]: calib19233d6540: Link UP Sep 12 17:37:50.367562 systemd-networkd[1374]: calib19233d6540: Gained carrier Sep 12 17:37:50.391359 containerd[1476]: time="2025-09-12T17:37:50.390887752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:50.391359 containerd[1476]: time="2025-09-12T17:37:50.390959824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:50.391359 containerd[1476]: time="2025-09-12T17:37:50.391005010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:50.391359 containerd[1476]: time="2025-09-12T17:37:50.391145251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.083 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0 calico-apiserver-5d65df657f- calico-apiserver ff40a270-b2d1-4d12-8ac7-f6f4cae38a71 961 0 2025-09-12 17:37:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d65df657f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 calico-apiserver-5d65df657f-lctzq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib19233d6540 [] [] }} ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-lctzq" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.084 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-lctzq" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.191 [INFO][4247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" HandleID="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.191 [INFO][4247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" HandleID="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315880), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"calico-apiserver-5d65df657f-lctzq", "timestamp":"2025-09-12 17:37:50.19091676 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.191 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.206 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
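The Workload= and WorkloadEndpoint= names in these entries (e.g. ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0) appear to be built by doubling every dash inside each component, then joining node, orchestrator, pod, and interface with single dashes. That derivation is inferred from the log itself, not from Calico source; this sketch reproduces the observed names:

package main

import (
	"fmt"
	"strings"
)

// escape doubles dashes so single dashes stay unambiguous as separators.
func escape(s string) string { return strings.ReplaceAll(s, "-", "--") }

// endpointName joins node, orchestrator tag, pod, and interface the way
// the names in this log appear to be joined (inferred, not authoritative).
func endpointName(node, pod, iface string) string {
	return escape(node) + "-k8s-" + escape(pod) + "-" + iface
}

func main() {
	// Prints ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0,
	// matching the WorkloadEndpoint above.
	fmt.Println(endpointName("ci-4081.3.6-9-b554e4f7b0", "csi-node-driver-pxnxg", "eth0"))
}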
Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.206 [INFO][4247] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.240 [INFO][4247] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.273 [INFO][4247] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.286 [INFO][4247] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.293 [INFO][4247] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.299 [INFO][4247] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.300 [INFO][4247] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.309 [INFO][4247] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.321 [INFO][4247] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.337 [INFO][4247] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.131/26] block=192.168.96.128/26 handle="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.337 [INFO][4247] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.131/26] handle="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.338 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
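Each assignment in this log is bracketed by "About to acquire", "Acquired", and "Released host-wide IPAM lock", so the two interleaved requests tagged [4239] and [4247] serialize their updates to the shared block. A minimal in-process sketch of that discipline follows; the real lock must also work across separate CNI plugin invocations, which a process-local mutex does not model.

package main

import (
	"fmt"
	"sync"
)

// hostIPAM serializes assignments behind a single lock, mirroring the
// acquire/assign/release bracketing seen in the entries above.
type hostIPAM struct {
	mu   sync.Mutex
	next int // last handed-out final octet, illustration only
}

func (h *hostIPAM) assign(handle string) int {
	h.mu.Lock() // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock()
	h.next++
	fmt.Printf("assigned 192.168.96.%d to %s\n", h.next, handle)
	return h.next // "Released host-wide IPAM lock." on return
}

func main() {
	ipam := &hostIPAM{next: 129} // continues at .130 as in the log
	var wg sync.WaitGroup
	for _, h := range []string{"csi-node-driver-pxnxg", "calico-apiserver-5d65df657f-lctzq"} {
		wg.Add(1)
		go func(h string) { defer wg.Done(); ipam.assign(h) }(h)
	}
	wg.Wait()
}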
Sep 12 17:37:50.402216 containerd[1476]: 2025-09-12 17:37:50.338 [INFO][4247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.131/26] IPv6=[] ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" HandleID="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:50.403513 containerd[1476]: 2025-09-12 17:37:50.344 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-lctzq" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0", GenerateName:"calico-apiserver-5d65df657f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65df657f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"calico-apiserver-5d65df657f-lctzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib19233d6540", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:50.403513 containerd[1476]: 2025-09-12 17:37:50.346 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.131/32] ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-lctzq" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:50.403513 containerd[1476]: 2025-09-12 17:37:50.347 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib19233d6540 ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-lctzq" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:50.403513 containerd[1476]: 2025-09-12 17:37:50.368 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-lctzq" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:50.403513 containerd[1476]: 2025-09-12 17:37:50.370 
[INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-lctzq" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0", GenerateName:"calico-apiserver-5d65df657f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65df657f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba", Pod:"calico-apiserver-5d65df657f-lctzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib19233d6540", MAC:"fa:34:db:78:1d:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:50.403513 containerd[1476]: 2025-09-12 17:37:50.391 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-lctzq" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:37:50.469825 systemd[1]: Started cri-containerd-c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d.scope - libcontainer container c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d. Sep 12 17:37:50.476850 containerd[1476]: time="2025-09-12T17:37:50.473435744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:50.476850 containerd[1476]: time="2025-09-12T17:37:50.476613810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:50.477200 containerd[1476]: time="2025-09-12T17:37:50.477062683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:50.478968 containerd[1476]: time="2025-09-12T17:37:50.477751106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:50.519883 containerd[1476]: time="2025-09-12T17:37:50.519415475Z" level=info msg="StopPodSandbox for \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\"" Sep 12 17:37:50.521567 containerd[1476]: time="2025-09-12T17:37:50.521185084Z" level=info msg="StopPodSandbox for \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\"" Sep 12 17:37:50.523666 containerd[1476]: time="2025-09-12T17:37:50.523622277Z" level=info msg="StopPodSandbox for \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\"" Sep 12 17:37:50.524252 systemd[1]: Started cri-containerd-90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba.scope - libcontainer container 90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba. Sep 12 17:37:50.635998 containerd[1476]: time="2025-09-12T17:37:50.635937942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pxnxg,Uid:89a00481-6e24-49f9-825b-7014149c8b95,Namespace:calico-system,Attempt:1,} returns sandbox id \"c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d\"" Sep 12 17:37:50.661966 containerd[1476]: time="2025-09-12T17:37:50.661805886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65df657f-lctzq,Uid:ff40a270-b2d1-4d12-8ac7-f6f4cae38a71,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba\"" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.707 [INFO][4382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.707 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" iface="eth0" netns="/var/run/netns/cni-1a8490b6-d250-2e90-844f-495698db585a" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.707 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" iface="eth0" netns="/var/run/netns/cni-1a8490b6-d250-2e90-844f-495698db585a" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.707 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" iface="eth0" netns="/var/run/netns/cni-1a8490b6-d250-2e90-844f-495698db585a" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.707 [INFO][4382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.707 [INFO][4382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.795 [INFO][4402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.796 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.796 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.807 [WARNING][4402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.807 [INFO][4402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.809 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:50.815780 containerd[1476]: 2025-09-12 17:37:50.811 [INFO][4382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:37:50.822284 containerd[1476]: time="2025-09-12T17:37:50.816697030Z" level=info msg="TearDown network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\" successfully" Sep 12 17:37:50.822284 containerd[1476]: time="2025-09-12T17:37:50.816727850Z" level=info msg="StopPodSandbox for \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\" returns successfully" Sep 12 17:37:50.822284 containerd[1476]: time="2025-09-12T17:37:50.819281618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65df657f-s7j4v,Uid:ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:37:50.821244 systemd[1]: run-netns-cni\x2d1a8490b6\x2dd250\x2d2e90\x2d844f\x2d495698db585a.mount: Deactivated successfully. Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.768 [INFO][4369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.769 [INFO][4369] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" iface="eth0" netns="/var/run/netns/cni-947f46e0-0de7-edea-6249-9d479ff79219" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.769 [INFO][4369] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" iface="eth0" netns="/var/run/netns/cni-947f46e0-0de7-edea-6249-9d479ff79219" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.770 [INFO][4369] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" iface="eth0" netns="/var/run/netns/cni-947f46e0-0de7-edea-6249-9d479ff79219" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.770 [INFO][4369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.770 [INFO][4369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.868 [INFO][4411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.868 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.868 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.879 [WARNING][4411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.879 [INFO][4411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.884 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:50.908666 containerd[1476]: 2025-09-12 17:37:50.897 [INFO][4369] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:37:50.911702 containerd[1476]: time="2025-09-12T17:37:50.910921156Z" level=info msg="TearDown network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\" successfully" Sep 12 17:37:50.911702 containerd[1476]: time="2025-09-12T17:37:50.910950248Z" level=info msg="StopPodSandbox for \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\" returns successfully" Sep 12 17:37:50.914280 systemd[1]: run-netns-cni\x2d947f46e0\x2d0de7\x2dedea\x2d6249\x2d9d479ff79219.mount: Deactivated successfully. Sep 12 17:37:50.916681 containerd[1476]: time="2025-09-12T17:37:50.915161498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5479fdfc7-qt26m,Uid:992872bf-a2b9-4bf7-a206-e135f15fe831,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.788 [INFO][4370] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.789 [INFO][4370] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" iface="eth0" netns="/var/run/netns/cni-e9af20b1-af2a-bf7e-72c4-87acd0c89d9f" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.789 [INFO][4370] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" iface="eth0" netns="/var/run/netns/cni-e9af20b1-af2a-bf7e-72c4-87acd0c89d9f" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.790 [INFO][4370] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" iface="eth0" netns="/var/run/netns/cni-e9af20b1-af2a-bf7e-72c4-87acd0c89d9f" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.790 [INFO][4370] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.790 [INFO][4370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.891 [INFO][4417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.891 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.891 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.902 [WARNING][4417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.902 [INFO][4417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.905 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:50.922581 containerd[1476]: 2025-09-12 17:37:50.916 [INFO][4370] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:37:50.923133 containerd[1476]: time="2025-09-12T17:37:50.923106466Z" level=info msg="TearDown network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\" successfully" Sep 12 17:37:50.923195 containerd[1476]: time="2025-09-12T17:37:50.923184705Z" level=info msg="StopPodSandbox for \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\" returns successfully" Sep 12 17:37:50.929754 containerd[1476]: time="2025-09-12T17:37:50.929687428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b985f5889-5kj7c,Uid:beb3804a-cae2-4b48-8eca-eff5a936c3a3,Namespace:calico-system,Attempt:1,}" Sep 12 17:37:51.149452 systemd-networkd[1374]: calid4ed627ad9e: Link UP Sep 12 17:37:51.153800 systemd-networkd[1374]: calid4ed627ad9e: Gained carrier Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:50.955 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0 calico-apiserver-5d65df657f- calico-apiserver ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda 974 0 2025-09-12 17:37:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d65df657f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 calico-apiserver-5d65df657f-s7j4v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid4ed627ad9e [] [] }} ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-s7j4v" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:50.955 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-s7j4v" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.047 [INFO][4457] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" 
HandleID="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.047 [INFO][4457] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" HandleID="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036cdc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"calico-apiserver-5d65df657f-s7j4v", "timestamp":"2025-09-12 17:37:51.04739961 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.047 [INFO][4457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.047 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.047 [INFO][4457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.064 [INFO][4457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.077 [INFO][4457] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.087 [INFO][4457] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.092 [INFO][4457] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.097 [INFO][4457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.097 [INFO][4457] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.100 [INFO][4457] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.110 [INFO][4457] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.128 [INFO][4457] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.132/26] block=192.168.96.128/26 handle="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 
containerd[1476]: 2025-09-12 17:37:51.128 [INFO][4457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.132/26] handle="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.128 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:51.211664 containerd[1476]: 2025-09-12 17:37:51.128 [INFO][4457] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.132/26] IPv6=[] ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" HandleID="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:51.212777 containerd[1476]: 2025-09-12 17:37:51.138 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-s7j4v" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0", GenerateName:"calico-apiserver-5d65df657f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65df657f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"calico-apiserver-5d65df657f-s7j4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ed627ad9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:51.212777 containerd[1476]: 2025-09-12 17:37:51.139 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.132/32] ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-s7j4v" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:51.212777 containerd[1476]: 2025-09-12 17:37:51.141 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4ed627ad9e ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-s7j4v" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:51.212777 containerd[1476]: 2025-09-12 17:37:51.156 
[INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-s7j4v" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:51.212777 containerd[1476]: 2025-09-12 17:37:51.158 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-s7j4v" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0", GenerateName:"calico-apiserver-5d65df657f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65df657f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c", Pod:"calico-apiserver-5d65df657f-s7j4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ed627ad9e", MAC:"d2:ae:b0:79:67:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:51.212777 containerd[1476]: 2025-09-12 17:37:51.198 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Namespace="calico-apiserver" Pod="calico-apiserver-5d65df657f-s7j4v" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:37:51.301231 systemd-networkd[1374]: calidab8f63f20d: Link UP Sep 12 17:37:51.310439 systemd-networkd[1374]: calidab8f63f20d: Gained carrier Sep 12 17:37:51.341069 containerd[1476]: time="2025-09-12T17:37:51.340672175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:51.341069 containerd[1476]: time="2025-09-12T17:37:51.340818255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:51.341069 containerd[1476]: time="2025-09-12T17:37:51.340834629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:51.341069 containerd[1476]: time="2025-09-12T17:37:51.340957870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.054 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0 calico-kube-controllers-7b985f5889- calico-system beb3804a-cae2-4b48-8eca-eff5a936c3a3 976 0 2025-09-12 17:37:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b985f5889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 calico-kube-controllers-7b985f5889-5kj7c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidab8f63f20d [] [] }} ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Namespace="calico-system" Pod="calico-kube-controllers-7b985f5889-5kj7c" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.054 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Namespace="calico-system" Pod="calico-kube-controllers-7b985f5889-5kj7c" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.153 [INFO][4474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" HandleID="k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.161 [INFO][4474] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" HandleID="k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fb50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"calico-kube-controllers-7b985f5889-5kj7c", "timestamp":"2025-09-12 17:37:51.153059231 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.162 [INFO][4474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.162 [INFO][4474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
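The containerd output here comes in two shapes: structured shim lines (time="..." level=info msg="...") and the bracketed Calico plugin lines. A small sketch for pulling timestamp, level, and message out of the structured shape, assuming only the quoting conventions visible above:

package main

import (
	"fmt"
	"regexp"
)

// entry matches the shim's logfmt-style fields; the msg capture tolerates
// backslash-escaped quotes like \"io.containerd.ttrpc.v1.pause\".
var entry = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2025-09-12T17:37:51.340672175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."`
	if m := entry.FindStringSubmatch(line); m != nil {
		// m[3] keeps the inner escapes verbatim; unquote further if needed.
		fmt.Printf("ts=%s level=%s msg=%s\n", m[1], m[2], m[3])
	}
}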
Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.162 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.183 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.207 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.224 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.229 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.240 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.240 [INFO][4474] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.251 [INFO][4474] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.269 [INFO][4474] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.284 [INFO][4474] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.133/26] block=192.168.96.128/26 handle="k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.285 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.133/26] handle="k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.285 [INFO][4474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
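During the sandbox teardowns earlier, the IPAM plugin warned "Asked to release address but it doesn't exist. Ignoring" and then fell back to releasing by workload ID: release is deliberately idempotent, so a repeated CNI DEL cannot fail. A sketch of that behavior with a hypothetical in-memory allocator (types and fields are mine, not Calico's):

package main

import "fmt"

// allocator maps handle IDs to the addresses they claimed.
type allocator struct {
	byHandle map[string][]string
}

// releaseByHandle logs and ignores unknown handles instead of erroring,
// matching the WARNING seen in the teardown entries above.
func (a *allocator) releaseByHandle(handleID string) {
	ips, ok := a.byHandle[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handleID)
		return
	}
	delete(a.byHandle, handleID)
	fmt.Printf("released %v for %s\n", ips, handleID)
}

func main() {
	a := &allocator{byHandle: map[string][]string{}}
	// Handle ID taken from the teardown above; already gone, so a no-op.
	a.releaseByHandle("k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6")
}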
Sep 12 17:37:51.350657 containerd[1476]: 2025-09-12 17:37:51.285 [INFO][4474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.133/26] IPv6=[] ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" HandleID="k8s-pod-network.d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:51.352638 containerd[1476]: 2025-09-12 17:37:51.294 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Namespace="calico-system" Pod="calico-kube-controllers-7b985f5889-5kj7c" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0", GenerateName:"calico-kube-controllers-7b985f5889-", Namespace:"calico-system", SelfLink:"", UID:"beb3804a-cae2-4b48-8eca-eff5a936c3a3", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b985f5889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"calico-kube-controllers-7b985f5889-5kj7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidab8f63f20d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:51.352638 containerd[1476]: 2025-09-12 17:37:51.294 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.133/32] ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Namespace="calico-system" Pod="calico-kube-controllers-7b985f5889-5kj7c" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:51.352638 containerd[1476]: 2025-09-12 17:37:51.295 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidab8f63f20d ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Namespace="calico-system" Pod="calico-kube-controllers-7b985f5889-5kj7c" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:51.352638 containerd[1476]: 2025-09-12 17:37:51.313 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Namespace="calico-system" Pod="calico-kube-controllers-7b985f5889-5kj7c" 
WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:51.352638 containerd[1476]: 2025-09-12 17:37:51.316 [INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Namespace="calico-system" Pod="calico-kube-controllers-7b985f5889-5kj7c" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0", GenerateName:"calico-kube-controllers-7b985f5889-", Namespace:"calico-system", SelfLink:"", UID:"beb3804a-cae2-4b48-8eca-eff5a936c3a3", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b985f5889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada", Pod:"calico-kube-controllers-7b985f5889-5kj7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidab8f63f20d", MAC:"f6:99:cc:6d:d2:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:51.352638 containerd[1476]: 2025-09-12 17:37:51.336 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada" Namespace="calico-system" Pod="calico-kube-controllers-7b985f5889-5kj7c" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:37:51.378693 systemd[1]: Started cri-containerd-147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c.scope - libcontainer container 147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c. 
Sep 12 17:37:51.420658 systemd-networkd[1374]: cali284796c55f4: Link UP Sep 12 17:37:51.427527 systemd-networkd[1374]: cali284796c55f4: Gained carrier Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.045 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0 calico-apiserver-5479fdfc7- calico-apiserver 992872bf-a2b9-4bf7-a206-e135f15fe831 975 0 2025-09-12 17:37:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5479fdfc7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 calico-apiserver-5479fdfc7-qt26m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali284796c55f4 [] [] }} ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-qt26m" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.045 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-qt26m" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.208 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" HandleID="k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.208 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" HandleID="k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"calico-apiserver-5479fdfc7-qt26m", "timestamp":"2025-09-12 17:37:51.207150871 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.209 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.286 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
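The "run-netns-cni\x2d...mount: Deactivated successfully" entries earlier show systemd's unit-name escaping: a dash inside a path component becomes \x2d (while path slashes become plain dashes). The sketch below implements just the dash rule, which is enough to reproduce the mount unit names seen in this log; the full systemd-escape algorithm covers more characters.

package main

import (
	"fmt"
	"strings"
)

// escapeUnitComponent applies only the dash rule of systemd unit-name
// escaping; real systemd-escape also handles other non-alphanumerics.
func escapeUnitComponent(s string) string {
	return strings.ReplaceAll(s, "-", `\x2d`)
}

func main() {
	// Prints run-netns-cni\x2d1a8490b6\x2dd250\x2d2e90\x2d844f\x2d495698db585a.mount,
	// matching the mount unit deactivated in the teardown above.
	netns := "cni-1a8490b6-d250-2e90-844f-495698db585a"
	fmt.Println("run-netns-" + escapeUnitComponent(netns) + ".mount")
}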
Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.287 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.314 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.327 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.350 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.357 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.362 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.362 [INFO][4469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.368 [INFO][4469] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.380 [INFO][4469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.392 [INFO][4469] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.134/26] block=192.168.96.128/26 handle="k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.393 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.134/26] handle="k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.393 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
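[Annotation] The [4469] trail above is one complete Calico IPAM assignment: take the host-wide lock, confirm this node's affinity for block 192.168.96.128/26, load the block, and claim the next free ordinal — .134, immediately after the .133 handed to the kube-controllers pod. A self-contained sketch of just the block-scan step (ipam.go 1220); locking, datastore access and conflict retries are elided:

```go
// Sketch of "Attempting to assign 1 addresses from block": scan an affine
// /26 for the first unallocated ordinal and record the handle that owns it.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr      netip.Prefix   // 192.168.96.128/26 -> 64 ordinals
	allocated map[int]string // ordinal -> handle ID
}

func (b *block) assign(handle string) (netip.Addr, error) {
	size := 1 << (32 - b.cidr.Bits()) // 64 for a /26
	addr := b.cidr.Addr()
	for ord := 0; ord < size; ord++ {
		if _, taken := b.allocated[ord]; !taken {
			b.allocated[ord] = handle
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.96.128/26"),
		// Ordinals 0-5 (.128-.133) already handed out earlier in this log.
		allocated: map[int]string{0: "", 1: "", 2: "", 3: "", 4: "", 5: ""},
	}
	ip, _ := b.assign("k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d")
	fmt.Println(ip) // 192.168.96.134, matching "Successfully claimed IPs" above
}
```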
Sep 12 17:37:51.471387 containerd[1476]: 2025-09-12 17:37:51.393 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.134/26] IPv6=[] ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" HandleID="k8s-pod-network.2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:51.472032 containerd[1476]: 2025-09-12 17:37:51.402 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-qt26m" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0", GenerateName:"calico-apiserver-5479fdfc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"992872bf-a2b9-4bf7-a206-e135f15fe831", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5479fdfc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"calico-apiserver-5479fdfc7-qt26m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali284796c55f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:51.472032 containerd[1476]: 2025-09-12 17:37:51.403 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.134/32] ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-qt26m" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:51.472032 containerd[1476]: 2025-09-12 17:37:51.403 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali284796c55f4 ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-qt26m" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:51.472032 containerd[1476]: 2025-09-12 17:37:51.429 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-qt26m" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:51.472032 containerd[1476]: 2025-09-12 17:37:51.443 [INFO][4436] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-qt26m" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0", GenerateName:"calico-apiserver-5479fdfc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"992872bf-a2b9-4bf7-a206-e135f15fe831", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5479fdfc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d", Pod:"calico-apiserver-5479fdfc7-qt26m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali284796c55f4", MAC:"52:39:f7:29:b5:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:51.472032 containerd[1476]: 2025-09-12 17:37:51.466 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-qt26m" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:37:51.479698 containerd[1476]: time="2025-09-12T17:37:51.478367202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:51.479698 containerd[1476]: time="2025-09-12T17:37:51.478433745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:51.479698 containerd[1476]: time="2025-09-12T17:37:51.478461956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:51.479698 containerd[1476]: time="2025-09-12T17:37:51.478611573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:51.524986 containerd[1476]: time="2025-09-12T17:37:51.524586444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:51.524986 containerd[1476]: time="2025-09-12T17:37:51.524706481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:51.524986 containerd[1476]: time="2025-09-12T17:37:51.524720526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:51.524986 containerd[1476]: time="2025-09-12T17:37:51.524904501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:51.541344 containerd[1476]: time="2025-09-12T17:37:51.541152976Z" level=info msg="StopPodSandbox for \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\"" Sep 12 17:37:51.541650 systemd[1]: Started cri-containerd-d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada.scope - libcontainer container d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada. Sep 12 17:37:51.553208 systemd[1]: run-netns-cni\x2de9af20b1\x2daf2a\x2dbf7e\x2d72c4\x2d87acd0c89d9f.mount: Deactivated successfully. Sep 12 17:37:51.565208 containerd[1476]: time="2025-09-12T17:37:51.565165165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d65df657f-s7j4v,Uid:ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c\"" Sep 12 17:37:51.621294 systemd[1]: Started cri-containerd-2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d.scope - libcontainer container 2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d. Sep 12 17:37:51.703166 containerd[1476]: time="2025-09-12T17:37:51.701894125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b985f5889-5kj7c,Uid:beb3804a-cae2-4b48-8eca-eff5a936c3a3,Namespace:calico-system,Attempt:1,} returns sandbox id \"d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada\"" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.701 [INFO][4614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.701 [INFO][4614] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" iface="eth0" netns="/var/run/netns/cni-de60fe36-a152-ca93-1eb5-e1b7454409cb" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.703 [INFO][4614] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" iface="eth0" netns="/var/run/netns/cni-de60fe36-a152-ca93-1eb5-e1b7454409cb" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.704 [INFO][4614] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" iface="eth0" netns="/var/run/netns/cni-de60fe36-a152-ca93-1eb5-e1b7454409cb" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.704 [INFO][4614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.704 [INFO][4614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.750 [INFO][4643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.750 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.750 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.764 [WARNING][4643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.767 [INFO][4643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.771 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:51.784484 containerd[1476]: 2025-09-12 17:37:51.774 [INFO][4614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:37:51.787218 containerd[1476]: time="2025-09-12T17:37:51.785673298Z" level=info msg="TearDown network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\" successfully" Sep 12 17:37:51.787218 containerd[1476]: time="2025-09-12T17:37:51.786858270Z" level=info msg="StopPodSandbox for \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\" returns successfully" Sep 12 17:37:51.789124 systemd[1]: run-netns-cni\x2dde60fe36\x2da152\x2dca93\x2d1eb5\x2de1b7454409cb.mount: Deactivated successfully. 
Sep 12 17:37:51.792543 kubelet[2530]: E0912 17:37:51.792184 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:51.793371 containerd[1476]: time="2025-09-12T17:37:51.792811559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qwwfs,Uid:1d0f02c4-567d-4f58-a627-4b05e28f6a7c,Namespace:kube-system,Attempt:1,}" Sep 12 17:37:51.808427 containerd[1476]: time="2025-09-12T17:37:51.808373947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5479fdfc7-qt26m,Uid:992872bf-a2b9-4bf7-a206-e135f15fe831,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d\"" Sep 12 17:37:51.863817 containerd[1476]: time="2025-09-12T17:37:51.863758381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:51.865603 containerd[1476]: time="2025-09-12T17:37:51.865522343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:37:51.866241 containerd[1476]: time="2025-09-12T17:37:51.866207573Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:51.868177 containerd[1476]: time="2025-09-12T17:37:51.868133651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:51.870285 containerd[1476]: time="2025-09-12T17:37:51.870240243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.269372691s" Sep 12 17:37:51.870285 containerd[1476]: time="2025-09-12T17:37:51.870279625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:37:51.872838 containerd[1476]: time="2025-09-12T17:37:51.872585025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:37:51.878194 containerd[1476]: time="2025-09-12T17:37:51.878151144Z" level=info msg="CreateContainer within sandbox \"a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:37:51.910833 containerd[1476]: time="2025-09-12T17:37:51.910779156Z" level=info msg="CreateContainer within sandbox \"a23ef0d890a6fbf9665d0822863c62963151f186defc1a207053f6925e8cd051\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8f0ec12cb449eafca199663afaf38f35fbec83aaed85f183c2428a05d78a7a8e\"" Sep 12 17:37:51.912907 containerd[1476]: time="2025-09-12T17:37:51.912786786Z" level=info msg="StartContainer for \"8f0ec12cb449eafca199663afaf38f35fbec83aaed85f183c2428a05d78a7a8e\"" Sep 12 17:37:51.969842 systemd[1]: Started 
cri-containerd-8f0ec12cb449eafca199663afaf38f35fbec83aaed85f183c2428a05d78a7a8e.scope - libcontainer container 8f0ec12cb449eafca199663afaf38f35fbec83aaed85f183c2428a05d78a7a8e. Sep 12 17:37:52.024713 systemd-networkd[1374]: calib19233d6540: Gained IPv6LL Sep 12 17:37:52.046130 systemd-networkd[1374]: cali5a71a795c31: Link UP Sep 12 17:37:52.053850 systemd-networkd[1374]: cali5a71a795c31: Gained carrier Sep 12 17:37:52.080951 containerd[1476]: time="2025-09-12T17:37:52.080866243Z" level=info msg="StartContainer for \"8f0ec12cb449eafca199663afaf38f35fbec83aaed85f183c2428a05d78a7a8e\" returns successfully" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.900 [INFO][4661] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0 coredns-674b8bbfcf- kube-system 1d0f02c4-567d-4f58-a627-4b05e28f6a7c 991 0 2025-09-12 17:37:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 coredns-674b8bbfcf-qwwfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5a71a795c31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Namespace="kube-system" Pod="coredns-674b8bbfcf-qwwfs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.900 [INFO][4661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Namespace="kube-system" Pod="coredns-674b8bbfcf-qwwfs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.958 [INFO][4677] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" HandleID="k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.959 [INFO][4677] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" HandleID="k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"coredns-674b8bbfcf-qwwfs", "timestamp":"2025-09-12 17:37:51.958756288 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.959 [INFO][4677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.959 [INFO][4677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
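[Annotation] Note the named ports on the coredns-qwwfs endpoint above ([{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }]). In the Go struct dumps that follow, the same ports appear hex-encoded as Port:0x35 and Port:0x23c1; a one-line decode confirms they are the familiar values:

```go
// The struct dumps print ports in hex; decode the values attached to the
// coredns endpoint: 0x35 = 53 (dns, dns-tcp), 0x23c1 = 9153 (metrics).
package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // prints: 53 9153
}
```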
Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.959 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.972 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.980 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.987 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.990 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.995 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.996 [INFO][4677] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:51.998 [INFO][4677] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230 Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:52.006 [INFO][4677] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:52.030 [INFO][4677] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.135/26] block=192.168.96.128/26 handle="k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:52.031 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.135/26] handle="k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:52.031 [INFO][4677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:37:52.092612 containerd[1476]: 2025-09-12 17:37:52.031 [INFO][4677] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.135/26] IPv6=[] ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" HandleID="k8s-pod-network.2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:52.095564 containerd[1476]: 2025-09-12 17:37:52.038 [INFO][4661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Namespace="kube-system" Pod="coredns-674b8bbfcf-qwwfs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d0f02c4-567d-4f58-a627-4b05e28f6a7c", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"coredns-674b8bbfcf-qwwfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a71a795c31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:52.095564 containerd[1476]: 2025-09-12 17:37:52.040 [INFO][4661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.135/32] ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Namespace="kube-system" Pod="coredns-674b8bbfcf-qwwfs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:52.095564 containerd[1476]: 2025-09-12 17:37:52.040 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a71a795c31 ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Namespace="kube-system" Pod="coredns-674b8bbfcf-qwwfs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:52.095564 containerd[1476]: 2025-09-12 17:37:52.058 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-qwwfs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:52.095564 containerd[1476]: 2025-09-12 17:37:52.066 [INFO][4661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Namespace="kube-system" Pod="coredns-674b8bbfcf-qwwfs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d0f02c4-567d-4f58-a627-4b05e28f6a7c", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230", Pod:"coredns-674b8bbfcf-qwwfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a71a795c31", MAC:"86:98:4b:04:f2:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:52.095564 containerd[1476]: 2025-09-12 17:37:52.088 [INFO][4661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230" Namespace="kube-system" Pod="coredns-674b8bbfcf-qwwfs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:37:52.149158 containerd[1476]: time="2025-09-12T17:37:52.148628936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:52.149158 containerd[1476]: time="2025-09-12T17:37:52.148709015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:52.149158 containerd[1476]: time="2025-09-12T17:37:52.148721213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:52.150592 containerd[1476]: time="2025-09-12T17:37:52.150149482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:52.180376 systemd[1]: Started cri-containerd-2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230.scope - libcontainer container 2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230. Sep 12 17:37:52.216659 systemd-networkd[1374]: calid4ed627ad9e: Gained IPv6LL Sep 12 17:37:52.255377 containerd[1476]: time="2025-09-12T17:37:52.255247830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qwwfs,Uid:1d0f02c4-567d-4f58-a627-4b05e28f6a7c,Namespace:kube-system,Attempt:1,} returns sandbox id \"2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230\"" Sep 12 17:37:52.260215 kubelet[2530]: E0912 17:37:52.259889 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:52.270638 containerd[1476]: time="2025-09-12T17:37:52.270535992Z" level=info msg="CreateContainer within sandbox \"2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:37:52.282206 systemd-networkd[1374]: cali76b1bd58033: Gained IPv6LL Sep 12 17:37:52.289362 containerd[1476]: time="2025-09-12T17:37:52.289283691Z" level=info msg="CreateContainer within sandbox \"2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8ffd0b5612c9f0abc004262d46ae96cac4b8171bec6df9ce9a4f44dd8feac97\"" Sep 12 17:37:52.290256 containerd[1476]: time="2025-09-12T17:37:52.290227162Z" level=info msg="StartContainer for \"f8ffd0b5612c9f0abc004262d46ae96cac4b8171bec6df9ce9a4f44dd8feac97\"" Sep 12 17:37:52.330774 systemd[1]: Started cri-containerd-f8ffd0b5612c9f0abc004262d46ae96cac4b8171bec6df9ce9a4f44dd8feac97.scope - libcontainer container f8ffd0b5612c9f0abc004262d46ae96cac4b8171bec6df9ce9a4f44dd8feac97. Sep 12 17:37:52.366900 containerd[1476]: time="2025-09-12T17:37:52.366851378Z" level=info msg="StartContainer for \"f8ffd0b5612c9f0abc004262d46ae96cac4b8171bec6df9ce9a4f44dd8feac97\" returns successfully" Sep 12 17:37:52.519362 containerd[1476]: time="2025-09-12T17:37:52.518763241Z" level=info msg="StopPodSandbox for \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\"" Sep 12 17:37:52.520218 containerd[1476]: time="2025-09-12T17:37:52.520184862Z" level=info msg="StopPodSandbox for \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\"" Sep 12 17:37:52.537402 systemd-networkd[1374]: cali284796c55f4: Gained IPv6LL Sep 12 17:37:52.560742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704191151.mount: Deactivated successfully. Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.658 [INFO][4827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.658 [INFO][4827] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" iface="eth0" netns="/var/run/netns/cni-ab575bcf-b511-1fd6-75a5-c9ad626ab25d" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.659 [INFO][4827] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" iface="eth0" netns="/var/run/netns/cni-ab575bcf-b511-1fd6-75a5-c9ad626ab25d" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.659 [INFO][4827] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" iface="eth0" netns="/var/run/netns/cni-ab575bcf-b511-1fd6-75a5-c9ad626ab25d" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.659 [INFO][4827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.659 [INFO][4827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.703 [INFO][4844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.703 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.704 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.713 [WARNING][4844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.713 [INFO][4844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.716 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:52.725075 containerd[1476]: 2025-09-12 17:37:52.720 [INFO][4827] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:37:52.728908 containerd[1476]: time="2025-09-12T17:37:52.728868760Z" level=info msg="TearDown network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\" successfully" Sep 12 17:37:52.731570 containerd[1476]: time="2025-09-12T17:37:52.729091697Z" level=info msg="StopPodSandbox for \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\" returns successfully" Sep 12 17:37:52.734160 containerd[1476]: time="2025-09-12T17:37:52.732346521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkml4,Uid:b83028d3-7d28-465e-b31c-bc68b3a14e07,Namespace:kube-system,Attempt:1,}" Sep 12 17:37:52.734267 kubelet[2530]: E0912 17:37:52.731896 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:52.735187 systemd[1]: run-netns-cni\x2dab575bcf\x2db511\x2d1fd6\x2d75a5\x2dc9ad626ab25d.mount: Deactivated successfully. Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.653 [INFO][4824] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.653 [INFO][4824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" iface="eth0" netns="/var/run/netns/cni-a11ad2a9-4c7a-fc1a-44b7-6a5a2c2a882b" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.654 [INFO][4824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" iface="eth0" netns="/var/run/netns/cni-a11ad2a9-4c7a-fc1a-44b7-6a5a2c2a882b" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.655 [INFO][4824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" iface="eth0" netns="/var/run/netns/cni-a11ad2a9-4c7a-fc1a-44b7-6a5a2c2a882b" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.655 [INFO][4824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.655 [INFO][4824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.708 [INFO][4842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.709 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.716 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.732 [WARNING][4842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.732 [INFO][4842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.736 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:52.745989 containerd[1476]: 2025-09-12 17:37:52.739 [INFO][4824] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:37:52.747380 containerd[1476]: time="2025-09-12T17:37:52.746592614Z" level=info msg="TearDown network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\" successfully" Sep 12 17:37:52.747380 containerd[1476]: time="2025-09-12T17:37:52.746622583Z" level=info msg="StopPodSandbox for \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\" returns successfully" Sep 12 17:37:52.748378 containerd[1476]: time="2025-09-12T17:37:52.748239479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9q7df,Uid:967bd530-b6b8-4b02-a727-a8f9ab0d7150,Namespace:calico-system,Attempt:1,}" Sep 12 17:37:52.753583 systemd[1]: run-netns-cni\x2da11ad2a9\x2d4c7a\x2dfc1a\x2d44b7\x2d6a5a2c2a882b.mount: Deactivated successfully. Sep 12 17:37:52.949304 kubelet[2530]: E0912 17:37:52.949259 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:52.992276 systemd-networkd[1374]: calicb611e69c12: Link UP Sep 12 17:37:52.992648 systemd-networkd[1374]: calicb611e69c12: Gained carrier Sep 12 17:37:53.014134 kubelet[2530]: I0912 17:37:53.013884 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qwwfs" podStartSLOduration=38.013861493 podStartE2EDuration="38.013861493s" podCreationTimestamp="2025-09-12 17:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:52.98169032 +0000 UTC m=+43.611368514" watchObservedRunningTime="2025-09-12 17:37:53.013861493 +0000 UTC m=+43.643539689" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.829 [INFO][4855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0 coredns-674b8bbfcf- kube-system b83028d3-7d28-465e-b31c-bc68b3a14e07 1008 0 2025-09-12 17:37:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 coredns-674b8bbfcf-jkml4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb611e69c12 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-jkml4" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.829 [INFO][4855] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkml4" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.897 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" HandleID="k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.897 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" HandleID="k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"coredns-674b8bbfcf-jkml4", "timestamp":"2025-09-12 17:37:52.897460898 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.897 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.897 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.897 [INFO][4880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.909 [INFO][4880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.916 [INFO][4880] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.922 [INFO][4880] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.925 [INFO][4880] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.928 [INFO][4880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.928 [INFO][4880] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.931 [INFO][4880] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6 Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.942 [INFO][4880] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.961 [INFO][4880] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.136/26] block=192.168.96.128/26 handle="k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.961 [INFO][4880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.136/26] handle="k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.961 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
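[Annotation] "Writing block in order to claim IPs" (ipam.go 1243) followed by "Successfully claimed IPs" is the persistence step: the mutated block is written back conditionally on the revision it was read at, so two hosts racing on the same block cannot both keep an ordinal. A hedged, self-contained compare-and-swap sketch — the real datastore (etcd or Kubernetes CRDs) is reduced here to one guarded struct, which is an illustration, not Calico's implementation:

```go
// Sketch of an optimistic-concurrency block write: apply the claim only if
// the stored revision still matches what the claimant read; retry otherwise.
package main

import (
	"fmt"
	"sync"
)

type storedBlock struct {
	mu       sync.Mutex
	revision int
	freeOrds []int
}

func (s *storedBlock) compareAndSwap(readRev int, mutate func(*[]int)) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if readRev != s.revision {
		return false // another writer got there first; caller re-reads
	}
	mutate(&s.freeOrds)
	s.revision++
	return true
}

func claim(s *storedBlock) int {
	for {
		s.mu.Lock()
		rev, free := s.revision, append([]int(nil), s.freeOrds...)
		s.mu.Unlock()
		if len(free) == 0 {
			return -1 // block full; real IPAM would move to another block
		}
		ord := free[0]
		if s.compareAndSwap(rev, func(f *[]int) { *f = (*f)[1:] }) {
			return ord // corresponds to "Successfully claimed IPs" above
		}
		// conflict: loop, re-read the block, try again
	}
}

func main() {
	s := &storedBlock{freeOrds: []int{8}} // ordinal 8 -> 192.168.96.136
	fmt.Println(claim(s))
}
```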
Sep 12 17:37:53.032411 containerd[1476]: 2025-09-12 17:37:52.962 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.136/26] IPv6=[] ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" HandleID="k8s-pod-network.12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:53.033101 containerd[1476]: 2025-09-12 17:37:52.976 [INFO][4855] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkml4" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b83028d3-7d28-465e-b31c-bc68b3a14e07", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"coredns-674b8bbfcf-jkml4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb611e69c12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:53.033101 containerd[1476]: 2025-09-12 17:37:52.976 [INFO][4855] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.136/32] ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkml4" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:53.033101 containerd[1476]: 2025-09-12 17:37:52.976 [INFO][4855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb611e69c12 ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkml4" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:53.033101 containerd[1476]: 2025-09-12 17:37:52.983 [INFO][4855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-jkml4" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:53.033101 containerd[1476]: 2025-09-12 17:37:52.984 [INFO][4855] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkml4" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b83028d3-7d28-465e-b31c-bc68b3a14e07", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6", Pod:"coredns-674b8bbfcf-jkml4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb611e69c12", MAC:"fe:9f:df:b0:59:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:53.033101 containerd[1476]: 2025-09-12 17:37:53.007 [INFO][4855] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkml4" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:37:53.076447 containerd[1476]: time="2025-09-12T17:37:53.073790361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:53.076447 containerd[1476]: time="2025-09-12T17:37:53.075150291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:53.076447 containerd[1476]: time="2025-09-12T17:37:53.075165395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:53.076447 containerd[1476]: time="2025-09-12T17:37:53.075268134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:53.115703 systemd[1]: Started cri-containerd-12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6.scope - libcontainer container 12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6. Sep 12 17:37:53.116126 systemd-networkd[1374]: calidab8f63f20d: Gained IPv6LL Sep 12 17:37:53.159144 systemd-networkd[1374]: cali372d93ac46f: Link UP Sep 12 17:37:53.160279 systemd-networkd[1374]: cali372d93ac46f: Gained carrier Sep 12 17:37:53.176794 systemd-networkd[1374]: cali5a71a795c31: Gained IPv6LL Sep 12 17:37:53.211583 kubelet[2530]: I0912 17:37:53.210102 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54dd68b9bb-d9lpp" podStartSLOduration=2.294034764 podStartE2EDuration="8.21008336s" podCreationTimestamp="2025-09-12 17:37:45 +0000 UTC" firstStartedPulling="2025-09-12 17:37:45.95563202 +0000 UTC m=+36.585310207" lastFinishedPulling="2025-09-12 17:37:51.871680628 +0000 UTC m=+42.501358803" observedRunningTime="2025-09-12 17:37:53.025690421 +0000 UTC m=+43.655368616" watchObservedRunningTime="2025-09-12 17:37:53.21008336 +0000 UTC m=+43.839761576" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:52.849 [INFO][4865] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0 goldmane-54d579b49d- calico-system 967bd530-b6b8-4b02-a727-a8f9ab0d7150 1007 0 2025-09-12 17:37:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 goldmane-54d579b49d-9q7df eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali372d93ac46f [] [] }} ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Namespace="calico-system" Pod="goldmane-54d579b49d-9q7df" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:52.852 [INFO][4865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Namespace="calico-system" Pod="goldmane-54d579b49d-9q7df" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:52.901 [INFO][4885] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" HandleID="k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:52.902 [INFO][4885] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" HandleID="k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"goldmane-54d579b49d-9q7df", "timestamp":"2025-09-12 17:37:52.901370688 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:52.902 [INFO][4885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:52.963 [INFO][4885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:52.963 [INFO][4885] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.012 [INFO][4885] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.050 [INFO][4885] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.064 [INFO][4885] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.068 [INFO][4885] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.077 [INFO][4885] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.077 [INFO][4885] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.085 [INFO][4885] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9 Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.101 [INFO][4885] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.117 [INFO][4885] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.137/26] block=192.168.96.128/26 handle="k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.117 [INFO][4885] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.137/26] handle="k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.117 [INFO][4885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
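The [INFO][4885] ipam entries above trace Calico's host-affine IPAM flow end to end: take the host-wide IPAM lock, look up the blocks affine to ci-4081.3.6-9-b554e4f7b0, confirm the affinity for 192.168.96.128/26, pick a free address from that block, create a handle, write the block back to claim the IP, and release the lock. The Go sketch below restates that control flow in miniature; it is an illustration of the logged steps only, not Calico's actual implementation, and every identifier in it (Block, autoAssign, hostLock) is invented for readability.

// ipamsketch.go — simplified restatement of the IPAM steps logged above
// (lock → affinity lookup → block load → assign → write → unlock).
// All names are illustrative; they do not match Calico's real code.
package main

import (
	"fmt"
	"sync"
)

type Block struct {
	CIDR string
	Free []string // unassigned addresses remaining in this /26
}

var (
	hostLock sync.Mutex            // the "host-wide IPAM lock" in the log
	blocks   = map[string]*Block{} // blocks with affinity to this host
)

func autoAssign(host, handle string) (string, error) {
	hostLock.Lock()         // "About to acquire host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	// "Looking up existing affinities for host" / "Trying affinity for 192.168.96.128/26"
	for cidr, b := range blocks {
		if len(b.Free) == 0 {
			continue
		}
		// "Attempting to assign 1 addresses from block"
		ip := b.Free[0]
		b.Free = b.Free[1:]
		// "Creating new handle" + "Writing block in order to claim IPs"
		fmt.Printf("claimed %s from %s for handle %s on %s\n", ip, cidr, handle, host)
		return ip, nil
	}
	return "", fmt.Errorf("no affine block with free addresses on %s", host)
}

func main() {
	blocks["192.168.96.128/26"] = &Block{CIDR: "192.168.96.128/26", Free: []string{"192.168.96.137"}}
	if ip, err := autoAssign("ci-4081.3.6-9-b554e4f7b0", "k8s-pod-network.d2ed99…"); err == nil {
		fmt.Println("assigned:", ip) // matches "Successfully claimed IPs: [192.168.96.137/26]"
	}
}

Run against a single block holding 192.168.96.137, it prints the same address the allocator claims in the next entry.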
Sep 12 17:37:53.231704 containerd[1476]: 2025-09-12 17:37:53.117 [INFO][4885] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.137/26] IPv6=[] ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" HandleID="k8s-pod-network.d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:53.232408 containerd[1476]: 2025-09-12 17:37:53.136 [INFO][4865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Namespace="calico-system" Pod="goldmane-54d579b49d-9q7df" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"967bd530-b6b8-4b02-a727-a8f9ab0d7150", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"goldmane-54d579b49d-9q7df", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali372d93ac46f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:53.232408 containerd[1476]: 2025-09-12 17:37:53.136 [INFO][4865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.137/32] ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Namespace="calico-system" Pod="goldmane-54d579b49d-9q7df" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:53.232408 containerd[1476]: 2025-09-12 17:37:53.137 [INFO][4865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali372d93ac46f ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Namespace="calico-system" Pod="goldmane-54d579b49d-9q7df" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:53.232408 containerd[1476]: 2025-09-12 17:37:53.172 [INFO][4865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Namespace="calico-system" Pod="goldmane-54d579b49d-9q7df" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:53.232408 containerd[1476]: 2025-09-12 17:37:53.174 [INFO][4865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" 
Namespace="calico-system" Pod="goldmane-54d579b49d-9q7df" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"967bd530-b6b8-4b02-a727-a8f9ab0d7150", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9", Pod:"goldmane-54d579b49d-9q7df", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali372d93ac46f", MAC:"6a:b0:02:0a:d1:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:53.232408 containerd[1476]: 2025-09-12 17:37:53.215 [INFO][4865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9" Namespace="calico-system" Pod="goldmane-54d579b49d-9q7df" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:37:53.254197 containerd[1476]: time="2025-09-12T17:37:53.254055154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkml4,Uid:b83028d3-7d28-465e-b31c-bc68b3a14e07,Namespace:kube-system,Attempt:1,} returns sandbox id \"12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6\"" Sep 12 17:37:53.259732 kubelet[2530]: E0912 17:37:53.259592 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:53.269225 containerd[1476]: time="2025-09-12T17:37:53.268873153Z" level=info msg="CreateContainer within sandbox \"12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:37:53.327067 containerd[1476]: time="2025-09-12T17:37:53.310030290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:37:53.327067 containerd[1476]: time="2025-09-12T17:37:53.310109325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:37:53.327067 containerd[1476]: time="2025-09-12T17:37:53.310125825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:53.327067 containerd[1476]: time="2025-09-12T17:37:53.310283963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:37:53.348733 systemd[1]: Started cri-containerd-d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9.scope - libcontainer container d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9. Sep 12 17:37:53.397499 containerd[1476]: time="2025-09-12T17:37:53.397148328Z" level=info msg="CreateContainer within sandbox \"12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e0c0af7b74f52f7efb57fd7190d60e6cf7f5e728715782b28fc711addbce03b\"" Sep 12 17:37:53.402062 containerd[1476]: time="2025-09-12T17:37:53.402021069Z" level=info msg="StartContainer for \"3e0c0af7b74f52f7efb57fd7190d60e6cf7f5e728715782b28fc711addbce03b\"" Sep 12 17:37:53.487803 systemd[1]: Started cri-containerd-3e0c0af7b74f52f7efb57fd7190d60e6cf7f5e728715782b28fc711addbce03b.scope - libcontainer container 3e0c0af7b74f52f7efb57fd7190d60e6cf7f5e728715782b28fc711addbce03b. Sep 12 17:37:53.498834 containerd[1476]: time="2025-09-12T17:37:53.498783180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-9q7df,Uid:967bd530-b6b8-4b02-a727-a8f9ab0d7150,Namespace:calico-system,Attempt:1,} returns sandbox id \"d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9\"" Sep 12 17:37:53.556679 containerd[1476]: time="2025-09-12T17:37:53.556594934Z" level=info msg="StartContainer for \"3e0c0af7b74f52f7efb57fd7190d60e6cf7f5e728715782b28fc711addbce03b\" returns successfully" Sep 12 17:37:53.744676 containerd[1476]: time="2025-09-12T17:37:53.743511684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:53.744676 containerd[1476]: time="2025-09-12T17:37:53.744336759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:37:53.745231 containerd[1476]: time="2025-09-12T17:37:53.744687003Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:53.748517 containerd[1476]: time="2025-09-12T17:37:53.747940276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:53.749510 containerd[1476]: time="2025-09-12T17:37:53.748934560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.876314269s" Sep 12 17:37:53.749510 containerd[1476]: time="2025-09-12T17:37:53.748980114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:37:53.752514 containerd[1476]: time="2025-09-12T17:37:53.750590104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:37:53.755735 
containerd[1476]: time="2025-09-12T17:37:53.755616296Z" level=info msg="CreateContainer within sandbox \"c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:37:53.798735 containerd[1476]: time="2025-09-12T17:37:53.798676481Z" level=info msg="CreateContainer within sandbox \"c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"75be598c5a530719a4076a20fb3172ebb996311106c4f67a200e28ee5d09c46d\"" Sep 12 17:37:53.801530 containerd[1476]: time="2025-09-12T17:37:53.799403367Z" level=info msg="StartContainer for \"75be598c5a530719a4076a20fb3172ebb996311106c4f67a200e28ee5d09c46d\"" Sep 12 17:37:53.849891 systemd[1]: Started cri-containerd-75be598c5a530719a4076a20fb3172ebb996311106c4f67a200e28ee5d09c46d.scope - libcontainer container 75be598c5a530719a4076a20fb3172ebb996311106c4f67a200e28ee5d09c46d. Sep 12 17:37:53.903158 containerd[1476]: time="2025-09-12T17:37:53.903113335Z" level=info msg="StartContainer for \"75be598c5a530719a4076a20fb3172ebb996311106c4f67a200e28ee5d09c46d\" returns successfully" Sep 12 17:37:53.966251 kubelet[2530]: E0912 17:37:53.966217 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:53.972069 kubelet[2530]: E0912 17:37:53.971985 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:54.007427 kubelet[2530]: I0912 17:37:54.006018 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jkml4" podStartSLOduration=39.005993902 podStartE2EDuration="39.005993902s" podCreationTimestamp="2025-09-12 17:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:37:53.986131575 +0000 UTC m=+44.615809778" watchObservedRunningTime="2025-09-12 17:37:54.005993902 +0000 UTC m=+44.635672129" Sep 12 17:37:54.841161 systemd-networkd[1374]: calicb611e69c12: Gained IPv6LL Sep 12 17:37:54.974800 kubelet[2530]: E0912 17:37:54.973513 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:54.974800 kubelet[2530]: E0912 17:37:54.973640 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:55.225905 systemd-networkd[1374]: cali372d93ac46f: Gained IPv6LL Sep 12 17:37:55.983718 kubelet[2530]: E0912 17:37:55.983669 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:55.984880 kubelet[2530]: E0912 17:37:55.983684 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:37:56.510518 containerd[1476]: time="2025-09-12T17:37:56.510369496Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:56.511306 containerd[1476]: time="2025-09-12T17:37:56.511240966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:37:56.512620 containerd[1476]: time="2025-09-12T17:37:56.512552822Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:56.515218 containerd[1476]: time="2025-09-12T17:37:56.514994645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:56.516384 containerd[1476]: time="2025-09-12T17:37:56.516280165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.765654329s" Sep 12 17:37:56.516384 containerd[1476]: time="2025-09-12T17:37:56.516314394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:37:56.519789 containerd[1476]: time="2025-09-12T17:37:56.518772170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:37:56.524012 containerd[1476]: time="2025-09-12T17:37:56.523875025Z" level=info msg="CreateContainer within sandbox \"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:37:56.544167 containerd[1476]: time="2025-09-12T17:37:56.544069831Z" level=info msg="CreateContainer within sandbox \"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237\"" Sep 12 17:37:56.546090 containerd[1476]: time="2025-09-12T17:37:56.545302906Z" level=info msg="StartContainer for \"5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237\"" Sep 12 17:37:56.606766 systemd[1]: Started cri-containerd-5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237.scope - libcontainer container 5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237. 
Sep 12 17:37:56.678255 containerd[1476]: time="2025-09-12T17:37:56.678186074Z" level=info msg="StartContainer for \"5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237\" returns successfully" Sep 12 17:37:56.923529 containerd[1476]: time="2025-09-12T17:37:56.923260249Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:37:56.923683 containerd[1476]: time="2025-09-12T17:37:56.923591906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:37:56.926819 containerd[1476]: time="2025-09-12T17:37:56.926658438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 407.844283ms" Sep 12 17:37:56.926819 containerd[1476]: time="2025-09-12T17:37:56.926705585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:37:56.928798 containerd[1476]: time="2025-09-12T17:37:56.927766782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:37:56.936867 containerd[1476]: time="2025-09-12T17:37:56.936726533Z" level=info msg="CreateContainer within sandbox \"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:37:56.953747 containerd[1476]: time="2025-09-12T17:37:56.950734598Z" level=info msg="CreateContainer within sandbox \"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12\"" Sep 12 17:37:56.954992 containerd[1476]: time="2025-09-12T17:37:56.954501415Z" level=info msg="StartContainer for \"1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12\"" Sep 12 17:37:56.991771 systemd[1]: Started cri-containerd-1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12.scope - libcontainer container 1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12. 
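The sequence above is the CRI container lifecycle as containerd logs it: the pod sandbox is already running, CreateContainer within the sandbox returns a container ID, StartContainer launches it, and systemd tracks the resulting shim as a cri-containerd-<id>.scope unit. Roughly the same lifecycle can be driven directly with containerd's Go client; the sketch below is a minimal illustration under assumptions, not what the kubelet executes — it assumes containerd's default socket, a containerd 1.x client (import path github.com/containerd/containerd), and the k8s.io namespace the kubelet uses, with error handling collapsed to log.Fatal.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Counterpart of the logged PullImage step.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Counterpart of "CreateContainer within sandbox … returns container id".
	container, err := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Counterpart of "StartContainer … returns successfully"; the running task
	// is what systemd wraps as the cri-containerd-<id>.scope unit.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}

Note also the pull timings in this stretch: the first pull of calico/apiserver took 2.765654329s with 47333864 bytes read, while the second took 407.844283ms with only 77 bytes read — the blobs were already local, so only the manifest had to be checked.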
Sep 12 17:37:57.024498 kubelet[2530]: I0912 17:37:57.024136 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d65df657f-lctzq" podStartSLOduration=27.17664247 podStartE2EDuration="33.024106399s" podCreationTimestamp="2025-09-12 17:37:24 +0000 UTC" firstStartedPulling="2025-09-12 17:37:50.670646166 +0000 UTC m=+41.300324343" lastFinishedPulling="2025-09-12 17:37:56.518110097 +0000 UTC m=+47.147788272" observedRunningTime="2025-09-12 17:37:57.022620773 +0000 UTC m=+47.652298969" watchObservedRunningTime="2025-09-12 17:37:57.024106399 +0000 UTC m=+47.653784591" Sep 12 17:37:57.074847 containerd[1476]: time="2025-09-12T17:37:57.074785042Z" level=info msg="StartContainer for \"1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12\" returns successfully" Sep 12 17:37:58.035491 kubelet[2530]: I0912 17:37:58.013501 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:37:58.960929 kubelet[2530]: I0912 17:37:58.959829 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d65df657f-s7j4v" podStartSLOduration=29.603718997 podStartE2EDuration="34.959812046s" podCreationTimestamp="2025-09-12 17:37:24 +0000 UTC" firstStartedPulling="2025-09-12 17:37:51.571498044 +0000 UTC m=+42.201176231" lastFinishedPulling="2025-09-12 17:37:56.927591093 +0000 UTC m=+47.557269280" observedRunningTime="2025-09-12 17:37:58.038074651 +0000 UTC m=+48.667752849" watchObservedRunningTime="2025-09-12 17:37:58.959812046 +0000 UTC m=+49.589490241" Sep 12 17:38:00.838514 containerd[1476]: time="2025-09-12T17:38:00.838224264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:00.840570 containerd[1476]: time="2025-09-12T17:38:00.839808497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:38:00.841334 containerd[1476]: time="2025-09-12T17:38:00.841271852Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:00.845468 containerd[1476]: time="2025-09-12T17:38:00.843962287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:00.845468 containerd[1476]: time="2025-09-12T17:38:00.844914073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.917114997s" Sep 12 17:38:00.845468 containerd[1476]: time="2025-09-12T17:38:00.844950868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:38:00.869291 containerd[1476]: time="2025-09-12T17:38:00.867162825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:38:00.973918 containerd[1476]: time="2025-09-12T17:38:00.972559789Z" 
level=info msg="CreateContainer within sandbox \"d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:38:01.008126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825116926.mount: Deactivated successfully. Sep 12 17:38:01.013834 containerd[1476]: time="2025-09-12T17:38:01.013788735Z" level=info msg="CreateContainer within sandbox \"d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"60b9432c80afe62090a7517ada9c552b63db15cd249111aec5ef87d9b7a76f42\"" Sep 12 17:38:01.018461 containerd[1476]: time="2025-09-12T17:38:01.016173801Z" level=info msg="StartContainer for \"60b9432c80afe62090a7517ada9c552b63db15cd249111aec5ef87d9b7a76f42\"" Sep 12 17:38:01.150642 systemd[1]: Started cri-containerd-60b9432c80afe62090a7517ada9c552b63db15cd249111aec5ef87d9b7a76f42.scope - libcontainer container 60b9432c80afe62090a7517ada9c552b63db15cd249111aec5ef87d9b7a76f42. Sep 12 17:38:01.271788 containerd[1476]: time="2025-09-12T17:38:01.271716189Z" level=info msg="StartContainer for \"60b9432c80afe62090a7517ada9c552b63db15cd249111aec5ef87d9b7a76f42\" returns successfully" Sep 12 17:38:01.312730 containerd[1476]: time="2025-09-12T17:38:01.312591777Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:01.313445 containerd[1476]: time="2025-09-12T17:38:01.313359313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:38:01.319981 containerd[1476]: time="2025-09-12T17:38:01.319862602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 452.642069ms" Sep 12 17:38:01.319981 containerd[1476]: time="2025-09-12T17:38:01.319937464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:38:01.325085 containerd[1476]: time="2025-09-12T17:38:01.324858418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:38:01.330316 containerd[1476]: time="2025-09-12T17:38:01.329788210Z" level=info msg="CreateContainer within sandbox \"2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:38:01.377782 containerd[1476]: time="2025-09-12T17:38:01.377521012Z" level=info msg="CreateContainer within sandbox \"2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e44bc8ea2ed8241edebe8fbe8138166bfbcce7251af78c21bda0cc95e2149d6\"" Sep 12 17:38:01.381505 containerd[1476]: time="2025-09-12T17:38:01.380719434Z" level=info msg="StartContainer for \"3e44bc8ea2ed8241edebe8fbe8138166bfbcce7251af78c21bda0cc95e2149d6\"" Sep 12 17:38:01.457032 systemd[1]: Started cri-containerd-3e44bc8ea2ed8241edebe8fbe8138166bfbcce7251af78c21bda0cc95e2149d6.scope - libcontainer container 
3e44bc8ea2ed8241edebe8fbe8138166bfbcce7251af78c21bda0cc95e2149d6. Sep 12 17:38:01.595146 containerd[1476]: time="2025-09-12T17:38:01.593705023Z" level=info msg="StartContainer for \"3e44bc8ea2ed8241edebe8fbe8138166bfbcce7251af78c21bda0cc95e2149d6\" returns successfully" Sep 12 17:38:02.187330 kubelet[2530]: I0912 17:38:02.185634 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b985f5889-5kj7c" podStartSLOduration=25.027815995 podStartE2EDuration="34.18560479s" podCreationTimestamp="2025-09-12 17:37:28 +0000 UTC" firstStartedPulling="2025-09-12 17:37:51.708624616 +0000 UTC m=+42.338302791" lastFinishedPulling="2025-09-12 17:38:00.866413412 +0000 UTC m=+51.496091586" observedRunningTime="2025-09-12 17:38:02.149504068 +0000 UTC m=+52.779182262" watchObservedRunningTime="2025-09-12 17:38:02.18560479 +0000 UTC m=+52.815283263" Sep 12 17:38:02.231530 kubelet[2530]: I0912 17:38:02.229412 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5479fdfc7-qt26m" podStartSLOduration=27.725779101 podStartE2EDuration="37.229393671s" podCreationTimestamp="2025-09-12 17:37:25 +0000 UTC" firstStartedPulling="2025-09-12 17:37:51.818206702 +0000 UTC m=+42.447884877" lastFinishedPulling="2025-09-12 17:38:01.321821258 +0000 UTC m=+51.951499447" observedRunningTime="2025-09-12 17:38:02.223972496 +0000 UTC m=+52.853650693" watchObservedRunningTime="2025-09-12 17:38:02.229393671 +0000 UTC m=+52.859071869" Sep 12 17:38:02.487225 systemd[1]: Started sshd@7-159.223.198.129:22-147.75.109.163:49760.service - OpenSSH per-connection server daemon (147.75.109.163:49760). Sep 12 17:38:02.634800 sshd[5298]: Accepted publickey for core from 147.75.109.163 port 49760 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:38:02.640169 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:38:02.648567 systemd-logind[1447]: New session 8 of user core. Sep 12 17:38:02.655805 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:38:03.100891 kubelet[2530]: I0912 17:38:03.100581 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:38:03.705568 sshd[5298]: pam_unix(sshd:session): session closed for user core Sep 12 17:38:03.721418 systemd[1]: sshd@7-159.223.198.129:22-147.75.109.163:49760.service: Deactivated successfully. Sep 12 17:38:03.746023 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:38:03.756221 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:38:03.763687 systemd-logind[1447]: Removed session 8. Sep 12 17:38:05.170090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188607857.mount: Deactivated successfully. 
Sep 12 17:38:05.833932 containerd[1476]: time="2025-09-12T17:38:05.833699147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:05.835609 containerd[1476]: time="2025-09-12T17:38:05.835016092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:38:05.835956 containerd[1476]: time="2025-09-12T17:38:05.835926353Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:05.838398 containerd[1476]: time="2025-09-12T17:38:05.838341698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:05.839607 containerd[1476]: time="2025-09-12T17:38:05.839272632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.51436662s" Sep 12 17:38:05.839607 containerd[1476]: time="2025-09-12T17:38:05.839308613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:38:05.902749 containerd[1476]: time="2025-09-12T17:38:05.902713229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:38:05.991746 containerd[1476]: time="2025-09-12T17:38:05.991619831Z" level=info msg="CreateContainer within sandbox \"d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:38:06.025053 containerd[1476]: time="2025-09-12T17:38:06.024157343Z" level=info msg="CreateContainer within sandbox \"d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"24e7e7f7d1daf620ddee4b1979a146b136e7f204a59b6dc4c1a3ffb90272a4ff\"" Sep 12 17:38:06.030285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358415301.mount: Deactivated successfully. Sep 12 17:38:06.031278 containerd[1476]: time="2025-09-12T17:38:06.030274522Z" level=info msg="StartContainer for \"24e7e7f7d1daf620ddee4b1979a146b136e7f204a59b6dc4c1a3ffb90272a4ff\"" Sep 12 17:38:06.258317 systemd[1]: Started cri-containerd-24e7e7f7d1daf620ddee4b1979a146b136e7f204a59b6dc4c1a3ffb90272a4ff.scope - libcontainer container 24e7e7f7d1daf620ddee4b1979a146b136e7f204a59b6dc4c1a3ffb90272a4ff. Sep 12 17:38:06.363598 containerd[1476]: time="2025-09-12T17:38:06.363557271Z" level=info msg="StartContainer for \"24e7e7f7d1daf620ddee4b1979a146b136e7f204a59b6dc4c1a3ffb90272a4ff\" returns successfully" Sep 12 17:38:07.403709 systemd[1]: run-containerd-runc-k8s.io-24e7e7f7d1daf620ddee4b1979a146b136e7f204a59b6dc4c1a3ffb90272a4ff-runc.XGorAY.mount: Deactivated successfully. 
Sep 12 17:38:07.557428 kubelet[2530]: I0912 17:38:07.516067 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-9q7df" podStartSLOduration=28.115046644 podStartE2EDuration="40.51603027s" podCreationTimestamp="2025-09-12 17:37:27 +0000 UTC" firstStartedPulling="2025-09-12 17:37:53.504180516 +0000 UTC m=+44.133858705" lastFinishedPulling="2025-09-12 17:38:05.90516412 +0000 UTC m=+56.534842331" observedRunningTime="2025-09-12 17:38:07.449627842 +0000 UTC m=+58.079306040" watchObservedRunningTime="2025-09-12 17:38:07.51603027 +0000 UTC m=+58.145708465" Sep 12 17:38:07.821260 containerd[1476]: time="2025-09-12T17:38:07.820641170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:07.825503 containerd[1476]: time="2025-09-12T17:38:07.823740539Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:07.825503 containerd[1476]: time="2025-09-12T17:38:07.824166310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:38:07.826679 containerd[1476]: time="2025-09-12T17:38:07.826643065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:38:07.827507 containerd[1476]: time="2025-09-12T17:38:07.827459623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.924513528s" Sep 12 17:38:07.827609 containerd[1476]: time="2025-09-12T17:38:07.827595561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:38:07.842361 containerd[1476]: time="2025-09-12T17:38:07.842317980Z" level=info msg="CreateContainer within sandbox \"c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:38:07.883958 containerd[1476]: time="2025-09-12T17:38:07.883380911Z" level=info msg="CreateContainer within sandbox \"c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f86873ae73767f8af4f20999f1826d80d106394e64bbc0e66beb206a064c1a8d\"" Sep 12 17:38:07.895274 containerd[1476]: time="2025-09-12T17:38:07.895225089Z" level=info msg="StartContainer for \"f86873ae73767f8af4f20999f1826d80d106394e64bbc0e66beb206a064c1a8d\"" Sep 12 17:38:07.951695 systemd[1]: Started cri-containerd-f86873ae73767f8af4f20999f1826d80d106394e64bbc0e66beb206a064c1a8d.scope - libcontainer container f86873ae73767f8af4f20999f1826d80d106394e64bbc0e66beb206a064c1a8d. 
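The podStartSLOduration bookkeeping in these kubelet entries is plain arithmetic over the timestamps the tracker logs: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that, so pull time does not count against the startup SLO. The snippet below rechecks the goldmane-54d579b49d-9q7df numbers from the entry above and reproduces the logged 40.51603027s / ≈28.115s values up to display rounding.

package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the "+0000 UTC" form the kubelet logs use.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-12 17:37:27 +0000 UTC")                 // podCreationTimestamp
	firstPull := mustParse("2025-09-12 17:37:53.504180516 +0000 UTC")     // firstStartedPulling
	lastPull := mustParse("2025-09-12 17:38:05.90516412 +0000 UTC")       // lastFinishedPulling
	observedRunning := mustParse("2025-09-12 17:38:07.51603027 +0000 UTC") // watchObservedRunningTime

	e2e := observedRunning.Sub(created)  // podStartE2EDuration: 40.51603027s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus the pull window, ≈28.115046s
	fmt.Println(e2e, slo)
}

The same identity holds for the calico-apiserver-5d65df657f-lctzq entry earlier in the log: 33.024106399s minus a 5.847463931s pull window gives the logged 27.17664247s.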
Sep 12 17:38:07.994764 containerd[1476]: time="2025-09-12T17:38:07.994727249Z" level=info msg="StartContainer for \"f86873ae73767f8af4f20999f1826d80d106394e64bbc0e66beb206a064c1a8d\" returns successfully" Sep 12 17:38:08.378971 kubelet[2530]: I0912 17:38:08.375933 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pxnxg" podStartSLOduration=23.146190026 podStartE2EDuration="40.336038839s" podCreationTimestamp="2025-09-12 17:37:28 +0000 UTC" firstStartedPulling="2025-09-12 17:37:50.644765516 +0000 UTC m=+41.274443707" lastFinishedPulling="2025-09-12 17:38:07.834614332 +0000 UTC m=+58.464292520" observedRunningTime="2025-09-12 17:38:08.307806576 +0000 UTC m=+58.937484767" watchObservedRunningTime="2025-09-12 17:38:08.336038839 +0000 UTC m=+58.965717038" Sep 12 17:38:08.735894 systemd[1]: Started sshd@8-159.223.198.129:22-147.75.109.163:49776.service - OpenSSH per-connection server daemon (147.75.109.163:49776). Sep 12 17:38:08.776846 kubelet[2530]: I0912 17:38:08.774645 2530 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:38:08.789510 kubelet[2530]: I0912 17:38:08.789395 2530 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:38:08.935651 sshd[5459]: Accepted publickey for core from 147.75.109.163 port 49776 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:38:08.937939 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:38:08.944850 systemd-logind[1447]: New session 9 of user core. Sep 12 17:38:08.951750 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:38:09.706081 sshd[5459]: pam_unix(sshd:session): session closed for user core Sep 12 17:38:09.711983 systemd[1]: sshd@8-159.223.198.129:22-147.75.109.163:49776.service: Deactivated successfully. Sep 12 17:38:09.717096 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:38:09.721745 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:38:09.723317 systemd-logind[1447]: Removed session 9. Sep 12 17:38:09.752253 containerd[1476]: time="2025-09-12T17:38:09.752022391Z" level=info msg="StopPodSandbox for \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\"" Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.025 [WARNING][5505] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"967bd530-b6b8-4b02-a727-a8f9ab0d7150", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9", Pod:"goldmane-54d579b49d-9q7df", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali372d93ac46f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.028 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.028 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" iface="eth0" netns="" Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.028 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.028 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.178 [INFO][5512] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.181 [INFO][5512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.181 [INFO][5512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.193 [WARNING][5512] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.193 [INFO][5512] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.195 [INFO][5512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:10.200551 containerd[1476]: 2025-09-12 17:38:10.197 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:38:10.202730 containerd[1476]: time="2025-09-12T17:38:10.201714919Z" level=info msg="TearDown network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\" successfully" Sep 12 17:38:10.202730 containerd[1476]: time="2025-09-12T17:38:10.201767551Z" level=info msg="StopPodSandbox for \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\" returns successfully" Sep 12 17:38:10.319332 containerd[1476]: time="2025-09-12T17:38:10.319254565Z" level=info msg="RemovePodSandbox for \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\"" Sep 12 17:38:10.324261 containerd[1476]: time="2025-09-12T17:38:10.324194098Z" level=info msg="Forcibly stopping sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\"" Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.372 [WARNING][5526] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"967bd530-b6b8-4b02-a727-a8f9ab0d7150", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"d2ed9922b17b679a25a240298120c3c2a18aa51333220863409973f6121304b9", Pod:"goldmane-54d579b49d-9q7df", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali372d93ac46f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.372 [INFO][5526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.372 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" iface="eth0" netns="" Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.372 [INFO][5526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.372 [INFO][5526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.402 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.402 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.402 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.409 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.409 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" HandleID="k8s-pod-network.dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-goldmane--54d579b49d--9q7df-eth0" Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.411 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:10.418527 containerd[1476]: 2025-09-12 17:38:10.415 [INFO][5526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87" Sep 12 17:38:10.419397 containerd[1476]: time="2025-09-12T17:38:10.419133966Z" level=info msg="TearDown network for sandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\" successfully" Sep 12 17:38:10.513177 containerd[1476]: time="2025-09-12T17:38:10.512428270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:10.521714 containerd[1476]: time="2025-09-12T17:38:10.521649751Z" level=info msg="RemovePodSandbox \"dd7da4bb25e46d70e7fd45a2f2fa1e9de05b461e43c6c34648324941aa1b2a87\" returns successfully" Sep 12 17:38:10.530236 containerd[1476]: time="2025-09-12T17:38:10.530176280Z" level=info msg="StopPodSandbox for \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\"" Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.574 [WARNING][5547] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0", GenerateName:"calico-apiserver-5479fdfc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"992872bf-a2b9-4bf7-a206-e135f15fe831", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5479fdfc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d", Pod:"calico-apiserver-5479fdfc7-qt26m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali284796c55f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.575 [INFO][5547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.575 [INFO][5547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" iface="eth0" netns="" Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.575 [INFO][5547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.575 [INFO][5547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.600 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.600 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.600 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.609 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.609 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.611 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:10.616889 containerd[1476]: 2025-09-12 17:38:10.614 [INFO][5547] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:38:10.618639 containerd[1476]: time="2025-09-12T17:38:10.616998379Z" level=info msg="TearDown network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\" successfully" Sep 12 17:38:10.618639 containerd[1476]: time="2025-09-12T17:38:10.617455786Z" level=info msg="StopPodSandbox for \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\" returns successfully" Sep 12 17:38:10.618639 containerd[1476]: time="2025-09-12T17:38:10.618120836Z" level=info msg="RemovePodSandbox for \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\"" Sep 12 17:38:10.618639 containerd[1476]: time="2025-09-12T17:38:10.618149504Z" level=info msg="Forcibly stopping sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\"" Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.660 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0", GenerateName:"calico-apiserver-5479fdfc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"992872bf-a2b9-4bf7-a206-e135f15fe831", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5479fdfc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"2bcecbc652af8f33501bfdc132c9bf0f12000daa5382528537506dff54cbb88d", Pod:"calico-apiserver-5479fdfc7-qt26m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali284796c55f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.660 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.660 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" iface="eth0" netns="" Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.660 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.660 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.687 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.687 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.688 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.697 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.697 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" HandleID="k8s-pod-network.1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--qt26m-eth0" Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.699 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:10.704136 containerd[1476]: 2025-09-12 17:38:10.702 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1" Sep 12 17:38:10.705450 containerd[1476]: time="2025-09-12T17:38:10.704819826Z" level=info msg="TearDown network for sandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\" successfully" Sep 12 17:38:10.708346 containerd[1476]: time="2025-09-12T17:38:10.708290271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:10.708508 containerd[1476]: time="2025-09-12T17:38:10.708400250Z" level=info msg="RemovePodSandbox \"1b7692d6d1a5d24ecd1cbdbd17c0eba2fb77327405ddd4df11fb26667b5ed4b1\" returns successfully" Sep 12 17:38:10.709052 containerd[1476]: time="2025-09-12T17:38:10.709019006Z" level=info msg="StopPodSandbox for \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\"" Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.760 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b83028d3-7d28-465e-b31c-bc68b3a14e07", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6", Pod:"coredns-674b8bbfcf-jkml4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb611e69c12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.761 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.761 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" iface="eth0" netns="" Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.761 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.761 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.793 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.795 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.795 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.803 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.803 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.806 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:10.813548 containerd[1476]: 2025-09-12 17:38:10.810 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:38:10.813548 containerd[1476]: time="2025-09-12T17:38:10.813099847Z" level=info msg="TearDown network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\" successfully" Sep 12 17:38:10.813548 containerd[1476]: time="2025-09-12T17:38:10.813125447Z" level=info msg="StopPodSandbox for \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\" returns successfully" Sep 12 17:38:10.816896 containerd[1476]: time="2025-09-12T17:38:10.813920436Z" level=info msg="RemovePodSandbox for \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\"" Sep 12 17:38:10.816896 containerd[1476]: time="2025-09-12T17:38:10.813948119Z" level=info msg="Forcibly stopping sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\"" Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.860 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b83028d3-7d28-465e-b31c-bc68b3a14e07", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"12aa75f6a9776ccd0947386660b86b25cc0d5b557ba7a53b109e62f929e470e6", Pod:"coredns-674b8bbfcf-jkml4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb611e69c12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.860 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.860 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" iface="eth0" netns="" Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.861 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.861 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.895 [INFO][5620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.895 [INFO][5620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.895 [INFO][5620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.902 [WARNING][5620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.902 [INFO][5620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" HandleID="k8s-pod-network.ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--jkml4-eth0" Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.904 [INFO][5620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:10.911741 containerd[1476]: 2025-09-12 17:38:10.906 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0" Sep 12 17:38:10.912861 containerd[1476]: time="2025-09-12T17:38:10.912644630Z" level=info msg="TearDown network for sandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\" successfully" Sep 12 17:38:10.916290 containerd[1476]: time="2025-09-12T17:38:10.916230462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:10.916422 containerd[1476]: time="2025-09-12T17:38:10.916313073Z" level=info msg="RemovePodSandbox \"ecf04b6e65ee20874796937f37276dd3373ff5cb613a26c9a3829494e0bf88d0\" returns successfully" Sep 12 17:38:10.916906 containerd[1476]: time="2025-09-12T17:38:10.916872200Z" level=info msg="StopPodSandbox for \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\"" Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:10.971 [WARNING][5634] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0", GenerateName:"calico-apiserver-5d65df657f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65df657f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba", Pod:"calico-apiserver-5d65df657f-lctzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib19233d6540", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:10.972 [INFO][5634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:10.972 [INFO][5634] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" iface="eth0" netns="" Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:10.972 [INFO][5634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:10.972 [INFO][5634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:11.001 [INFO][5641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:11.001 [INFO][5641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:11.002 [INFO][5641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:11.012 [WARNING][5641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:11.012 [INFO][5641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:11.016 [INFO][5641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.021742 containerd[1476]: 2025-09-12 17:38:11.019 [INFO][5634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:38:11.023195 containerd[1476]: time="2025-09-12T17:38:11.021719221Z" level=info msg="TearDown network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\" successfully" Sep 12 17:38:11.023195 containerd[1476]: time="2025-09-12T17:38:11.022097953Z" level=info msg="StopPodSandbox for \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\" returns successfully" Sep 12 17:38:11.023195 containerd[1476]: time="2025-09-12T17:38:11.023054017Z" level=info msg="RemovePodSandbox for \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\"" Sep 12 17:38:11.023195 containerd[1476]: time="2025-09-12T17:38:11.023083545Z" level=info msg="Forcibly stopping sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\"" Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.070 [WARNING][5655] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0", GenerateName:"calico-apiserver-5d65df657f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65df657f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba", Pod:"calico-apiserver-5d65df657f-lctzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib19233d6540", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.071 [INFO][5655] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.071 [INFO][5655] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" iface="eth0" netns="" Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.071 [INFO][5655] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.071 [INFO][5655] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.107 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.107 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.107 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.118 [WARNING][5662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.118 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" HandleID="k8s-pod-network.9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.122 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.131605 containerd[1476]: 2025-09-12 17:38:11.128 [INFO][5655] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897" Sep 12 17:38:11.131605 containerd[1476]: time="2025-09-12T17:38:11.131376568Z" level=info msg="TearDown network for sandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\" successfully" Sep 12 17:38:11.136224 containerd[1476]: time="2025-09-12T17:38:11.136168148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:11.136541 containerd[1476]: time="2025-09-12T17:38:11.136262543Z" level=info msg="RemovePodSandbox \"9b244803fef1b7def664a5613d5252d3ef03a696e86b70e0d112365c3a590897\" returns successfully" Sep 12 17:38:11.137631 containerd[1476]: time="2025-09-12T17:38:11.137560239Z" level=info msg="StopPodSandbox for \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\"" Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.180 [WARNING][5676] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89a00481-6e24-49f9-825b-7014149c8b95", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d", Pod:"csi-node-driver-pxnxg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76b1bd58033", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.180 [INFO][5676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.180 [INFO][5676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" iface="eth0" netns="" Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.180 [INFO][5676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.180 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.208 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.208 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.208 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.215 [WARNING][5683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.215 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.217 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.221741 containerd[1476]: 2025-09-12 17:38:11.219 [INFO][5676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:38:11.223287 containerd[1476]: time="2025-09-12T17:38:11.221768659Z" level=info msg="TearDown network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\" successfully" Sep 12 17:38:11.223287 containerd[1476]: time="2025-09-12T17:38:11.221839448Z" level=info msg="StopPodSandbox for \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\" returns successfully" Sep 12 17:38:11.223287 containerd[1476]: time="2025-09-12T17:38:11.222338947Z" level=info msg="RemovePodSandbox for \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\"" Sep 12 17:38:11.223287 containerd[1476]: time="2025-09-12T17:38:11.222363500Z" level=info msg="Forcibly stopping sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\"" Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.263 [WARNING][5697] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89a00481-6e24-49f9-825b-7014149c8b95", ResourceVersion:"1185", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"c17eac73d7fbc1306dec20cd235260b8da6ad6a757b2eeb682c86119fee2b32d", Pod:"csi-node-driver-pxnxg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali76b1bd58033", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.264 [INFO][5697] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.264 [INFO][5697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" iface="eth0" netns="" Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.264 [INFO][5697] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.264 [INFO][5697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.300 [INFO][5705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.300 [INFO][5705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.300 [INFO][5705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.310 [WARNING][5705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.310 [INFO][5705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" HandleID="k8s-pod-network.4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-csi--node--driver--pxnxg-eth0" Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.314 [INFO][5705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.322562 containerd[1476]: 2025-09-12 17:38:11.316 [INFO][5697] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504" Sep 12 17:38:11.326434 containerd[1476]: time="2025-09-12T17:38:11.321706566Z" level=info msg="TearDown network for sandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\" successfully" Sep 12 17:38:11.327926 containerd[1476]: time="2025-09-12T17:38:11.327882043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:11.328160 containerd[1476]: time="2025-09-12T17:38:11.328141141Z" level=info msg="RemovePodSandbox \"4338c2da52579dd0537b90fba7c05f3615e7b2c0e6bb1a49c8b2911e6d487504\" returns successfully" Sep 12 17:38:11.330118 containerd[1476]: time="2025-09-12T17:38:11.330072258Z" level=info msg="StopPodSandbox for \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\"" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.379 [WARNING][5719] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.380 [INFO][5719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.380 [INFO][5719] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" iface="eth0" netns="" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.380 [INFO][5719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.380 [INFO][5719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.414 [INFO][5726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.414 [INFO][5726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.415 [INFO][5726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.422 [WARNING][5726] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.422 [INFO][5726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.425 [INFO][5726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.434534 containerd[1476]: 2025-09-12 17:38:11.428 [INFO][5719] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:38:11.434534 containerd[1476]: time="2025-09-12T17:38:11.432421394Z" level=info msg="TearDown network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\" successfully" Sep 12 17:38:11.434534 containerd[1476]: time="2025-09-12T17:38:11.432446632Z" level=info msg="StopPodSandbox for \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\" returns successfully" Sep 12 17:38:11.437008 containerd[1476]: time="2025-09-12T17:38:11.435453722Z" level=info msg="RemovePodSandbox for \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\"" Sep 12 17:38:11.437008 containerd[1476]: time="2025-09-12T17:38:11.435543439Z" level=info msg="Forcibly stopping sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\"" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.488 [WARNING][5740] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.488 [INFO][5740] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.488 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" iface="eth0" netns="" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.488 [INFO][5740] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.488 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.515 [INFO][5747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.515 [INFO][5747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.515 [INFO][5747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.526 [WARNING][5747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.526 [INFO][5747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" HandleID="k8s-pod-network.bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-whisker--78dd6cf6dc--2j2r9-eth0" Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.529 [INFO][5747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.537148 containerd[1476]: 2025-09-12 17:38:11.532 [INFO][5740] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8" Sep 12 17:38:11.537148 containerd[1476]: time="2025-09-12T17:38:11.536352251Z" level=info msg="TearDown network for sandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\" successfully" Sep 12 17:38:11.538954 containerd[1476]: time="2025-09-12T17:38:11.538921780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:11.539070 containerd[1476]: time="2025-09-12T17:38:11.538995452Z" level=info msg="RemovePodSandbox \"bdb1c5ad2c805a20609a6e3026654e56b05ad828f875fcaa781d223e4ff856b8\" returns successfully" Sep 12 17:38:11.539515 containerd[1476]: time="2025-09-12T17:38:11.539493663Z" level=info msg="StopPodSandbox for \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\"" Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.591 [WARNING][5761] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d0f02c4-567d-4f58-a627-4b05e28f6a7c", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230", Pod:"coredns-674b8bbfcf-qwwfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a71a795c31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.592 [INFO][5761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.592 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" iface="eth0" netns="" Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.592 [INFO][5761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.592 [INFO][5761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.635 [INFO][5768] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.635 [INFO][5768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.635 [INFO][5768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.643 [WARNING][5768] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.643 [INFO][5768] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.645 [INFO][5768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.649789 containerd[1476]: 2025-09-12 17:38:11.647 [INFO][5761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:38:11.650299 containerd[1476]: time="2025-09-12T17:38:11.649835826Z" level=info msg="TearDown network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\" successfully" Sep 12 17:38:11.650299 containerd[1476]: time="2025-09-12T17:38:11.649866386Z" level=info msg="StopPodSandbox for \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\" returns successfully" Sep 12 17:38:11.650584 containerd[1476]: time="2025-09-12T17:38:11.650538902Z" level=info msg="RemovePodSandbox for \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\"" Sep 12 17:38:11.650675 containerd[1476]: time="2025-09-12T17:38:11.650595080Z" level=info msg="Forcibly stopping sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\"" Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.701 [WARNING][5782] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d0f02c4-567d-4f58-a627-4b05e28f6a7c", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"2f613af70e576a2665ebcfda015994cc10115212088db829a75c2ebe90c3e230", Pod:"coredns-674b8bbfcf-qwwfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a71a795c31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.702 [INFO][5782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.702 [INFO][5782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" iface="eth0" netns="" Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.702 [INFO][5782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.702 [INFO][5782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.727 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.727 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.727 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.736 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.736 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" HandleID="k8s-pod-network.59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-coredns--674b8bbfcf--qwwfs-eth0" Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.738 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.743217 containerd[1476]: 2025-09-12 17:38:11.740 [INFO][5782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768" Sep 12 17:38:11.743217 containerd[1476]: time="2025-09-12T17:38:11.742802897Z" level=info msg="TearDown network for sandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\" successfully" Sep 12 17:38:11.761164 containerd[1476]: time="2025-09-12T17:38:11.761117372Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:11.761629 containerd[1476]: time="2025-09-12T17:38:11.761347581Z" level=info msg="RemovePodSandbox \"59192ab299c9897f9622a66d191aae17342b664a422fd4cf19c4267235d74768\" returns successfully" Sep 12 17:38:11.762320 containerd[1476]: time="2025-09-12T17:38:11.762250734Z" level=info msg="StopPodSandbox for \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\"" Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.810 [WARNING][5803] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0", GenerateName:"calico-kube-controllers-7b985f5889-", Namespace:"calico-system", SelfLink:"", UID:"beb3804a-cae2-4b48-8eca-eff5a936c3a3", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b985f5889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada", Pod:"calico-kube-controllers-7b985f5889-5kj7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidab8f63f20d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.810 [INFO][5803] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.810 [INFO][5803] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" iface="eth0" netns="" Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.810 [INFO][5803] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.810 [INFO][5803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.852 [INFO][5811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.852 [INFO][5811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.852 [INFO][5811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.865 [WARNING][5811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.865 [INFO][5811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.868 [INFO][5811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.877106 containerd[1476]: 2025-09-12 17:38:11.873 [INFO][5803] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:38:11.877106 containerd[1476]: time="2025-09-12T17:38:11.877086484Z" level=info msg="TearDown network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\" successfully" Sep 12 17:38:11.878031 containerd[1476]: time="2025-09-12T17:38:11.877123197Z" level=info msg="StopPodSandbox for \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\" returns successfully" Sep 12 17:38:11.878031 containerd[1476]: time="2025-09-12T17:38:11.877926066Z" level=info msg="RemovePodSandbox for \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\"" Sep 12 17:38:11.878031 containerd[1476]: time="2025-09-12T17:38:11.877969761Z" level=info msg="Forcibly stopping sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\"" Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.932 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0", GenerateName:"calico-kube-controllers-7b985f5889-", Namespace:"calico-system", SelfLink:"", UID:"beb3804a-cae2-4b48-8eca-eff5a936c3a3", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b985f5889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"d692827923385466e4b4b1f85b2a8216797355b2baeb27c072f0e6cec2fb0ada", Pod:"calico-kube-controllers-7b985f5889-5kj7c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidab8f63f20d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.932 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.932 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" iface="eth0" netns="" Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.932 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.932 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.958 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.958 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.958 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.967 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.967 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" HandleID="k8s-pod-network.117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--kube--controllers--7b985f5889--5kj7c-eth0" Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.969 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:11.973971 containerd[1476]: 2025-09-12 17:38:11.971 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214" Sep 12 17:38:11.974683 containerd[1476]: time="2025-09-12T17:38:11.974035842Z" level=info msg="TearDown network for sandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\" successfully" Sep 12 17:38:11.979541 containerd[1476]: time="2025-09-12T17:38:11.979487997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:11.979674 containerd[1476]: time="2025-09-12T17:38:11.979601629Z" level=info msg="RemovePodSandbox \"117ef75de6e7aa03ffc683a9756c740f3d0d8be6b73178e109ea0b4c54710214\" returns successfully" Sep 12 17:38:11.980294 containerd[1476]: time="2025-09-12T17:38:11.980266843Z" level=info msg="StopPodSandbox for \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\"" Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.030 [WARNING][5848] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0", GenerateName:"calico-apiserver-5d65df657f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65df657f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c", Pod:"calico-apiserver-5d65df657f-s7j4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ed627ad9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.030 [INFO][5848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.030 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" iface="eth0" netns="" Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.030 [INFO][5848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.030 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.062 [INFO][5855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.063 [INFO][5855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.063 [INFO][5855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.072 [WARNING][5855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.072 [INFO][5855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.074 [INFO][5855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:12.080407 containerd[1476]: 2025-09-12 17:38:12.077 [INFO][5848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:38:12.082905 containerd[1476]: time="2025-09-12T17:38:12.080460605Z" level=info msg="TearDown network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\" successfully" Sep 12 17:38:12.082905 containerd[1476]: time="2025-09-12T17:38:12.080505426Z" level=info msg="StopPodSandbox for \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\" returns successfully" Sep 12 17:38:12.082905 containerd[1476]: time="2025-09-12T17:38:12.081317209Z" level=info msg="RemovePodSandbox for \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\"" Sep 12 17:38:12.082905 containerd[1476]: time="2025-09-12T17:38:12.081359289Z" level=info msg="Forcibly stopping sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\"" Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.130 [WARNING][5869] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0", GenerateName:"calico-apiserver-5d65df657f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 37, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d65df657f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c", Pod:"calico-apiserver-5d65df657f-s7j4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ed627ad9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.130 [INFO][5869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.130 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" iface="eth0" netns="" Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.130 [INFO][5869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.130 [INFO][5869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.158 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.158 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.158 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.168 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.168 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" HandleID="k8s-pod-network.c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0" Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.171 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:12.178643 containerd[1476]: 2025-09-12 17:38:12.175 [INFO][5869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6" Sep 12 17:38:12.180419 containerd[1476]: time="2025-09-12T17:38:12.178706915Z" level=info msg="TearDown network for sandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\" successfully" Sep 12 17:38:12.183990 containerd[1476]: time="2025-09-12T17:38:12.183943785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:38:12.184140 containerd[1476]: time="2025-09-12T17:38:12.184092487Z" level=info msg="RemovePodSandbox \"c71982beab885aceb402826528f1d1dd592682219bb09c443fae6af5640dcac6\" returns successfully" Sep 12 17:38:12.458600 kubelet[2530]: I0912 17:38:12.458386 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:38:12.649912 kubelet[2530]: I0912 17:38:12.648523 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:38:12.693513 containerd[1476]: time="2025-09-12T17:38:12.691945319Z" level=info msg="StopContainer for \"5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237\" with timeout 30 (s)" Sep 12 17:38:12.698016 containerd[1476]: time="2025-09-12T17:38:12.697965027Z" level=info msg="Stop container \"5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237\" with signal terminated" Sep 12 17:38:12.797038 systemd[1]: cri-containerd-5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237.scope: Deactivated successfully. Sep 12 17:38:12.861942 systemd[1]: Created slice kubepods-besteffort-podd64f663b_605f_434d_81b3_263a26bc8131.slice - libcontainer container kubepods-besteffort-podd64f663b_605f_434d_81b3_263a26bc8131.slice. Sep 12 17:38:12.891346 kubelet[2530]: I0912 17:38:12.888975 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d64f663b-605f-434d-81b3-263a26bc8131-calico-apiserver-certs\") pod \"calico-apiserver-5479fdfc7-zkmjs\" (UID: \"d64f663b-605f-434d-81b3-263a26bc8131\") " pod="calico-apiserver/calico-apiserver-5479fdfc7-zkmjs" Sep 12 17:38:12.901334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237-rootfs.mount: Deactivated successfully. 
Sep 12 17:38:12.903762 kubelet[2530]: I0912 17:38:12.903735 2530 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqgml\" (UniqueName: \"kubernetes.io/projected/d64f663b-605f-434d-81b3-263a26bc8131-kube-api-access-qqgml\") pod \"calico-apiserver-5479fdfc7-zkmjs\" (UID: \"d64f663b-605f-434d-81b3-263a26bc8131\") " pod="calico-apiserver/calico-apiserver-5479fdfc7-zkmjs" Sep 12 17:38:12.908263 containerd[1476]: time="2025-09-12T17:38:12.898767568Z" level=info msg="shim disconnected" id=5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237 namespace=k8s.io Sep 12 17:38:12.920282 containerd[1476]: time="2025-09-12T17:38:12.920220350Z" level=warning msg="cleaning up after shim disconnected" id=5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237 namespace=k8s.io Sep 12 17:38:12.920282 containerd[1476]: time="2025-09-12T17:38:12.920265377Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:12.986713 containerd[1476]: time="2025-09-12T17:38:12.986655091Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:38:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:38:12.991775 containerd[1476]: time="2025-09-12T17:38:12.991520425Z" level=info msg="StopContainer for \"5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237\" returns successfully" Sep 12 17:38:12.996958 containerd[1476]: time="2025-09-12T17:38:12.996877033Z" level=info msg="StopPodSandbox for \"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba\"" Sep 12 17:38:12.998566 containerd[1476]: time="2025-09-12T17:38:12.998517985Z" level=info msg="Container to stop \"5a09ebbfb0fc26b6b0682d834c7e52a48cd6048d462e73eb8a4eb12c49183237\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:38:13.010176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba-shm.mount: Deactivated successfully. Sep 12 17:38:13.029318 systemd[1]: cri-containerd-90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba.scope: Deactivated successfully. Sep 12 17:38:13.119702 containerd[1476]: time="2025-09-12T17:38:13.118920261Z" level=info msg="shim disconnected" id=90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba namespace=k8s.io Sep 12 17:38:13.119702 containerd[1476]: time="2025-09-12T17:38:13.119640223Z" level=warning msg="cleaning up after shim disconnected" id=90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba namespace=k8s.io Sep 12 17:38:13.119702 containerd[1476]: time="2025-09-12T17:38:13.119656316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:13.121345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba-rootfs.mount: Deactivated successfully. 
Sep 12 17:38:13.186905 containerd[1476]: time="2025-09-12T17:38:13.186766058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5479fdfc7-zkmjs,Uid:d64f663b-605f-434d-81b3-263a26bc8131,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:38:13.318554 systemd-networkd[1374]: calib19233d6540: Link DOWN Sep 12 17:38:13.318561 systemd-networkd[1374]: calib19233d6540: Lost carrier Sep 12 17:38:13.372512 kubelet[2530]: I0912 17:38:13.372110 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.292 [INFO][5959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.292 [INFO][5959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" iface="eth0" netns="/var/run/netns/cni-27f0416c-6259-b0b3-e02d-eec9bb4fc08a" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.293 [INFO][5959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" iface="eth0" netns="/var/run/netns/cni-27f0416c-6259-b0b3-e02d-eec9bb4fc08a" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.323 [INFO][5959] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" after=30.940827ms iface="eth0" netns="/var/run/netns/cni-27f0416c-6259-b0b3-e02d-eec9bb4fc08a" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.323 [INFO][5959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.323 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.404 [INFO][5977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" HandleID="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.404 [INFO][5977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.404 [INFO][5977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.499 [INFO][5977] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" HandleID="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.499 [INFO][5977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" HandleID="k8s-pod-network.90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--lctzq-eth0" Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.501 [INFO][5977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:38:13.512944 containerd[1476]: 2025-09-12 17:38:13.507 [INFO][5959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba" Sep 12 17:38:13.514349 containerd[1476]: time="2025-09-12T17:38:13.514165743Z" level=info msg="TearDown network for sandbox \"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba\" successfully" Sep 12 17:38:13.514349 containerd[1476]: time="2025-09-12T17:38:13.514202611Z" level=info msg="StopPodSandbox for \"90de235bdbc76e62962ef31254a93c8c7c4f69ec818975e13a74d5267a82f6ba\" returns successfully" Sep 12 17:38:13.590739 systemd-networkd[1374]: calie2327f182d3: Link UP Sep 12 17:38:13.593276 systemd-networkd[1374]: calie2327f182d3: Gained carrier Sep 12 17:38:13.622358 kubelet[2530]: I0912 17:38:13.616632 2530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxf9f\" (UniqueName: \"kubernetes.io/projected/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71-kube-api-access-kxf9f\") pod \"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71\" (UID: \"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71\") " Sep 12 17:38:13.622358 kubelet[2530]: I0912 17:38:13.616679 2530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71-calico-apiserver-certs\") pod \"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71\" (UID: \"ff40a270-b2d1-4d12-8ac7-f6f4cae38a71\") " Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.373 [INFO][5964] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0 calico-apiserver-5479fdfc7- calico-apiserver d64f663b-605f-434d-81b3-263a26bc8131 1241 0 2025-09-12 17:38:12 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5479fdfc7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-9-b554e4f7b0 calico-apiserver-5479fdfc7-zkmjs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie2327f182d3 [] [] }} ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-zkmjs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.375 [INFO][5964]
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-zkmjs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.492 [INFO][5988] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" HandleID="k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.493 [INFO][5988] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" HandleID="k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-9-b554e4f7b0", "pod":"calico-apiserver-5479fdfc7-zkmjs", "timestamp":"2025-09-12 17:38:13.492726441 +0000 UTC"}, Hostname:"ci-4081.3.6-9-b554e4f7b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.494 [INFO][5988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.501 [INFO][5988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.502 [INFO][5988] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-b554e4f7b0' Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.521 [INFO][5988] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.532 [INFO][5988] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.542 [INFO][5988] ipam/ipam.go 511: Trying affinity for 192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.546 [INFO][5988] ipam/ipam.go 158: Attempting to load block cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.552 [INFO][5988] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.96.128/26 host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.552 [INFO][5988] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.96.128/26 handle="k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.560 [INFO][5988] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255 Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.568 [INFO][5988] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.96.128/26 handle="k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.579 [INFO][5988] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.96.138/26] block=192.168.96.128/26 handle="k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.579 [INFO][5988] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.96.138/26] handle="k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" host="ci-4081.3.6-9-b554e4f7b0" Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.579 [INFO][5988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:38:13.624910 containerd[1476]: 2025-09-12 17:38:13.579 [INFO][5988] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.96.138/26] IPv6=[] ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" HandleID="k8s-pod-network.069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" Sep 12 17:38:13.628796 containerd[1476]: 2025-09-12 17:38:13.582 [INFO][5964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-zkmjs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0", GenerateName:"calico-apiserver-5479fdfc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d64f663b-605f-434d-81b3-263a26bc8131", ResourceVersion:"1241", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5479fdfc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"", Pod:"calico-apiserver-5479fdfc7-zkmjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2327f182d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:13.628796 containerd[1476]: 2025-09-12 17:38:13.583 [INFO][5964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.138/32] ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-zkmjs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" Sep 12 17:38:13.628796 containerd[1476]: 2025-09-12 17:38:13.583 [INFO][5964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2327f182d3 ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-zkmjs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" Sep 12 17:38:13.628796 containerd[1476]: 2025-09-12 17:38:13.594 [INFO][5964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-zkmjs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" Sep 12 17:38:13.628796 containerd[1476]: 2025-09-12 17:38:13.595 [INFO][5964]
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-zkmjs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0", GenerateName:"calico-apiserver-5479fdfc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d64f663b-605f-434d-81b3-263a26bc8131", ResourceVersion:"1241", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 38, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5479fdfc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-b554e4f7b0", ContainerID:"069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255", Pod:"calico-apiserver-5479fdfc7-zkmjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2327f182d3", MAC:"3e:96:b4:30:94:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:38:13.628796 containerd[1476]: 2025-09-12 17:38:13.610 [INFO][5964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255" Namespace="calico-apiserver" Pod="calico-apiserver-5479fdfc7-zkmjs" WorkloadEndpoint="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5479fdfc7--zkmjs-eth0" Sep 12 17:38:13.661490 kubelet[2530]: I0912 17:38:13.648931 2530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71-kube-api-access-kxf9f" (OuterVolumeSpecName: "kube-api-access-kxf9f") pod "ff40a270-b2d1-4d12-8ac7-f6f4cae38a71" (UID: "ff40a270-b2d1-4d12-8ac7-f6f4cae38a71"). InnerVolumeSpecName "kube-api-access-kxf9f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:38:13.661490 kubelet[2530]: I0912 17:38:13.648535 2530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "ff40a270-b2d1-4d12-8ac7-f6f4cae38a71" (UID: "ff40a270-b2d1-4d12-8ac7-f6f4cae38a71"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:38:13.680779 containerd[1476]: time="2025-09-12T17:38:13.680621497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:38:13.680779 containerd[1476]: time="2025-09-12T17:38:13.680678024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:38:13.680779 containerd[1476]: time="2025-09-12T17:38:13.680719501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:13.681062 containerd[1476]: time="2025-09-12T17:38:13.680831753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:38:13.717881 kubelet[2530]: I0912 17:38:13.717691 2530 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kxf9f\" (UniqueName: \"kubernetes.io/projected/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71-kube-api-access-kxf9f\") on node \"ci-4081.3.6-9-b554e4f7b0\" DevicePath \"\"" Sep 12 17:38:13.717881 kubelet[2530]: I0912 17:38:13.717732 2530 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71-calico-apiserver-certs\") on node \"ci-4081.3.6-9-b554e4f7b0\" DevicePath \"\"" Sep 12 17:38:13.720748 systemd[1]: Started cri-containerd-069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255.scope - libcontainer container 069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255. Sep 12 17:38:13.787103 containerd[1476]: time="2025-09-12T17:38:13.786984993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5479fdfc7-zkmjs,Uid:d64f663b-605f-434d-81b3-263a26bc8131,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255\"" Sep 12 17:38:13.816822 containerd[1476]: time="2025-09-12T17:38:13.816687522Z" level=info msg="CreateContainer within sandbox \"069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:38:13.826013 containerd[1476]: time="2025-09-12T17:38:13.825876708Z" level=info msg="CreateContainer within sandbox \"069e8ce500f7ee441b1a3f634f1092dfc1f4426a60986d485955309c60981255\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a76459ba7cdfba0a1c241886eb0abb4b420131be380a27503952249b910129fa\"" Sep 12 17:38:13.826779 containerd[1476]: time="2025-09-12T17:38:13.826756869Z" level=info msg="StartContainer for \"a76459ba7cdfba0a1c241886eb0abb4b420131be380a27503952249b910129fa\"" Sep 12 17:38:13.860734 systemd[1]: Started cri-containerd-a76459ba7cdfba0a1c241886eb0abb4b420131be380a27503952249b910129fa.scope - libcontainer container a76459ba7cdfba0a1c241886eb0abb4b420131be380a27503952249b910129fa. Sep 12 17:38:13.898355 systemd[1]: run-netns-cni\x2d27f0416c\x2d6259\x2db0b3\x2de02d\x2deec9bb4fc08a.mount: Deactivated successfully. Sep 12 17:38:13.899431 systemd[1]: var-lib-kubelet-pods-ff40a270\x2db2d1\x2d4d12\x2d8ac7\x2df6f4cae38a71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxf9f.mount: Deactivated successfully. Sep 12 17:38:13.899830 systemd[1]: var-lib-kubelet-pods-ff40a270\x2db2d1\x2d4d12\x2d8ac7\x2df6f4cae38a71-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Sep 12 17:38:13.933674 containerd[1476]: time="2025-09-12T17:38:13.933601588Z" level=info msg="StartContainer for \"a76459ba7cdfba0a1c241886eb0abb4b420131be380a27503952249b910129fa\" returns successfully" Sep 12 17:38:14.435626 systemd[1]: Removed slice kubepods-besteffort-podff40a270_b2d1_4d12_8ac7_f6f4cae38a71.slice - libcontainer container kubepods-besteffort-podff40a270_b2d1_4d12_8ac7_f6f4cae38a71.slice. Sep 12 17:38:14.466030 kubelet[2530]: I0912 17:38:14.465952 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5479fdfc7-zkmjs" podStartSLOduration=2.46374472 podStartE2EDuration="2.46374472s" podCreationTimestamp="2025-09-12 17:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:38:14.396164965 +0000 UTC m=+65.025843182" watchObservedRunningTime="2025-09-12 17:38:14.46374472 +0000 UTC m=+65.093422915" Sep 12 17:38:14.733873 systemd[1]: Started sshd@9-159.223.198.129:22-147.75.109.163:52708.service - OpenSSH per-connection server daemon (147.75.109.163:52708). Sep 12 17:38:14.959226 sshd[6091]: Accepted publickey for core from 147.75.109.163 port 52708 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:38:14.966177 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:38:14.980277 systemd-logind[1447]: New session 10 of user core. Sep 12 17:38:14.985494 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:38:15.451492 systemd-networkd[1374]: calie2327f182d3: Gained IPv6LL Sep 12 17:38:15.559936 kubelet[2530]: I0912 17:38:15.559842 2530 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff40a270-b2d1-4d12-8ac7-f6f4cae38a71" path="/var/lib/kubelet/pods/ff40a270-b2d1-4d12-8ac7-f6f4cae38a71/volumes" Sep 12 17:38:15.818152 containerd[1476]: time="2025-09-12T17:38:15.817980846Z" level=info msg="StopContainer for \"1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12\" with timeout 30 (s)" Sep 12 17:38:15.822035 containerd[1476]: time="2025-09-12T17:38:15.821998041Z" level=info msg="Stop container \"1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12\" with signal terminated" Sep 12 17:38:15.885905 systemd[1]: cri-containerd-1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12.scope: Deactivated successfully. Sep 12 17:38:15.886641 systemd[1]: cri-containerd-1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12.scope: Consumed 1.090s CPU time. Sep 12 17:38:15.959415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12-rootfs.mount: Deactivated successfully. 
Sep 12 17:38:15.963129 containerd[1476]: time="2025-09-12T17:38:15.962941023Z" level=info msg="shim disconnected" id=1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12 namespace=k8s.io Sep 12 17:38:15.965822 containerd[1476]: time="2025-09-12T17:38:15.965581326Z" level=warning msg="cleaning up after shim disconnected" id=1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12 namespace=k8s.io Sep 12 17:38:15.965822 containerd[1476]: time="2025-09-12T17:38:15.965624028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:16.012101 sshd[6091]: pam_unix(sshd:session): session closed for user core Sep 12 17:38:16.014552 containerd[1476]: time="2025-09-12T17:38:16.012166606Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:38:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:38:16.029430 systemd[1]: sshd@9-159.223.198.129:22-147.75.109.163:52708.service: Deactivated successfully. Sep 12 17:38:16.033814 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:38:16.036262 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:38:16.046172 systemd[1]: Started sshd@10-159.223.198.129:22-147.75.109.163:52714.service - OpenSSH per-connection server daemon (147.75.109.163:52714). Sep 12 17:38:16.055046 systemd-logind[1447]: Removed session 10. Sep 12 17:38:16.057942 containerd[1476]: time="2025-09-12T17:38:16.025135224Z" level=info msg="StopContainer for \"1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12\" returns successfully" Sep 12 17:38:16.059728 containerd[1476]: time="2025-09-12T17:38:16.059693168Z" level=info msg="StopPodSandbox for \"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c\"" Sep 12 17:38:16.059840 containerd[1476]: time="2025-09-12T17:38:16.059760156Z" level=info msg="Container to stop \"1a07a6045f67b95c25078ad2a192ff740bfd139581b5a5c41d6e2864793f3a12\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:38:16.073421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c-shm.mount: Deactivated successfully. Sep 12 17:38:16.075608 systemd[1]: cri-containerd-147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c.scope: Deactivated successfully. Sep 12 17:38:16.124799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c-rootfs.mount: Deactivated successfully. Sep 12 17:38:16.130838 containerd[1476]: time="2025-09-12T17:38:16.128951328Z" level=info msg="shim disconnected" id=147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c namespace=k8s.io Sep 12 17:38:16.130838 containerd[1476]: time="2025-09-12T17:38:16.129017566Z" level=warning msg="cleaning up after shim disconnected" id=147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c namespace=k8s.io Sep 12 17:38:16.130838 containerd[1476]: time="2025-09-12T17:38:16.129027173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:38:16.188078 sshd[6168]: Accepted publickey for core from 147.75.109.163 port 52714 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:38:16.190378 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:38:16.200287 systemd-logind[1447]: New session 11 of user core. 
Sep 12 17:38:16.207862 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:38:16.244946 systemd-networkd[1374]: calid4ed627ad9e: Link DOWN
Sep 12 17:38:16.244957 systemd-networkd[1374]: calid4ed627ad9e: Lost carrier
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.241 [INFO][6210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.241 [INFO][6210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" iface="eth0" netns="/var/run/netns/cni-000477c0-6bab-0bd3-c273-7c06769c4705"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.242 [INFO][6210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" iface="eth0" netns="/var/run/netns/cni-000477c0-6bab-0bd3-c273-7c06769c4705"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.254 [INFO][6210] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" after=12.49119ms iface="eth0" netns="/var/run/netns/cni-000477c0-6bab-0bd3-c273-7c06769c4705"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.254 [INFO][6210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.254 [INFO][6210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.331 [INFO][6221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" HandleID="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.331 [INFO][6221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.331 [INFO][6221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.392 [INFO][6221] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" HandleID="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.392 [INFO][6221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" HandleID="k8s-pod-network.147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c" Workload="ci--4081.3.6--9--b554e4f7b0-k8s-calico--apiserver--5d65df657f--s7j4v-eth0"
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.396 [INFO][6221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:38:16.413575 containerd[1476]: 2025-09-12 17:38:16.403 [INFO][6210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c"
Sep 12 17:38:16.413575 containerd[1476]: time="2025-09-12T17:38:16.412621278Z" level=info msg="TearDown network for sandbox \"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c\" successfully"
Sep 12 17:38:16.413575 containerd[1476]: time="2025-09-12T17:38:16.412712947Z" level=info msg="StopPodSandbox for \"147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c\" returns successfully"
Sep 12 17:38:16.424094 systemd[1]: run-netns-cni\x2d000477c0\x2d6bab\x2d0bd3\x2dc273\x2d7c06769c4705.mount: Deactivated successfully.
Sep 12 17:38:16.455031 kubelet[2530]: I0912 17:38:16.454593 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="147fb211538321d866013e7844be8fed05a0233dd7c126ee44316ea731fa821c"
Sep 12 17:38:16.482823 sshd[6168]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:16.499867 systemd[1]: sshd@10-159.223.198.129:22-147.75.109.163:52714.service: Deactivated successfully.
Sep 12 17:38:16.507965 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:38:16.512277 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:38:16.522048 systemd[1]: Started sshd@11-159.223.198.129:22-147.75.109.163:52716.service - OpenSSH per-connection server daemon (147.75.109.163:52716).
Sep 12 17:38:16.527420 systemd-logind[1447]: Removed session 11.
Sep 12 17:38:16.560617 kubelet[2530]: I0912 17:38:16.557213 2530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnkmx\" (UniqueName: \"kubernetes.io/projected/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda-kube-api-access-bnkmx\") pod \"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda\" (UID: \"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda\") "
Sep 12 17:38:16.560617 kubelet[2530]: I0912 17:38:16.557336 2530 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda-calico-apiserver-certs\") pod \"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda\" (UID: \"ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda\") "
Sep 12 17:38:16.579730 systemd[1]: var-lib-kubelet-pods-ef8fc9fb\x2dbcd9\x2d488a\x2db1f1\x2dc6d7bb78dfda-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbnkmx.mount: Deactivated successfully.
Sep 12 17:38:16.583774 kubelet[2530]: I0912 17:38:16.583345 2530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda-kube-api-access-bnkmx" (OuterVolumeSpecName: "kube-api-access-bnkmx") pod "ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda" (UID: "ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda"). InnerVolumeSpecName "kube-api-access-bnkmx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:38:16.589010 kubelet[2530]: I0912 17:38:16.588956 2530 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda" (UID: "ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 17:38:16.602182 sshd[6238]: Accepted publickey for core from 147.75.109.163 port 52716 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:16.604084 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:16.609438 systemd-logind[1447]: New session 12 of user core.
Sep 12 17:38:16.614697 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:38:16.660020 kubelet[2530]: I0912 17:38:16.659943 2530 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda-calico-apiserver-certs\") on node \"ci-4081.3.6-9-b554e4f7b0\" DevicePath \"\""
Sep 12 17:38:16.660020 kubelet[2530]: I0912 17:38:16.659989 2530 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bnkmx\" (UniqueName: \"kubernetes.io/projected/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda-kube-api-access-bnkmx\") on node \"ci-4081.3.6-9-b554e4f7b0\" DevicePath \"\""
Sep 12 17:38:16.766558 sshd[6238]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:16.774067 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:38:16.774884 systemd[1]: sshd@11-159.223.198.129:22-147.75.109.163:52716.service: Deactivated successfully.
Sep 12 17:38:16.779887 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:38:16.784813 systemd-logind[1447]: Removed session 12.
Sep 12 17:38:16.959508 systemd[1]: var-lib-kubelet-pods-ef8fc9fb\x2dbcd9\x2d488a\x2db1f1\x2dc6d7bb78dfda-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Sep 12 17:38:17.468131 systemd[1]: Removed slice kubepods-besteffort-podef8fc9fb_bcd9_488a_b1f1_c6d7bb78dfda.slice - libcontainer container kubepods-besteffort-podef8fc9fb_bcd9_488a_b1f1_c6d7bb78dfda.slice.
Sep 12 17:38:17.468832 systemd[1]: kubepods-besteffort-podef8fc9fb_bcd9_488a_b1f1_c6d7bb78dfda.slice: Consumed 1.123s CPU time.
Sep 12 17:38:17.577778 kubelet[2530]: I0912 17:38:17.577731 2530 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda" path="/var/lib/kubelet/pods/ef8fc9fb-bcd9-488a-b1f1-c6d7bb78dfda/volumes"
Sep 12 17:38:21.528567 kubelet[2530]: E0912 17:38:21.521346 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:38:21.784878 systemd[1]: Started sshd@12-159.223.198.129:22-147.75.109.163:48052.service - OpenSSH per-connection server daemon (147.75.109.163:48052).
Sep 12 17:38:21.876309 sshd[6262]: Accepted publickey for core from 147.75.109.163 port 48052 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:21.878280 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:21.886588 systemd-logind[1447]: New session 13 of user core.
Sep 12 17:38:21.894152 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:38:22.090086 sshd[6262]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:22.098040 systemd[1]: sshd@12-159.223.198.129:22-147.75.109.163:48052.service: Deactivated successfully.
Sep 12 17:38:22.101836 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:38:22.104326 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:38:22.106643 systemd-logind[1447]: Removed session 13.
Sep 12 17:38:27.109990 systemd[1]: Started sshd@13-159.223.198.129:22-147.75.109.163:48058.service - OpenSSH per-connection server daemon (147.75.109.163:48058).
Sep 12 17:38:27.195034 sshd[6284]: Accepted publickey for core from 147.75.109.163 port 48058 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:27.197556 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:27.205852 systemd-logind[1447]: New session 14 of user core.
Sep 12 17:38:27.211689 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:38:27.373352 sshd[6284]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:27.379447 systemd[1]: sshd@13-159.223.198.129:22-147.75.109.163:48058.service: Deactivated successfully.
Sep 12 17:38:27.383489 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:38:27.385402 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:38:27.387409 systemd-logind[1447]: Removed session 14.
Sep 12 17:38:32.400052 systemd[1]: Started sshd@14-159.223.198.129:22-147.75.109.163:55742.service - OpenSSH per-connection server daemon (147.75.109.163:55742).
Sep 12 17:38:32.524655 sshd[6315]: Accepted publickey for core from 147.75.109.163 port 55742 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:32.526992 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:32.535556 systemd-logind[1447]: New session 15 of user core.
Sep 12 17:38:32.541732 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:38:32.891036 sshd[6315]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:32.896448 systemd[1]: sshd@14-159.223.198.129:22-147.75.109.163:55742.service: Deactivated successfully.
Sep 12 17:38:32.899275 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:38:32.900406 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:38:32.901907 systemd-logind[1447]: Removed session 15.
Sep 12 17:38:35.550388 kubelet[2530]: E0912 17:38:35.550238 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:38:37.914942 systemd[1]: Started sshd@15-159.223.198.129:22-147.75.109.163:55752.service - OpenSSH per-connection server daemon (147.75.109.163:55752).
Sep 12 17:38:38.010396 sshd[6330]: Accepted publickey for core from 147.75.109.163 port 55752 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:38.016884 sshd[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:38.037561 systemd-logind[1447]: New session 16 of user core.
Sep 12 17:38:38.042794 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:38:38.209878 sshd[6330]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:38.214952 systemd[1]: sshd@15-159.223.198.129:22-147.75.109.163:55752.service: Deactivated successfully.
Sep 12 17:38:38.217162 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:38:38.217900 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:38:38.219715 systemd-logind[1447]: Removed session 16.
Sep 12 17:38:39.520929 kubelet[2530]: E0912 17:38:39.520569 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:38:41.068902 systemd[1]: run-containerd-runc-k8s.io-24e7e7f7d1daf620ddee4b1979a146b136e7f204a59b6dc4c1a3ffb90272a4ff-runc.oY8oqY.mount: Deactivated successfully.
Sep 12 17:38:43.232952 systemd[1]: Started sshd@16-159.223.198.129:22-147.75.109.163:50336.service - OpenSSH per-connection server daemon (147.75.109.163:50336).
Sep 12 17:38:43.330554 sshd[6383]: Accepted publickey for core from 147.75.109.163 port 50336 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:43.332859 sshd[6383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:43.342011 systemd-logind[1447]: New session 17 of user core.
Sep 12 17:38:43.349804 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:38:43.721484 sshd[6383]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:43.737348 systemd[1]: Started sshd@17-159.223.198.129:22-147.75.109.163:50350.service - OpenSSH per-connection server daemon (147.75.109.163:50350).
Sep 12 17:38:43.738847 systemd[1]: sshd@16-159.223.198.129:22-147.75.109.163:50336.service: Deactivated successfully.
Sep 12 17:38:43.744242 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:38:43.748505 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:38:43.750240 systemd-logind[1447]: Removed session 17.
Sep 12 17:38:43.792618 sshd[6395]: Accepted publickey for core from 147.75.109.163 port 50350 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:43.794701 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:43.802942 systemd-logind[1447]: New session 18 of user core.
Sep 12 17:38:43.805717 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:38:44.138555 sshd[6395]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:44.148026 systemd[1]: sshd@17-159.223.198.129:22-147.75.109.163:50350.service: Deactivated successfully.
Sep 12 17:38:44.150293 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:38:44.152386 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:38:44.159000 systemd[1]: Started sshd@18-159.223.198.129:22-147.75.109.163:50356.service - OpenSSH per-connection server daemon (147.75.109.163:50356).
Sep 12 17:38:44.163855 systemd-logind[1447]: Removed session 18.
Sep 12 17:38:44.235940 sshd[6408]: Accepted publickey for core from 147.75.109.163 port 50356 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:44.238700 sshd[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:44.247315 systemd-logind[1447]: New session 19 of user core.
Sep 12 17:38:44.253813 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:38:45.300831 sshd[6408]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:45.333670 systemd[1]: sshd@18-159.223.198.129:22-147.75.109.163:50356.service: Deactivated successfully.
Sep 12 17:38:45.338343 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:38:45.342665 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:38:45.361561 systemd[1]: Started sshd@19-159.223.198.129:22-147.75.109.163:50364.service - OpenSSH per-connection server daemon (147.75.109.163:50364).
Sep 12 17:38:45.362370 systemd-logind[1447]: Removed session 19.
Sep 12 17:38:45.495500 sshd[6449]: Accepted publickey for core from 147.75.109.163 port 50364 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:45.498406 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:45.507718 systemd-logind[1447]: New session 20 of user core.
Sep 12 17:38:45.511715 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:38:46.321385 sshd[6449]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:46.339175 systemd[1]: sshd@19-159.223.198.129:22-147.75.109.163:50364.service: Deactivated successfully.
Sep 12 17:38:46.346955 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:38:46.349209 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:38:46.354000 systemd-logind[1447]: Removed session 20.
Sep 12 17:38:46.362056 systemd[1]: Started sshd@20-159.223.198.129:22-147.75.109.163:50374.service - OpenSSH per-connection server daemon (147.75.109.163:50374).
Sep 12 17:38:46.415274 sshd[6462]: Accepted publickey for core from 147.75.109.163 port 50374 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:46.417619 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:46.423343 systemd-logind[1447]: New session 21 of user core.
Sep 12 17:38:46.430955 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:38:46.512031 systemd[1]: run-containerd-runc-k8s.io-60b9432c80afe62090a7517ada9c552b63db15cd249111aec5ef87d9b7a76f42-runc.pWSBnj.mount: Deactivated successfully.
Sep 12 17:38:46.546244 kubelet[2530]: E0912 17:38:46.545527 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:38:46.655643 sshd[6462]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:46.661747 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:38:46.662220 systemd[1]: sshd@20-159.223.198.129:22-147.75.109.163:50374.service: Deactivated successfully.
Sep 12 17:38:46.665999 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:38:46.669462 systemd-logind[1447]: Removed session 21.
Sep 12 17:38:47.517844 kubelet[2530]: E0912 17:38:47.517257 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:38:51.676080 systemd[1]: Started sshd@21-159.223.198.129:22-147.75.109.163:58360.service - OpenSSH per-connection server daemon (147.75.109.163:58360).
Sep 12 17:38:51.737231 sshd[6496]: Accepted publickey for core from 147.75.109.163 port 58360 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:51.739244 sshd[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:51.748290 systemd-logind[1447]: New session 22 of user core.
Sep 12 17:38:51.757904 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:38:51.968918 sshd[6496]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:51.977931 systemd[1]: sshd@21-159.223.198.129:22-147.75.109.163:58360.service: Deactivated successfully.
Sep 12 17:38:51.980577 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:38:51.981432 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:38:51.984120 systemd-logind[1447]: Removed session 22.
Sep 12 17:38:56.987665 systemd[1]: Started sshd@22-159.223.198.129:22-147.75.109.163:58362.service - OpenSSH per-connection server daemon (147.75.109.163:58362).
Sep 12 17:38:57.085739 sshd[6511]: Accepted publickey for core from 147.75.109.163 port 58362 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:57.090862 sshd[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:57.098328 systemd-logind[1447]: New session 23 of user core.
Sep 12 17:38:57.101667 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:38:57.534753 sshd[6511]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:57.543099 systemd[1]: sshd@22-159.223.198.129:22-147.75.109.163:58362.service: Deactivated successfully.
Sep 12 17:38:57.548988 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:38:57.555116 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:38:57.559830 systemd-logind[1447]: Removed session 23.
Sep 12 17:39:01.519125 kubelet[2530]: E0912 17:39:01.518857 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:39:02.134518 systemd[1]: run-containerd-runc-k8s.io-60b9432c80afe62090a7517ada9c552b63db15cd249111aec5ef87d9b7a76f42-runc.r1VT47.mount: Deactivated successfully.
Sep 12 17:39:02.546079 systemd[1]: Started sshd@23-159.223.198.129:22-147.75.109.163:59518.service - OpenSSH per-connection server daemon (147.75.109.163:59518).
Sep 12 17:39:02.665816 sshd[6544]: Accepted publickey for core from 147.75.109.163 port 59518 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:02.668673 sshd[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:02.679657 systemd-logind[1447]: New session 24 of user core.
Sep 12 17:39:02.688446 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:39:03.122213 sshd[6544]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:03.129168 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:39:03.131249 systemd[1]: sshd@23-159.223.198.129:22-147.75.109.163:59518.service: Deactivated successfully.
Sep 12 17:39:03.136986 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:39:03.142997 systemd-logind[1447]: Removed session 24.