Sep 13 00:06:01.894149 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:06:01.894173 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:01.894185 kernel: BIOS-provided physical RAM map:
Sep 13 00:06:01.894192 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:06:01.894199 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:06:01.894205 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:06:01.894213 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 13 00:06:01.894220 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 13 00:06:01.894227 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:06:01.894237 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:06:01.894244 kernel: NX (Execute Disable) protection: active
Sep 13 00:06:01.894251 kernel: APIC: Static calls initialized
Sep 13 00:06:01.894262 kernel: SMBIOS 2.8 present.
Sep 13 00:06:01.894269 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 13 00:06:01.894278 kernel: Hypervisor detected: KVM
Sep 13 00:06:01.894289 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:06:01.894299 kernel: kvm-clock: using sched offset of 2841245350 cycles
Sep 13 00:06:01.894308 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:06:01.894316 kernel: tsc: Detected 2494.140 MHz processor
Sep 13 00:06:01.894324 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:06:01.894332 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:06:01.894340 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 13 00:06:01.894348 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:06:01.894356 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:06:01.894367 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:06:01.894375 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 13 00:06:01.894383 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:01.894391 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:01.894398 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:01.894406 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 13 00:06:01.894414 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:01.894422 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:01.894430 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:01.894440 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:06:01.894448 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 13 00:06:01.894456 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 13 00:06:01.894464 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 13 00:06:01.894471 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 13 00:06:01.894479 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 13 00:06:01.894487 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 13 00:06:01.894501 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 13 00:06:01.894509 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:06:01.894517 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:06:01.894526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:06:01.894534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 00:06:01.894545 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 13 00:06:01.894561 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 13 00:06:01.894576 kernel: Zone ranges:
Sep 13 00:06:01.894594 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:06:01.894615 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 13 00:06:01.894631 kernel: Normal empty
Sep 13 00:06:01.894639 kernel: Movable zone start for each node
Sep 13 00:06:01.894648 kernel: Early memory node ranges
Sep 13 00:06:01.894656 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:06:01.894664 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 13 00:06:01.894673 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 13 00:06:01.895421 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:06:01.895432 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:06:01.895443 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 13 00:06:01.895453 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:06:01.895461 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:06:01.895470 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:06:01.895478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:06:01.895487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:06:01.895495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:06:01.895507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:06:01.895515 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:06:01.895524 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:06:01.895532 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:06:01.895541 kernel: TSC deadline timer available
Sep 13 00:06:01.895550 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:06:01.895558 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:06:01.895566 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 13 00:06:01.895577 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:06:01.895586 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:06:01.895598 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:06:01.895607 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:06:01.895615 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:06:01.895624 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:06:01.895632 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:06:01.895642 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:01.895651 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:06:01.895660 kernel: random: crng init done
Sep 13 00:06:01.895671 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:06:01.895690 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:06:01.895698 kernel: Fallback order for Node 0: 0
Sep 13 00:06:01.895707 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 13 00:06:01.895715 kernel: Policy zone: DMA32
Sep 13 00:06:01.895724 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:06:01.895733 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 13 00:06:01.895741 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:06:01.895759 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:06:01.895768 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:06:01.895776 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:06:01.895785 kernel: Dynamic Preempt: voluntary
Sep 13 00:06:01.895793 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:06:01.895802 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:06:01.895811 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:06:01.895819 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:06:01.895828 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:06:01.895836 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:06:01.895848 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:06:01.895856 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:06:01.895865 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:06:01.895873 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:06:01.895884 kernel: Console: colour VGA+ 80x25
Sep 13 00:06:01.895893 kernel: printk: console [tty0] enabled
Sep 13 00:06:01.895902 kernel: printk: console [ttyS0] enabled
Sep 13 00:06:01.895910 kernel: ACPI: Core revision 20230628
Sep 13 00:06:01.895919 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:06:01.895930 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:06:01.895938 kernel: x2apic enabled
Sep 13 00:06:01.895947 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:06:01.895955 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:06:01.895964 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 13 00:06:01.895972 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 13 00:06:01.895981 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:06:01.895989 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:06:01.896010 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:06:01.896019 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:06:01.896028 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:06:01.896040 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 00:06:01.896049 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:06:01.896058 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:06:01.896066 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:06:01.896075 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:06:01.896084 kernel: active return thunk: its_return_thunk
Sep 13 00:06:01.896098 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:06:01.896107 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:06:01.896116 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:06:01.896125 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:06:01.896134 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:06:01.896143 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:06:01.896152 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:06:01.896161 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:06:01.896172 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:06:01.896182 kernel: landlock: Up and running.
Sep 13 00:06:01.896191 kernel: SELinux: Initializing.
Sep 13 00:06:01.896200 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:06:01.896212 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:06:01.896232 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 13 00:06:01.896252 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:01.896272 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:01.896287 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:06:01.896299 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 13 00:06:01.896308 kernel: signal: max sigframe size: 1776
Sep 13 00:06:01.896317 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:06:01.896326 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:06:01.896335 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:06:01.896344 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:06:01.896353 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:06:01.896362 kernel: .... node #0, CPUs: #1
Sep 13 00:06:01.896373 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:06:01.896386 kernel: smpboot: Max logical packages: 1
Sep 13 00:06:01.896395 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 13 00:06:01.896404 kernel: devtmpfs: initialized
Sep 13 00:06:01.896413 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:06:01.896421 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:06:01.896431 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:06:01.896439 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:06:01.896448 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:06:01.896460 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:06:01.896478 kernel: audit: type=2000 audit(1757721960.971:1): state=initialized audit_enabled=0 res=1
Sep 13 00:06:01.896488 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:06:01.896497 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:06:01.896505 kernel: cpuidle: using governor menu
Sep 13 00:06:01.896515 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:06:01.896524 kernel: dca service started, version 1.12.1
Sep 13 00:06:01.896533 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:06:01.896542 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:06:01.896550 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:06:01.896563 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:06:01.896573 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:06:01.896582 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:06:01.896591 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:06:01.896600 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:06:01.896609 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:06:01.896618 kernel: ACPI: Interpreter enabled
Sep 13 00:06:01.896626 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:06:01.896635 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:06:01.896647 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:06:01.896656 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:06:01.896665 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:06:01.896674 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:06:01.898006 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:06:01.898115 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:06:01.898212 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:06:01.898229 kernel: acpiphp: Slot [3] registered
Sep 13 00:06:01.898239 kernel: acpiphp: Slot [4] registered
Sep 13 00:06:01.898248 kernel: acpiphp: Slot [5] registered
Sep 13 00:06:01.898257 kernel: acpiphp: Slot [6] registered
Sep 13 00:06:01.898266 kernel: acpiphp: Slot [7] registered
Sep 13 00:06:01.898275 kernel: acpiphp: Slot [8] registered
Sep 13 00:06:01.898284 kernel: acpiphp: Slot [9] registered
Sep 13 00:06:01.898293 kernel: acpiphp: Slot [10] registered
Sep 13 00:06:01.898302 kernel: acpiphp: Slot [11] registered
Sep 13 00:06:01.898311 kernel: acpiphp: Slot [12] registered
Sep 13 00:06:01.898324 kernel: acpiphp: Slot [13] registered
Sep 13 00:06:01.898333 kernel: acpiphp: Slot [14] registered
Sep 13 00:06:01.898342 kernel: acpiphp: Slot [15] registered
Sep 13 00:06:01.898351 kernel: acpiphp: Slot [16] registered
Sep 13 00:06:01.898359 kernel: acpiphp: Slot [17] registered
Sep 13 00:06:01.898368 kernel: acpiphp: Slot [18] registered
Sep 13 00:06:01.898377 kernel: acpiphp: Slot [19] registered
Sep 13 00:06:01.898386 kernel: acpiphp: Slot [20] registered
Sep 13 00:06:01.898395 kernel: acpiphp: Slot [21] registered
Sep 13 00:06:01.898407 kernel: acpiphp: Slot [22] registered
Sep 13 00:06:01.898416 kernel: acpiphp: Slot [23] registered
Sep 13 00:06:01.898425 kernel: acpiphp: Slot [24] registered
Sep 13 00:06:01.898433 kernel: acpiphp: Slot [25] registered
Sep 13 00:06:01.898442 kernel: acpiphp: Slot [26] registered
Sep 13 00:06:01.898451 kernel: acpiphp: Slot [27] registered
Sep 13 00:06:01.898460 kernel: acpiphp: Slot [28] registered
Sep 13 00:06:01.898469 kernel: acpiphp: Slot [29] registered
Sep 13 00:06:01.898478 kernel: acpiphp: Slot [30] registered
Sep 13 00:06:01.898487 kernel: acpiphp: Slot [31] registered
Sep 13 00:06:01.898499 kernel: PCI host bridge to bus 0000:00
Sep 13 00:06:01.898605 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:06:01.898729 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:06:01.898819 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:06:01.898907 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:06:01.899040 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 13 00:06:01.899131 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:06:01.899302 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:06:01.899483 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:06:01.899595 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 13 00:06:01.902840 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 13 00:06:01.902996 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 13 00:06:01.903100 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 13 00:06:01.903232 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 13 00:06:01.903330 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 13 00:06:01.903443 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 13 00:06:01.903542 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 13 00:06:01.903650 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:06:01.903765 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 13 00:06:01.903868 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 13 00:06:01.903975 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:06:01.904072 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 13 00:06:01.904168 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 13 00:06:01.904267 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 13 00:06:01.904363 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 13 00:06:01.904464 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:06:01.904582 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:06:01.906873 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 13 00:06:01.907024 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 13 00:06:01.907126 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 13 00:06:01.907269 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:06:01.907369 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 13 00:06:01.907466 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 13 00:06:01.907571 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 13 00:06:01.907691 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 13 00:06:01.907792 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 13 00:06:01.907887 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 13 00:06:01.907983 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 13 00:06:01.908108 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:06:01.908207 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:06:01.908311 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 13 00:06:01.908442 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 13 00:06:01.908559 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:06:01.910166 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 13 00:06:01.910315 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 13 00:06:01.910416 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 13 00:06:01.910532 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 13 00:06:01.910640 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 13 00:06:01.910751 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 13 00:06:01.910763 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:06:01.910773 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:06:01.910783 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:06:01.910792 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:06:01.910801 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:06:01.910814 kernel: iommu: Default domain type: Translated
Sep 13 00:06:01.910824 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:06:01.910833 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:06:01.910842 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:06:01.910851 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:06:01.910860 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 13 00:06:01.910962 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 13 00:06:01.911061 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 13 00:06:01.911174 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:06:01.911191 kernel: vgaarb: loaded
Sep 13 00:06:01.911203 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:06:01.911212 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:06:01.911222 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:06:01.911231 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:06:01.911240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:06:01.911249 kernel: pnp: PnP ACPI init
Sep 13 00:06:01.911258 kernel: pnp: PnP ACPI: found 4 devices
Sep 13 00:06:01.911271 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:06:01.911281 kernel: NET: Registered PF_INET protocol family
Sep 13 00:06:01.911290 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:06:01.911299 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:06:01.911308 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:06:01.911317 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:06:01.911327 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:06:01.911336 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:06:01.911345 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:06:01.911357 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:06:01.911366 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:06:01.911375 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:06:01.911477 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:06:01.911569 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:06:01.911657 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:06:01.912203 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:06:01.912297 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 13 00:06:01.912412 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 13 00:06:01.912515 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:06:01.912529 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:06:01.912629 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 28845 usecs
Sep 13 00:06:01.912642 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:06:01.912652 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:06:01.912661 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 13 00:06:01.912670 kernel: Initialise system trusted keyrings
Sep 13 00:06:01.913611 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:06:01.913631 kernel: Key type asymmetric registered
Sep 13 00:06:01.913640 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:06:01.913649 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:06:01.913659 kernel: io scheduler mq-deadline registered
Sep 13 00:06:01.913668 kernel: io scheduler kyber registered
Sep 13 00:06:01.913687 kernel: io scheduler bfq registered
Sep 13 00:06:01.913697 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:06:01.913707 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 13 00:06:01.913716 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:06:01.913728 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:06:01.913737 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:06:01.913746 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:06:01.913756 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:06:01.913765 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:06:01.913774 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:06:01.913920 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 13 00:06:01.913934 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 13 00:06:01.914045 kernel: rtc_cmos 00:03: registered as rtc0
Sep 13 00:06:01.914175 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:06:01 UTC (1757721961)
Sep 13 00:06:01.914266 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 13 00:06:01.914278 kernel: intel_pstate: CPU model not supported
Sep 13 00:06:01.914287 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:06:01.914296 kernel: Segment Routing with IPv6
Sep 13 00:06:01.914305 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:06:01.914315 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:06:01.914329 kernel: Key type dns_resolver registered
Sep 13 00:06:01.914337 kernel: IPI shorthand broadcast: enabled
Sep 13 00:06:01.914346 kernel: sched_clock: Marking stable (785004982, 99251457)->(964170017, -79913578)
Sep 13 00:06:01.914356 kernel: registered taskstats version 1
Sep 13 00:06:01.914365 kernel: Loading compiled-in X.509 certificates
Sep 13 00:06:01.914374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:06:01.914383 kernel: Key type .fscrypt registered
Sep 13 00:06:01.914391 kernel: Key type fscrypt-provisioning registered
Sep 13 00:06:01.914401 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:06:01.914412 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:06:01.914421 kernel: ima: No architecture policies found
Sep 13 00:06:01.914430 kernel: clk: Disabling unused clocks
Sep 13 00:06:01.914440 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:06:01.914449 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:06:01.914477 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:06:01.914489 kernel: Run /init as init process
Sep 13 00:06:01.914499 kernel: with arguments:
Sep 13 00:06:01.914508 kernel: /init
Sep 13 00:06:01.914520 kernel: with environment:
Sep 13 00:06:01.914530 kernel: HOME=/
Sep 13 00:06:01.914539 kernel: TERM=linux
Sep 13 00:06:01.914549 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:06:01.914561 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:06:01.914573 systemd[1]: Detected virtualization kvm.
Sep 13 00:06:01.914583 systemd[1]: Detected architecture x86-64.
Sep 13 00:06:01.914595 systemd[1]: Running in initrd.
Sep 13 00:06:01.914608 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:06:01.914618 systemd[1]: Hostname set to .
Sep 13 00:06:01.914628 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:06:01.914637 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:06:01.914647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:06:01.914657 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:06:01.914667 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:06:01.914774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:06:01.914791 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:06:01.914801 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:06:01.914813 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:06:01.914823 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:06:01.914833 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:06:01.914843 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:06:01.914853 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:06:01.914866 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:06:01.914877 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:06:01.914890 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:06:01.914900 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:06:01.914910 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:06:01.914922 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:06:01.914933 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:06:01.914943 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:06:01.914952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:06:01.914962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:06:01.914972 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:06:01.914982 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:06:01.914992 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:06:01.915002 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:06:01.915015 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:06:01.915025 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:06:01.915035 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:06:01.915045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:01.915055 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:06:01.915065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:06:01.915102 systemd-journald[184]: Collecting audit messages is disabled.
Sep 13 00:06:01.915129 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:06:01.915140 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:06:01.915154 systemd-journald[184]: Journal started
Sep 13 00:06:01.915213 systemd-journald[184]: Runtime Journal (/run/log/journal/e294abfd2c4d49e78026f5874826d7cd) is 4.9M, max 39.3M, 34.4M free.
Sep 13 00:06:01.896558 systemd-modules-load[185]: Inserted module 'overlay'
Sep 13 00:06:01.924159 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:06:01.940892 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:06:01.953729 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:06:01.953769 kernel: Bridge firewalling registered
Sep 13 00:06:01.945107 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 13 00:06:01.959304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:06:01.960862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:01.962138 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:06:01.971903 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:01.973858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:06:01.976915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:06:01.977554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:06:01.994138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:06:01.996993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:06:02.003522 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:06:02.005110 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:02.008898 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:06:02.032427 systemd-resolved[217]: Positive Trust Anchors:
Sep 13 00:06:02.032445 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:06:02.032483 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:06:02.036120 systemd-resolved[217]: Defaulting to hostname 'linux'.
Sep 13 00:06:02.037669 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:06:02.038497 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:06:02.040843 dracut-cmdline[221]: dracut-dracut-053
Sep 13 00:06:02.044018 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:06:02.126737 kernel: SCSI subsystem initialized
Sep 13 00:06:02.135717 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:06:02.146719 kernel: iscsi: registered transport (tcp)
Sep 13 00:06:02.168716 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:06:02.168794 kernel: QLogic iSCSI HBA Driver
Sep 13 00:06:02.231546 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:06:02.236885 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:06:02.274172 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:06:02.274245 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:06:02.275403 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:06:02.321736 kernel: raid6: avx2x4 gen() 17777 MB/s
Sep 13 00:06:02.336742 kernel: raid6: avx2x2 gen() 16334 MB/s
Sep 13 00:06:02.354055 kernel: raid6: avx2x1 gen() 13450 MB/s
Sep 13 00:06:02.354144 kernel: raid6: using algorithm avx2x4 gen() 17777 MB/s
Sep 13 00:06:02.371865 kernel: raid6: .... xor() 6980 MB/s, rmw enabled
Sep 13 00:06:02.371964 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:06:02.393715 kernel: xor: automatically using best checksumming function avx
Sep 13 00:06:02.554718 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:06:02.566921 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:06:02.572923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:06:02.597803 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Sep 13 00:06:02.602903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:06:02.609857 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:06:02.626564 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Sep 13 00:06:02.661789 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:06:02.672927 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:06:02.740214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:06:02.750018 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:06:02.776657 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:06:02.780328 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:06:02.780965 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:06:02.781320 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:06:02.787276 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:06:02.812233 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:06:02.827884 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 13 00:06:02.833693 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:06:02.836721 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 13 00:06:02.843386 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:06:02.859801 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:06:02.859867 kernel: GPT:9289727 != 125829119
Sep 13 00:06:02.859880 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:06:02.859892 kernel: GPT:9289727 != 125829119
Sep 13 00:06:02.859904 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:06:02.859925 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:02.869720 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 13 00:06:02.874287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:06:02.874430 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:02.882100 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Sep 13 00:06:02.876243 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:02.876668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:06:02.876824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:02.882372 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:02.890006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:06:02.903980 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:06:02.904035 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:06:02.905094 kernel: ACPI: bus type USB registered
Sep 13 00:06:02.909774 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:06:02.909825 kernel: usbcore: registered new interface driver hub
Sep 13 00:06:02.910745 kernel: usbcore: registered new device driver usb
Sep 13 00:06:02.932708 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (459)
Sep 13 00:06:02.961705 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 13 00:06:02.967116 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 13 00:06:02.967383 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 13 00:06:02.967574 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 13 00:06:02.962624 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:06:03.005091 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:06:03.005417 kernel: hub 1-0:1.0: 2 ports detected
Sep 13 00:06:03.005613 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (455)
Sep 13 00:06:03.005636 kernel: libata version 3.00 loaded.
Sep 13 00:06:03.005658 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 13 00:06:03.005881 kernel: scsi host1: ata_piix
Sep 13 00:06:03.006061 kernel: scsi host2: ata_piix
Sep 13 00:06:03.006224 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 13 00:06:03.006244 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 13 00:06:03.005909 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:06:03.006518 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:06:03.020870 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:06:03.028597 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:06:03.035727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:06:03.046899 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:06:03.050727 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:06:03.052870 disk-uuid[533]: Primary Header is updated.
Sep 13 00:06:03.052870 disk-uuid[533]: Secondary Entries is updated.
Sep 13 00:06:03.052870 disk-uuid[533]: Secondary Header is updated.
Sep 13 00:06:03.059706 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:03.072707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:03.079014 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:06:04.066759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:06:04.067133 disk-uuid[534]: The operation has completed successfully.
Sep 13 00:06:04.105314 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:06:04.105425 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:06:04.121921 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:06:04.126228 sh[562]: Success
Sep 13 00:06:04.139822 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:06:04.207459 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:06:04.208409 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:06:04.214919 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:06:04.240861 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:06:04.240930 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:04.241884 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:06:04.242800 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:06:04.244106 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:06:04.253802 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:06:04.254991 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:06:04.260925 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:06:04.263637 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:06:04.278700 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:04.278769 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:04.278783 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:06:04.287068 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:06:04.300812 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:04.300316 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:06:04.306686 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:06:04.315255 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:06:04.408082 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:06:04.419222 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:06:04.448192 systemd-networkd[745]: lo: Link UP
Sep 13 00:06:04.448204 systemd-networkd[745]: lo: Gained carrier
Sep 13 00:06:04.452231 systemd-networkd[745]: Enumeration completed
Sep 13 00:06:04.452674 ignition[657]: Ignition 2.19.0
Sep 13 00:06:04.452357 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:06:04.452713 ignition[657]: Stage: fetch-offline
Sep 13 00:06:04.453371 systemd[1]: Reached target network.target - Network.
Sep 13 00:06:04.452787 ignition[657]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:04.454102 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 13 00:06:04.452803 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:06:04.454107 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 13 00:06:04.452949 ignition[657]: parsed url from cmdline: ""
Sep 13 00:06:04.455653 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:06:04.452953 ignition[657]: no config URL provided
Sep 13 00:06:04.455657 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:06:04.452959 ignition[657]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:06:04.456261 systemd-networkd[745]: eth0: Link UP
Sep 13 00:06:04.452969 ignition[657]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:06:04.456265 systemd-networkd[745]: eth0: Gained carrier
Sep 13 00:06:04.452975 ignition[657]: failed to fetch config: resource requires networking
Sep 13 00:06:04.456274 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 13 00:06:04.453211 ignition[657]: Ignition finished successfully
Sep 13 00:06:04.457664 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:06:04.460721 systemd-networkd[745]: eth1: Link UP
Sep 13 00:06:04.460726 systemd-networkd[745]: eth1: Gained carrier
Sep 13 00:06:04.460739 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:06:04.462847 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:06:04.473759 systemd-networkd[745]: eth0: DHCPv4 address 143.198.49.51/20, gateway 143.198.48.1 acquired from 169.254.169.253
Sep 13 00:06:04.481816 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253
Sep 13 00:06:04.489310 ignition[752]: Ignition 2.19.0
Sep 13 00:06:04.489323 ignition[752]: Stage: fetch
Sep 13 00:06:04.489520 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:04.489531 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:06:04.489647 ignition[752]: parsed url from cmdline: ""
Sep 13 00:06:04.489655 ignition[752]: no config URL provided
Sep 13 00:06:04.489662 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:06:04.489720 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:06:04.489746 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 13 00:06:04.503812 ignition[752]: GET result: OK
Sep 13 00:06:04.504131 ignition[752]: parsing config with SHA512: f9b7cfbcb2021ccfad20e5d1e81c2356cad9eb6591103abaa53b8508f9f0b0d9c83557e131ea272c2fb59e3c6a993e8c5df405ab097f1c177ffd729395aac23b
Sep 13 00:06:04.509195 unknown[752]: fetched base config from "system"
Sep 13 00:06:04.510068 ignition[752]: fetch: fetch complete
Sep 13 00:06:04.509223 unknown[752]: fetched base config from "system"
Sep 13 00:06:04.510078 ignition[752]: fetch: fetch passed
Sep 13 00:06:04.509230 unknown[752]: fetched user config from "digitalocean"
Sep 13 00:06:04.510132 ignition[752]: Ignition finished successfully
Sep 13 00:06:04.512119 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:06:04.515874 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:06:04.536716 ignition[760]: Ignition 2.19.0
Sep 13 00:06:04.538213 ignition[760]: Stage: kargs
Sep 13 00:06:04.538496 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:04.538510 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:06:04.541753 ignition[760]: kargs: kargs passed
Sep 13 00:06:04.541842 ignition[760]: Ignition finished successfully
Sep 13 00:06:04.543259 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:06:04.548922 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:06:04.567726 ignition[766]: Ignition 2.19.0
Sep 13 00:06:04.567737 ignition[766]: Stage: disks
Sep 13 00:06:04.567966 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:04.567977 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:06:04.570168 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:06:04.568879 ignition[766]: disks: disks passed
Sep 13 00:06:04.568936 ignition[766]: Ignition finished successfully
Sep 13 00:06:04.574125 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:06:04.574735 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:06:04.575360 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:06:04.576114 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:06:04.576771 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:06:04.589957 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:06:04.604269 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:06:04.607779 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:06:04.612855 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:06:04.711721 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:06:04.711645 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:06:04.712542 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:06:04.722834 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:06:04.725374 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:06:04.726826 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep 13 00:06:04.736792 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (783)
Sep 13 00:06:04.736010 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:06:04.738552 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:06:04.738603 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:06:04.749583 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:04.749611 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:04.749624 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:06:04.747792 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:06:04.759725 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:06:04.760973 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:06:04.763449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:06:04.822866 coreos-metadata[785]: Sep 13 00:06:04.822 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:06:04.824496 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:06:04.828862 coreos-metadata[786]: Sep 13 00:06:04.828 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:06:04.832801 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:06:04.835698 coreos-metadata[785]: Sep 13 00:06:04.835 INFO Fetch successful
Sep 13 00:06:04.837022 coreos-metadata[786]: Sep 13 00:06:04.835 INFO Fetch successful
Sep 13 00:06:04.842592 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:06:04.843519 coreos-metadata[786]: Sep 13 00:06:04.843 INFO wrote hostname ci-4081.3.5-n-3ba90871da to /sysroot/etc/hostname
Sep 13 00:06:04.844491 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:06:04.845491 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 13 00:06:04.845603 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Sep 13 00:06:04.851317 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:06:04.942628 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:06:04.946793 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:06:04.948851 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:06:04.961699 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:04.985571 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:06:04.991738 ignition[904]: INFO : Ignition 2.19.0
Sep 13 00:06:04.991738 ignition[904]: INFO : Stage: mount
Sep 13 00:06:04.992670 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:06:04.992670 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:06:04.993592 ignition[904]: INFO : mount: mount passed
Sep 13 00:06:04.993592 ignition[904]: INFO : Ignition finished successfully
Sep 13 00:06:04.995206 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:06:04.999880 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:06:05.240917 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:06:05.252024 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:06:05.260705 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (917)
Sep 13 00:06:05.263111 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:06:05.263181 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:06:05.263196 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:06:05.266703 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:06:05.268785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:06:05.293030 ignition[934]: INFO : Ignition 2.19.0 Sep 13 00:06:05.293719 ignition[934]: INFO : Stage: files Sep 13 00:06:05.294236 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:05.295751 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:06:05.295751 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:06:05.297629 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:06:05.298159 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:06:05.304560 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:06:05.305318 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:06:05.306167 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:06:05.306050 unknown[934]: wrote ssh authorized keys file for user: core Sep 13 00:06:05.307567 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:06:05.308253 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 13 00:06:05.340367 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:06:05.501465 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 00:06:05.502200 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:06:05.502200 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:06:05.502200 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:06:05.502200 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:06:05.502200 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:06:05.505173 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 13 00:06:05.792068 systemd-networkd[745]: eth0: Gained IPv6LL Sep 13 00:06:05.929573 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:06:06.495997 systemd-networkd[745]: eth1: Gained IPv6LL Sep 13 00:06:06.816291 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 00:06:06.816291 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:06:06.818018 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:06:06.818018 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:06:06.818018 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:06:06.818018 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:06:06.818018 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:06:06.818018 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:06:06.818018 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:06:06.818018 ignition[934]: INFO : files: files passed Sep 13 00:06:06.818018 ignition[934]: INFO : Ignition finished successfully Sep 13 00:06:06.819068 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:06:06.826886 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:06:06.829921 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:06:06.831237 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:06:06.831333 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:06:06.844928 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:06.844928 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:06.847216 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:06:06.848749 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:06:06.849905 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:06:06.854893 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:06:06.884484 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Sep 13 00:06:06.884604 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:06:06.885548 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:06:06.886009 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:06:06.886738 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:06:06.892845 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:06:06.907622 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:06:06.914903 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:06:06.924134 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:06:06.925190 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:06:06.926147 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:06:06.926513 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:06:06.926642 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:06:06.927254 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:06:06.927625 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:06:06.929827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:06:06.930215 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:06:06.930618 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:06:06.931040 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:06:06.931536 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:06:06.932045 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:06:06.932790 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:06:06.933600 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:06:06.934356 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:06:06.934497 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:06:06.935463 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:06:06.936045 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:06:06.936850 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:06:06.937004 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:06:06.937579 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:06:06.937759 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:06:06.938541 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:06:06.938662 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:06:06.939586 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:06:06.939741 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:06:06.940359 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:06:06.940509 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
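Once the system is up, the artifacts produced by the files stage are easy to verify; the paths and unit name below are taken from the log:

    ls -l /etc/extensions/kubernetes.raw          # symlink into /opt/extensions/kubernetes/
    systemctl is-enabled prepare-helm.service     # preset to enabled by Ignition
    cat /etc/.ignition-result.json                # result file written at the end of the run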
Sep 13 00:06:06.948418 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:06:06.948822 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:06:06.949026 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:06:06.950932 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:06:06.951460 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:06:06.952827 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:06:06.955414 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:06:06.955581 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:06:06.964573 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:06:06.964673 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:06:06.971344 ignition[986]: INFO : Ignition 2.19.0 Sep 13 00:06:06.971344 ignition[986]: INFO : Stage: umount Sep 13 00:06:06.971344 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:06:06.971344 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:06:06.976266 ignition[986]: INFO : umount: umount passed Sep 13 00:06:06.976266 ignition[986]: INFO : Ignition finished successfully Sep 13 00:06:06.979065 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:06:06.979790 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:06:06.980780 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:06:06.980899 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:06:06.988174 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:06:06.988828 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:06:06.989307 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:06:06.989359 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:06:06.990062 systemd[1]: Stopped target network.target - Network. Sep 13 00:06:06.990845 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:06:06.990916 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:06:06.991727 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:06:06.992371 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:06:06.995778 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:06:06.996294 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:06:06.997173 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:06:06.997981 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:06:06.998035 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:06:06.998670 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:06:06.998724 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:06:06.999429 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:06:06.999485 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:06:07.000258 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:06:07.000318 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
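The umount stage above completes the Ignition run for this boot; it can be replayed later with journalctl, filtering either by the syslog identifier or by unit:

    journalctl -b -t ignition
    journalctl -b -u ignition-files.service -u ignition-mount.service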
Sep 13 00:06:07.001189 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:06:07.001957 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:06:07.004027 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:06:07.004734 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:06:07.004916 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:06:07.006151 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:06:07.006303 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:06:07.006388 systemd-networkd[745]: eth0: DHCPv6 lease lost Sep 13 00:06:07.006634 systemd-networkd[745]: eth1: DHCPv6 lease lost Sep 13 00:06:07.009479 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:06:07.009618 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:06:07.011084 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:06:07.011209 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:06:07.019870 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:06:07.020982 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:06:07.021484 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:06:07.022015 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:06:07.024846 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:06:07.025236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:06:07.031137 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:06:07.031790 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:06:07.032334 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:06:07.032404 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:06:07.032942 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:06:07.033004 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:06:07.040044 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:06:07.040790 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:06:07.041481 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:06:07.042743 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:06:07.043842 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:06:07.043915 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:06:07.044339 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:06:07.044376 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:06:07.045141 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:06:07.045210 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:06:07.046458 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:06:07.046509 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:06:07.047441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 13 00:06:07.047485 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:06:07.053883 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:06:07.054500 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:06:07.054584 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:06:07.055693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:06:07.055773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:07.066831 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:06:07.067868 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:06:07.069174 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:06:07.078989 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:06:07.087246 systemd[1]: Switching root. Sep 13 00:06:07.129774 systemd-journald[184]: Journal stopped Sep 13 00:06:08.084741 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 13 00:06:08.084834 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:06:08.084854 kernel: SELinux: policy capability open_perms=1 Sep 13 00:06:08.084866 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:06:08.084879 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:06:08.084890 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:06:08.084903 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:06:08.084915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:06:08.084930 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:06:08.084943 systemd[1]: Successfully loaded SELinux policy in 35.335ms. Sep 13 00:06:08.084964 kernel: audit: type=1403 audit(1757721967.270:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:06:08.084977 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.004ms. Sep 13 00:06:08.084991 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:06:08.085005 systemd[1]: Detected virtualization kvm. Sep 13 00:06:08.085018 systemd[1]: Detected architecture x86-64. Sep 13 00:06:08.085031 systemd[1]: Detected first boot. Sep 13 00:06:08.085046 systemd[1]: Hostname set to . Sep 13 00:06:08.085060 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:06:08.085072 zram_generator::config[1029]: No configuration found. Sep 13 00:06:08.085086 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:06:08.085099 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:06:08.085111 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:06:08.085123 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:06:08.085137 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:06:08.085153 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
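After the switch to the real root, the facts reported here (KVM virtualization, first boot, machine ID initialized from the VM UUID, SELinux policy loaded) can be spot-checked with standard tools. A small sketch, assuming the SELinux userspace utilities are present on the image:

    systemd-detect-virt        # expected to print "kvm"
    cat /etc/machine-id        # machine ID derived from the VM UUID on first boot
    getenforce                 # SELinux mode, if libselinux utilities are installed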
Sep 13 00:06:08.085166 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:06:08.085201 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:06:08.085214 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:06:08.085227 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:06:08.087445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:06:08.087478 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:06:08.087497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:06:08.087516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:06:08.087544 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:06:08.087564 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:06:08.087579 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:06:08.087592 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:06:08.087605 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:06:08.087618 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:06:08.087644 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:06:08.087661 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:06:08.087673 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:06:08.089296 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:06:08.089318 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:06:08.089339 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:06:08.089351 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:06:08.089364 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:06:08.089377 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:06:08.089394 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:06:08.089407 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:06:08.089420 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:06:08.089433 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:06:08.089445 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:06:08.089458 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:06:08.089470 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:06:08.089483 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:06:08.089496 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:08.089512 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:06:08.089529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Sep 13 00:06:08.089548 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:06:08.089566 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:06:08.089586 systemd[1]: Reached target machines.target - Containers. Sep 13 00:06:08.089607 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:06:08.089626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:08.089646 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:06:08.089669 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:06:08.089696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:08.089709 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:06:08.089721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:08.089733 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:06:08.089746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:08.089759 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:06:08.089771 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:06:08.089784 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:06:08.089800 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:06:08.089813 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:06:08.089826 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:06:08.089838 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:06:08.089851 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:06:08.089864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:06:08.089876 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:06:08.089889 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:06:08.089902 systemd[1]: Stopped verity-setup.service. Sep 13 00:06:08.089919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:08.089931 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:06:08.089944 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:06:08.089957 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:06:08.090006 systemd-journald[1105]: Collecting audit messages is disabled. Sep 13 00:06:08.091010 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:06:08.091038 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:06:08.091057 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:06:08.091073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:06:08.091086 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 13 00:06:08.091100 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:06:08.091113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:08.091133 systemd-journald[1105]: Journal started Sep 13 00:06:08.091181 systemd-journald[1105]: Runtime Journal (/run/log/journal/e294abfd2c4d49e78026f5874826d7cd) is 4.9M, max 39.3M, 34.4M free. Sep 13 00:06:07.842042 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:06:07.862330 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 00:06:08.092788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:07.862811 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:06:08.096010 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:06:08.095961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:08.096152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:08.097730 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:06:08.098324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:06:08.098917 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:06:08.114719 kernel: loop: module loaded Sep 13 00:06:08.114793 kernel: fuse: init (API version 7.39) Sep 13 00:06:08.116994 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:06:08.117858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:06:08.118539 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:08.119838 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:08.121467 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:06:08.137913 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:06:08.143393 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:06:08.145783 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:06:08.145826 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:06:08.147283 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:06:08.159969 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:06:08.164931 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:06:08.165471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:08.168852 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:06:08.172880 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:06:08.173432 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:08.175813 kernel: ACPI: bus type drm_connector registered Sep 13 00:06:08.176025 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
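The modprobe@<module>.service instances here are thin systemd wrappers that effectively run modprobe on the instance name; loading the same modules manually would be:

    modprobe configfs
    modprobe dm_mod
    modprobe fuse     # matches the "fuse: init (API version 7.39)" kernel line
    modprobe loop     # matches the "loop: module loaded" kernel line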
Sep 13 00:06:08.178805 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:08.181968 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:06:08.184863 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:06:08.188744 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:06:08.189461 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:06:08.189605 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:06:08.190260 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:06:08.190670 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:06:08.195951 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:06:08.196626 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:06:08.208813 systemd-journald[1105]: Time spent on flushing to /var/log/journal/e294abfd2c4d49e78026f5874826d7cd is 84.955ms for 986 entries. Sep 13 00:06:08.208813 systemd-journald[1105]: System Journal (/var/log/journal/e294abfd2c4d49e78026f5874826d7cd) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:06:08.311920 systemd-journald[1105]: Received client request to flush runtime journal. Sep 13 00:06:08.312004 kernel: loop0: detected capacity change from 0 to 229808 Sep 13 00:06:08.210315 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:06:08.216955 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:06:08.219604 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:06:08.277709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:06:08.280388 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:06:08.305748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:06:08.318065 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:06:08.325196 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:06:08.336285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:06:08.343906 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:06:08.353715 kernel: loop1: detected capacity change from 0 to 140768 Sep 13 00:06:08.354763 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:06:08.378940 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:06:08.385982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:06:08.395534 kernel: loop2: detected capacity change from 0 to 8 Sep 13 00:06:08.417707 kernel: loop3: detected capacity change from 0 to 142488 Sep 13 00:06:08.457038 kernel: loop4: detected capacity change from 0 to 229808 Sep 13 00:06:08.478832 kernel: loop5: detected capacity change from 0 to 140768 Sep 13 00:06:08.485191 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Sep 13 00:06:08.485211 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. 
Sep 13 00:06:08.497720 kernel: loop6: detected capacity change from 0 to 8 Sep 13 00:06:08.503759 kernel: loop7: detected capacity change from 0 to 142488 Sep 13 00:06:08.515505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:06:08.521436 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Sep 13 00:06:08.522145 (sd-merge)[1174]: Merged extensions into '/usr'. Sep 13 00:06:08.533898 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:06:08.533917 systemd[1]: Reloading... Sep 13 00:06:08.683721 zram_generator::config[1201]: No configuration found. Sep 13 00:06:08.838197 ldconfig[1142]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:06:08.884895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:08.938422 systemd[1]: Reloading finished in 402 ms. Sep 13 00:06:08.959896 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:06:08.961157 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:06:08.971942 systemd[1]: Starting ensure-sysext.service... Sep 13 00:06:08.978911 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:06:08.997885 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:06:08.997909 systemd[1]: Reloading... Sep 13 00:06:09.025194 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:06:09.028130 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:06:09.030700 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:06:09.031118 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 13 00:06:09.031373 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 13 00:06:09.035044 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:06:09.035234 systemd-tmpfiles[1245]: Skipping /boot Sep 13 00:06:09.063669 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:06:09.065874 systemd-tmpfiles[1245]: Skipping /boot Sep 13 00:06:09.070701 zram_generator::config[1268]: No configuration found. Sep 13 00:06:09.277277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:09.332791 systemd[1]: Reloading finished in 334 ms. Sep 13 00:06:09.347575 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:06:09.355133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:06:09.361601 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:06:09.363888 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
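The (sd-merge) lines above are systemd-sysext merging the listed extension images into /usr. Flatcar drives this automatically during boot, but the merge state can be inspected or re-applied with the standard tool:

    systemd-sysext status     # show which hierarchies have extensions merged
    systemd-sysext refresh    # unmerge and re-merge in one step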
Sep 13 00:06:09.367900 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:06:09.371866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:06:09.373896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:06:09.381941 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:06:09.388947 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:09.389173 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:09.392591 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:09.403006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:09.411977 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:09.413082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:09.413220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:09.419815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:09.420055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:09.420260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:09.431033 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:06:09.431735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:09.432797 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:06:09.441120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:09.441402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:09.449072 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:06:09.449615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:09.449776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:09.450389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:09.450637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:09.455715 systemd[1]: Finished ensure-sysext.service. Sep 13 00:06:09.456340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:09.456493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 13 00:06:09.458097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:09.465844 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:06:09.471638 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:06:09.471839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:06:09.477469 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:09.477643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:09.478384 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:09.492894 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:06:09.505980 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:06:09.516822 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:06:09.517382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:06:09.520425 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Sep 13 00:06:09.524448 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:06:09.526796 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:06:09.536690 augenrules[1358]: No rules Sep 13 00:06:09.537529 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:06:09.564627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:06:09.571907 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:06:09.621351 systemd-resolved[1321]: Positive Trust Anchors: Sep 13 00:06:09.621365 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:06:09.621403 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:06:09.629010 systemd-resolved[1321]: Using system hostname 'ci-4081.3.5-n-3ba90871da'. Sep 13 00:06:09.631104 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:06:09.631618 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:06:09.667592 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:06:09.668198 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:06:09.701518 systemd-networkd[1366]: lo: Link UP Sep 13 00:06:09.701804 systemd-networkd[1366]: lo: Gained carrier Sep 13 00:06:09.703226 systemd-networkd[1366]: Enumeration completed Sep 13 00:06:09.703727 systemd[1]: Started systemd-networkd.service - Network Configuration. 
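The resolver, time-sync, and network services brought up here can be checked on the running system with their matching status commands:

    resolvectl status
    networkctl list
    timedatectl timesync-status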
Sep 13 00:06:09.704234 systemd[1]: Reached target network.target - Network. Sep 13 00:06:09.714131 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:06:09.765882 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Sep 13 00:06:09.766266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:09.766415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:06:09.774899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:06:09.777645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:06:09.780851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:06:09.781307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:06:09.781347 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:06:09.781365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:06:09.781632 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:06:09.789717 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1371) Sep 13 00:06:09.798205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:06:09.799795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:06:09.818699 kernel: ISO 9660 Extensions: RRIP_1991A Sep 13 00:06:09.823855 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 13 00:06:09.841953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:06:09.842333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:06:09.843200 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:06:09.843583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:06:09.845020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:06:09.845082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:06:09.852709 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 00:06:09.856573 systemd-networkd[1366]: eth1: Configuring with /run/systemd/network/10-2e:39:ba:50:2e:22.network. Sep 13 00:06:09.857215 systemd-networkd[1366]: eth1: Link UP Sep 13 00:06:09.857220 systemd-networkd[1366]: eth1: Gained carrier Sep 13 00:06:09.859180 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:06:09.861035 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:09.876797 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 13 00:06:09.895985 systemd-networkd[1366]: eth0: Configuring with /run/systemd/network/10-6e:40:7b:04:8d:45.network. 
Sep 13 00:06:09.897012 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:09.898239 systemd-networkd[1366]: eth0: Link UP Sep 13 00:06:09.898335 systemd-networkd[1366]: eth0: Gained carrier Sep 13 00:06:09.901790 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:09.902151 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:09.935344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:06:09.938739 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:06:09.943860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:06:09.970709 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:06:09.978044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:09.980050 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:06:10.066712 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 13 00:06:10.066807 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 13 00:06:10.076660 kernel: Console: switching to colour dummy device 80x25 Sep 13 00:06:10.076769 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 13 00:06:10.076814 kernel: [drm] features: -context_init Sep 13 00:06:10.076830 kernel: [drm] number of scanouts: 1 Sep 13 00:06:10.076844 kernel: [drm] number of cap sets: 0 Sep 13 00:06:10.075915 systemd-vconsole-setup[1410]: KD_FONT_OP_GET failed while trying to read the font data: Function not implemented Sep 13 00:06:10.075921 systemd-vconsole-setup[1410]: Fonts will not be copied to remaining consoles Sep 13 00:06:10.078554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:10.079747 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Sep 13 00:06:10.088770 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 13 00:06:10.088852 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 00:06:10.093105 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:06:10.093244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:10.095527 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 13 00:06:10.097214 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:10.109215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:10.122430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:06:10.122737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:10.127973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:06:10.136773 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:06:10.165082 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:06:10.173020 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:06:10.186105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:06:10.189701 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
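eth0 and eth1 above are configured from generated /run/systemd/network/10-<mac>.network units. Only the file names appear in the log; a hypothetical minimal unit of that shape is sketched below (DigitalOcean's agent normally writes static addressing here, so the DHCP setting is purely an assumption):

    cat > /run/systemd/network/10-6e:40:7b:04:8d:45.network <<'EOF'
    [Match]
    MACAddress=6e:40:7b:04:8d:45

    [Network]
    DHCP=ipv4
    EOF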
Sep 13 00:06:10.225748 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:06:10.226217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:06:10.226315 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:06:10.226488 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:06:10.227026 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:06:10.228041 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:06:10.228196 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:06:10.228265 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:06:10.228323 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:06:10.228349 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:06:10.228394 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:06:10.231777 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:06:10.234026 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:06:10.240867 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:06:10.244764 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:06:10.248020 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:06:10.248589 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:06:10.249016 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:06:10.249438 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:06:10.249462 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:06:10.253816 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:06:10.256601 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:06:10.265895 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:06:10.271558 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:06:10.274810 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:06:10.279635 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:06:10.280146 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:06:10.289949 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:06:10.293174 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:06:10.300883 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:06:10.304122 jq[1436]: false Sep 13 00:06:10.310909 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:06:10.321856 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 13 00:06:10.322797 dbus-daemon[1435]: [system] SELinux support is enabled Sep 13 00:06:10.323846 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:06:10.324716 extend-filesystems[1437]: Found loop4 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found loop5 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found loop6 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found loop7 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found vda Sep 13 00:06:10.328465 extend-filesystems[1437]: Found vda1 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found vda2 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found vda3 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found usr Sep 13 00:06:10.328465 extend-filesystems[1437]: Found vda4 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found vda6 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found vda7 Sep 13 00:06:10.328465 extend-filesystems[1437]: Found vda9 Sep 13 00:06:10.328465 extend-filesystems[1437]: Checking size of /dev/vda9 Sep 13 00:06:10.324896 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:06:10.329918 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:06:10.346851 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:06:10.354801 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:06:10.360674 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:06:10.384014 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:06:10.385004 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:06:10.389940 jq[1448]: true Sep 13 00:06:10.400938 extend-filesystems[1437]: Resized partition /dev/vda9 Sep 13 00:06:10.404303 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:06:10.404757 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:06:10.429941 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 13 00:06:10.421148 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:06:10.430131 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:06:10.421190 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:06:10.421683 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:06:10.422832 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 13 00:06:10.422875 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 13 00:06:10.488333 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1381) Sep 13 00:06:10.493196 coreos-metadata[1434]: Sep 13 00:06:10.491 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:06:10.493508 update_engine[1446]: I20250913 00:06:10.492413 1446 main.cc:92] Flatcar Update Engine starting Sep 13 00:06:10.498991 jq[1459]: true Sep 13 00:06:10.504202 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:06:10.509068 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:06:10.518054 update_engine[1446]: I20250913 00:06:10.512703 1446 update_check_scheduler.cc:74] Next update check in 2m34s Sep 13 00:06:10.523462 coreos-metadata[1434]: Sep 13 00:06:10.523 INFO Fetch successful Sep 13 00:06:10.519933 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:06:10.524023 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:06:10.524442 tar[1454]: linux-amd64/LICENSE Sep 13 00:06:10.524650 tar[1454]: linux-amd64/helm Sep 13 00:06:10.524770 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:06:10.539707 systemd-logind[1445]: New seat seat0. Sep 13 00:06:10.544665 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:06:10.545822 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:06:10.546142 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:06:10.564709 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 13 00:06:10.574319 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:06:10.574319 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 13 00:06:10.574319 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 13 00:06:10.574936 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:06:10.587204 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Sep 13 00:06:10.587204 extend-filesystems[1437]: Found vdb Sep 13 00:06:10.576518 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:06:10.608820 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:06:10.604161 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:06:10.639990 systemd[1]: Starting sshkeys.service... Sep 13 00:06:10.684191 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:06:10.694353 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:06:10.759608 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:06:10.763167 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 13 00:06:10.771822 coreos-metadata[1499]: Sep 13 00:06:10.771 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:06:10.788924 coreos-metadata[1499]: Sep 13 00:06:10.786 INFO Fetch successful Sep 13 00:06:10.805816 unknown[1499]: wrote ssh authorized keys file for user: core Sep 13 00:06:10.867247 update-ssh-keys[1506]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:06:10.872011 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:06:10.877726 systemd[1]: Finished sshkeys.service. Sep 13 00:06:10.923415 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:06:11.096709 containerd[1472]: time="2025-09-13T00:06:11.094895467Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:06:11.142548 containerd[1472]: time="2025-09-13T00:06:11.142261456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.144646357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.144704140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.144728021Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.144886152Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.144901648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.144961351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.144973878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.145152989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.145167318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.145180782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:11.145766 containerd[1472]: time="2025-09-13T00:06:11.145189828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:06:11.146102 containerd[1472]: time="2025-09-13T00:06:11.145255785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:11.146102 containerd[1472]: time="2025-09-13T00:06:11.145446370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:06:11.146102 containerd[1472]: time="2025-09-13T00:06:11.145562552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:06:11.146102 containerd[1472]: time="2025-09-13T00:06:11.145576172Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:06:11.146102 containerd[1472]: time="2025-09-13T00:06:11.145645609Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:06:11.146102 containerd[1472]: time="2025-09-13T00:06:11.145704610Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:06:11.149207 containerd[1472]: time="2025-09-13T00:06:11.149165415Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:06:11.149318 containerd[1472]: time="2025-09-13T00:06:11.149221383Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:06:11.149318 containerd[1472]: time="2025-09-13T00:06:11.149238447Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:06:11.149318 containerd[1472]: time="2025-09-13T00:06:11.149253218Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:06:11.149318 containerd[1472]: time="2025-09-13T00:06:11.149269394Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:06:11.149426 containerd[1472]: time="2025-09-13T00:06:11.149411474Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:06:11.149691 containerd[1472]: time="2025-09-13T00:06:11.149660765Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:06:11.149798 containerd[1472]: time="2025-09-13T00:06:11.149783656Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:06:11.149826 containerd[1472]: time="2025-09-13T00:06:11.149803039Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:06:11.149826 containerd[1472]: time="2025-09-13T00:06:11.149816566Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:06:11.149866 containerd[1472]: time="2025-09-13T00:06:11.149830092Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:06:11.149866 containerd[1472]: time="2025-09-13T00:06:11.149844553Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 13 00:06:11.149866 containerd[1472]: time="2025-09-13T00:06:11.149856380Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:06:11.149935 containerd[1472]: time="2025-09-13T00:06:11.149869749Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:06:11.149935 containerd[1472]: time="2025-09-13T00:06:11.149884846Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:06:11.149935 containerd[1472]: time="2025-09-13T00:06:11.149898026Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:06:11.149935 containerd[1472]: time="2025-09-13T00:06:11.149923836Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.149939206Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.149959228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.149972163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.149983369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.149996988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.150008495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.150021358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.150032756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150043 containerd[1472]: time="2025-09-13T00:06:11.150045609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150057997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150071226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150092100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150108993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150121767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150135991Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150155150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150165980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150176436Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150218853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150237426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150249003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:06:11.150400 containerd[1472]: time="2025-09-13T00:06:11.150261424Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:06:11.150774 containerd[1472]: time="2025-09-13T00:06:11.150270211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:06:11.150774 containerd[1472]: time="2025-09-13T00:06:11.150282584Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:06:11.150774 containerd[1472]: time="2025-09-13T00:06:11.150293119Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:06:11.150774 containerd[1472]: time="2025-09-13T00:06:11.150302950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:06:11.150868 containerd[1472]: time="2025-09-13T00:06:11.150587281Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:06:11.150868 containerd[1472]: time="2025-09-13T00:06:11.150647441Z" level=info msg="Connect containerd service" Sep 13 00:06:11.150868 containerd[1472]: time="2025-09-13T00:06:11.150740884Z" level=info msg="using legacy CRI server" Sep 13 00:06:11.150868 containerd[1472]: time="2025-09-13T00:06:11.150749550Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:06:11.151217 containerd[1472]: time="2025-09-13T00:06:11.150880152Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:06:11.154696 containerd[1472]: time="2025-09-13T00:06:11.151606150Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:06:11.154696 
containerd[1472]: time="2025-09-13T00:06:11.151713498Z" level=info msg="Start subscribing containerd event" Sep 13 00:06:11.154696 containerd[1472]: time="2025-09-13T00:06:11.151773715Z" level=info msg="Start recovering state" Sep 13 00:06:11.154696 containerd[1472]: time="2025-09-13T00:06:11.151834746Z" level=info msg="Start event monitor" Sep 13 00:06:11.154696 containerd[1472]: time="2025-09-13T00:06:11.151849593Z" level=info msg="Start snapshots syncer" Sep 13 00:06:11.154696 containerd[1472]: time="2025-09-13T00:06:11.151859531Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:06:11.154696 containerd[1472]: time="2025-09-13T00:06:11.151867403Z" level=info msg="Start streaming server" Sep 13 00:06:11.154696 containerd[1472]: time="2025-09-13T00:06:11.152355474Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:06:11.154696 containerd[1472]: time="2025-09-13T00:06:11.152414538Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:06:11.153793 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:06:11.158299 containerd[1472]: time="2025-09-13T00:06:11.155501991Z" level=info msg="containerd successfully booted in 0.063329s" Sep 13 00:06:11.168054 systemd-networkd[1366]: eth0: Gained IPv6LL Sep 13 00:06:11.168500 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:11.170674 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:06:11.174182 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:06:11.184751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:11.187400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:06:11.251102 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:06:11.251791 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:06:11.278881 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:06:11.289508 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:06:11.305366 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:06:11.305543 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:06:11.313958 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:06:11.339067 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:06:11.351316 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:06:11.361046 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:06:11.361744 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:06:11.488029 systemd-networkd[1366]: eth1: Gained IPv6LL Sep 13 00:06:11.488797 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:11.571309 tar[1454]: linux-amd64/README.md Sep 13 00:06:11.585269 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:06:12.250990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:12.252268 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:06:12.256372 systemd[1]: Startup finished in 919ms (kernel) + 5.580s (initrd) + 5.020s (userspace) = 11.520s. 
Sep 13 00:06:12.263456 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:06:12.927463 kubelet[1556]: E0913 00:06:12.927390 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:12.930105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:12.930298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:06:12.931259 systemd[1]: kubelet.service: Consumed 1.238s CPU time. Sep 13 00:06:16.603212 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:06:16.611788 systemd[1]: Started sshd@0-143.198.49.51:22-139.178.68.195:54898.service - OpenSSH per-connection server daemon (139.178.68.195:54898). Sep 13 00:06:16.698540 sshd[1568]: Accepted publickey for core from 139.178.68.195 port 54898 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:06:16.700402 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:16.710394 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:06:16.717150 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:06:16.720782 systemd-logind[1445]: New session 1 of user core. Sep 13 00:06:16.735497 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:06:16.748322 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:06:16.751766 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:06:16.865958 systemd[1572]: Queued start job for default target default.target. Sep 13 00:06:16.876953 systemd[1572]: Created slice app.slice - User Application Slice. Sep 13 00:06:16.876988 systemd[1572]: Reached target paths.target - Paths. Sep 13 00:06:16.877003 systemd[1572]: Reached target timers.target - Timers. Sep 13 00:06:16.878527 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:06:16.898887 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:06:16.899033 systemd[1572]: Reached target sockets.target - Sockets. Sep 13 00:06:16.899050 systemd[1572]: Reached target basic.target - Basic System. Sep 13 00:06:16.899103 systemd[1572]: Reached target default.target - Main User Target. Sep 13 00:06:16.899182 systemd[1572]: Startup finished in 140ms. Sep 13 00:06:16.899317 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:06:16.905949 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:06:16.976421 systemd[1]: Started sshd@1-143.198.49.51:22-139.178.68.195:54910.service - OpenSSH per-connection server daemon (139.178.68.195:54910). Sep 13 00:06:17.025969 sshd[1583]: Accepted publickey for core from 139.178.68.195 port 54910 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:06:17.028032 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:17.033543 systemd-logind[1445]: New session 2 of user core. Sep 13 00:06:17.039031 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 13 00:06:17.100846 sshd[1583]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:17.109721 systemd[1]: sshd@1-143.198.49.51:22-139.178.68.195:54910.service: Deactivated successfully. Sep 13 00:06:17.111794 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:06:17.113907 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:06:17.119041 systemd[1]: Started sshd@2-143.198.49.51:22-139.178.68.195:54922.service - OpenSSH per-connection server daemon (139.178.68.195:54922). Sep 13 00:06:17.120872 systemd-logind[1445]: Removed session 2. Sep 13 00:06:17.169316 sshd[1590]: Accepted publickey for core from 139.178.68.195 port 54922 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:06:17.171098 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:17.177915 systemd-logind[1445]: New session 3 of user core. Sep 13 00:06:17.184020 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:06:17.241700 sshd[1590]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:17.253620 systemd[1]: sshd@2-143.198.49.51:22-139.178.68.195:54922.service: Deactivated successfully. Sep 13 00:06:17.256012 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:06:17.257865 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:06:17.267343 systemd[1]: Started sshd@3-143.198.49.51:22-139.178.68.195:54930.service - OpenSSH per-connection server daemon (139.178.68.195:54930). Sep 13 00:06:17.269658 systemd-logind[1445]: Removed session 3. Sep 13 00:06:17.310073 sshd[1597]: Accepted publickey for core from 139.178.68.195 port 54930 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:06:17.311897 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:17.318207 systemd-logind[1445]: New session 4 of user core. Sep 13 00:06:17.323984 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:06:17.387487 sshd[1597]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:17.399831 systemd[1]: sshd@3-143.198.49.51:22-139.178.68.195:54930.service: Deactivated successfully. Sep 13 00:06:17.401974 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:06:17.403750 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:06:17.407174 systemd[1]: Started sshd@4-143.198.49.51:22-139.178.68.195:54944.service - OpenSSH per-connection server daemon (139.178.68.195:54944). Sep 13 00:06:17.409198 systemd-logind[1445]: Removed session 4. Sep 13 00:06:17.455984 sshd[1604]: Accepted publickey for core from 139.178.68.195 port 54944 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:06:17.457571 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:17.463784 systemd-logind[1445]: New session 5 of user core. Sep 13 00:06:17.465870 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 13 00:06:17.531405 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:06:17.531824 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:06:17.544394 sudo[1607]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:17.548417 sshd[1604]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:17.560901 systemd[1]: sshd@4-143.198.49.51:22-139.178.68.195:54944.service: Deactivated successfully. Sep 13 00:06:17.563189 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:06:17.565405 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:06:17.571052 systemd[1]: Started sshd@5-143.198.49.51:22-139.178.68.195:54948.service - OpenSSH per-connection server daemon (139.178.68.195:54948). Sep 13 00:06:17.572766 systemd-logind[1445]: Removed session 5. Sep 13 00:06:17.614900 sshd[1612]: Accepted publickey for core from 139.178.68.195 port 54948 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:06:17.616840 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:17.622799 systemd-logind[1445]: New session 6 of user core. Sep 13 00:06:17.627984 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:06:17.689970 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:06:17.690265 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:06:17.695166 sudo[1616]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:17.701603 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:06:17.702072 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:06:17.718525 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:06:17.720772 auditctl[1619]: No rules Sep 13 00:06:17.721201 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:06:17.721443 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:06:17.724569 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:06:17.762391 augenrules[1637]: No rules Sep 13 00:06:17.763840 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:06:17.765024 sudo[1615]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:17.769004 sshd[1612]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:17.780824 systemd[1]: sshd@5-143.198.49.51:22-139.178.68.195:54948.service: Deactivated successfully. Sep 13 00:06:17.782602 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:06:17.783351 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:06:17.789004 systemd[1]: Started sshd@6-143.198.49.51:22-139.178.68.195:54964.service - OpenSSH per-connection server daemon (139.178.68.195:54964). Sep 13 00:06:17.790561 systemd-logind[1445]: Removed session 6. Sep 13 00:06:17.841202 sshd[1645]: Accepted publickey for core from 139.178.68.195 port 54964 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:06:17.842937 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:17.848859 systemd-logind[1445]: New session 7 of user core. 
Sep 13 00:06:17.855054 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:06:17.914669 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:06:17.915499 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:06:18.359227 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:06:18.359626 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:06:18.786454 dockerd[1665]: time="2025-09-13T00:06:18.785845841Z" level=info msg="Starting up" Sep 13 00:06:18.912175 dockerd[1665]: time="2025-09-13T00:06:18.912126868Z" level=info msg="Loading containers: start." Sep 13 00:06:19.023707 kernel: Initializing XFRM netlink socket Sep 13 00:06:19.052263 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:19.063824 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:19.108239 systemd-networkd[1366]: docker0: Link UP Sep 13 00:06:19.108522 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Sep 13 00:06:19.129599 dockerd[1665]: time="2025-09-13T00:06:19.129557621Z" level=info msg="Loading containers: done." Sep 13 00:06:19.146597 dockerd[1665]: time="2025-09-13T00:06:19.146031033Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:06:19.146597 dockerd[1665]: time="2025-09-13T00:06:19.146179397Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:06:19.146597 dockerd[1665]: time="2025-09-13T00:06:19.146326686Z" level=info msg="Daemon has completed initialization" Sep 13 00:06:19.177283 dockerd[1665]: time="2025-09-13T00:06:19.177134671Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:06:19.177249 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:06:20.084506 containerd[1472]: time="2025-09-13T00:06:20.084468815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 00:06:20.644476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount427237530.mount: Deactivated successfully. 
Sep 13 00:06:21.867880 containerd[1472]: time="2025-09-13T00:06:21.867802947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:21.869086 containerd[1472]: time="2025-09-13T00:06:21.869014887Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 13 00:06:21.871705 containerd[1472]: time="2025-09-13T00:06:21.869571940Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:21.873946 containerd[1472]: time="2025-09-13T00:06:21.873883337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:21.875342 containerd[1472]: time="2025-09-13T00:06:21.875294878Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.790785445s" Sep 13 00:06:21.875342 containerd[1472]: time="2025-09-13T00:06:21.875346739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 13 00:06:21.876153 containerd[1472]: time="2025-09-13T00:06:21.876122648Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 00:06:23.180622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:06:23.191020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:23.329125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:23.335080 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:06:23.401285 kubelet[1881]: E0913 00:06:23.401204 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:06:23.405803 containerd[1472]: time="2025-09-13T00:06:23.405747534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:23.408414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:06:23.408747 containerd[1472]: time="2025-09-13T00:06:23.408570219Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 13 00:06:23.409083 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 00:06:23.409452 containerd[1472]: time="2025-09-13T00:06:23.409399694Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:23.412919 containerd[1472]: time="2025-09-13T00:06:23.412500257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:23.413792 containerd[1472]: time="2025-09-13T00:06:23.413629640Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.537473601s" Sep 13 00:06:23.413792 containerd[1472]: time="2025-09-13T00:06:23.413668624Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 13 00:06:23.416175 containerd[1472]: time="2025-09-13T00:06:23.415876470Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 00:06:24.580346 containerd[1472]: time="2025-09-13T00:06:24.580288508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:24.581551 containerd[1472]: time="2025-09-13T00:06:24.581442129Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 13 00:06:24.582115 containerd[1472]: time="2025-09-13T00:06:24.582078597Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:24.585703 containerd[1472]: time="2025-09-13T00:06:24.585106305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:24.587302 containerd[1472]: time="2025-09-13T00:06:24.586650953Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.170650125s" Sep 13 00:06:24.587302 containerd[1472]: time="2025-09-13T00:06:24.586711596Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 13 00:06:24.587708 containerd[1472]: time="2025-09-13T00:06:24.587607082Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 00:06:25.657107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777915575.mount: Deactivated successfully. 
Sep 13 00:06:26.173872 containerd[1472]: time="2025-09-13T00:06:26.173812869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:26.175183 containerd[1472]: time="2025-09-13T00:06:26.175076048Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 13 00:06:26.175788 containerd[1472]: time="2025-09-13T00:06:26.175757187Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:26.177780 containerd[1472]: time="2025-09-13T00:06:26.177740267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:26.179002 containerd[1472]: time="2025-09-13T00:06:26.178972958Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.591335791s" Sep 13 00:06:26.179002 containerd[1472]: time="2025-09-13T00:06:26.179014107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 13 00:06:26.179491 containerd[1472]: time="2025-09-13T00:06:26.179470875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 00:06:26.357344 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Sep 13 00:06:26.737594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589303266.mount: Deactivated successfully. 
Sep 13 00:06:27.540172 containerd[1472]: time="2025-09-13T00:06:27.540093090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:27.541884 containerd[1472]: time="2025-09-13T00:06:27.541800873Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 13 00:06:27.542811 containerd[1472]: time="2025-09-13T00:06:27.542762430Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:27.546093 containerd[1472]: time="2025-09-13T00:06:27.546022298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:27.548356 containerd[1472]: time="2025-09-13T00:06:27.548135046Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.368629924s" Sep 13 00:06:27.548356 containerd[1472]: time="2025-09-13T00:06:27.548202540Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 13 00:06:27.549163 containerd[1472]: time="2025-09-13T00:06:27.549118572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:06:28.029176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098560824.mount: Deactivated successfully. 
Sep 13 00:06:28.033300 containerd[1472]: time="2025-09-13T00:06:28.032359249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:28.033300 containerd[1472]: time="2025-09-13T00:06:28.032771601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:06:28.033300 containerd[1472]: time="2025-09-13T00:06:28.033256315Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:28.035596 containerd[1472]: time="2025-09-13T00:06:28.035564860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:28.036408 containerd[1472]: time="2025-09-13T00:06:28.036377142Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 487.213059ms" Sep 13 00:06:28.036491 containerd[1472]: time="2025-09-13T00:06:28.036412224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:06:28.037961 containerd[1472]: time="2025-09-13T00:06:28.037940154Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 00:06:28.496378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289177072.mount: Deactivated successfully. Sep 13 00:06:29.408897 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Sep 13 00:06:30.167339 containerd[1472]: time="2025-09-13T00:06:30.167280698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:30.168312 containerd[1472]: time="2025-09-13T00:06:30.168244415Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 13 00:06:30.169159 containerd[1472]: time="2025-09-13T00:06:30.169127368Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:30.173077 containerd[1472]: time="2025-09-13T00:06:30.173024075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:30.174697 containerd[1472]: time="2025-09-13T00:06:30.174504587Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.136431545s" Sep 13 00:06:30.174697 containerd[1472]: time="2025-09-13T00:06:30.174549995Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 13 00:06:33.241822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:33.255129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:33.291902 systemd[1]: Reloading requested from client PID 2041 ('systemctl') (unit session-7.scope)... Sep 13 00:06:33.292124 systemd[1]: Reloading... Sep 13 00:06:33.421729 zram_generator::config[2080]: No configuration found. Sep 13 00:06:33.550076 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:33.631969 systemd[1]: Reloading finished in 339 ms. Sep 13 00:06:33.675203 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:06:33.675547 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:06:33.675942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:33.683085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:33.836451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:33.847105 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:06:33.898569 kubelet[2133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:33.898569 kubelet[2133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:06:33.898569 kubelet[2133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:33.901390 kubelet[2133]: I0913 00:06:33.901323 2133 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:06:34.266583 kubelet[2133]: I0913 00:06:34.266453 2133 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:06:34.266583 kubelet[2133]: I0913 00:06:34.266487 2133 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:06:34.269706 kubelet[2133]: I0913 00:06:34.267932 2133 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:06:34.298812 kubelet[2133]: I0913 00:06:34.298766 2133 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:06:34.301083 kubelet[2133]: E0913 00:06:34.301022 2133 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.198.49.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:06:34.314612 kubelet[2133]: E0913 00:06:34.314550 2133 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:06:34.314612 kubelet[2133]: I0913 00:06:34.314602 2133 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:06:34.322460 kubelet[2133]: I0913 00:06:34.322417 2133 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:06:34.323784 kubelet[2133]: I0913 00:06:34.323698 2133 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:06:34.327197 kubelet[2133]: I0913 00:06:34.323758 2133 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-3ba90871da","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:06:34.327197 kubelet[2133]: I0913 00:06:34.327196 2133 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:06:34.327381 kubelet[2133]: I0913 00:06:34.327212 2133 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:06:34.327381 kubelet[2133]: I0913 00:06:34.327356 2133 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:34.329591 kubelet[2133]: I0913 00:06:34.329567 2133 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:06:34.329591 kubelet[2133]: I0913 00:06:34.329594 2133 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:06:34.329899 kubelet[2133]: I0913 00:06:34.329630 2133 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:06:34.332152 kubelet[2133]: I0913 00:06:34.331619 2133 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:06:34.340312 kubelet[2133]: E0913 00:06:34.340269 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.49.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-3ba90871da&limit=500&resourceVersion=0\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:06:34.340884 kubelet[2133]: I0913 00:06:34.340862 2133 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:06:34.341664 kubelet[2133]: I0913 00:06:34.341639 2133 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Sep 13 00:06:34.343094 kubelet[2133]: W0913 00:06:34.343041 2133 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:06:34.351522 kubelet[2133]: E0913 00:06:34.351452 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.49.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:06:34.353757 kubelet[2133]: I0913 00:06:34.352643 2133 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:06:34.353861 kubelet[2133]: I0913 00:06:34.353826 2133 server.go:1289] "Started kubelet" Sep 13 00:06:34.359878 kubelet[2133]: E0913 00:06:34.358788 2133 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.49.51:6443/api/v1/namespaces/default/events\": dial tcp 143.198.49.51:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-3ba90871da.1864aed74f58f75e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-3ba90871da,UID:ci-4081.3.5-n-3ba90871da,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-3ba90871da,},FirstTimestamp:2025-09-13 00:06:34.353768286 +0000 UTC m=+0.501719455,LastTimestamp:2025-09-13 00:06:34.353768286 +0000 UTC m=+0.501719455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-3ba90871da,}" Sep 13 00:06:34.363543 kubelet[2133]: I0913 00:06:34.362821 2133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:06:34.365630 kubelet[2133]: I0913 00:06:34.365589 2133 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:06:34.366773 kubelet[2133]: I0913 00:06:34.366756 2133 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:06:34.368306 kubelet[2133]: I0913 00:06:34.368246 2133 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:06:34.368581 kubelet[2133]: I0913 00:06:34.368566 2133 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:06:34.371407 kubelet[2133]: I0913 00:06:34.370975 2133 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:06:34.371757 kubelet[2133]: I0913 00:06:34.371743 2133 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:06:34.372581 kubelet[2133]: E0913 00:06:34.372559 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-3ba90871da\" not found" Sep 13 00:06:34.374296 kubelet[2133]: I0913 00:06:34.374278 2133 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:06:34.374465 kubelet[2133]: I0913 00:06:34.374454 2133 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:06:34.375685 kubelet[2133]: E0913 00:06:34.375640 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://143.198.49.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-3ba90871da?timeout=10s\": dial tcp 143.198.49.51:6443: connect: connection refused" interval="200ms" Sep 13 00:06:34.377252 kubelet[2133]: I0913 00:06:34.376291 2133 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:06:34.377413 kubelet[2133]: I0913 00:06:34.377395 2133 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:06:34.377623 kubelet[2133]: E0913 00:06:34.376955 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.49.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:06:34.378008 kubelet[2133]: E0913 00:06:34.377072 2133 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:06:34.379565 kubelet[2133]: I0913 00:06:34.379549 2133 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:06:34.394098 kubelet[2133]: I0913 00:06:34.394074 2133 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:06:34.394560 kubelet[2133]: I0913 00:06:34.394316 2133 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:06:34.394560 kubelet[2133]: I0913 00:06:34.394344 2133 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:34.396712 kubelet[2133]: I0913 00:06:34.396616 2133 policy_none.go:49] "None policy: Start" Sep 13 00:06:34.396712 kubelet[2133]: I0913 00:06:34.396638 2133 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:06:34.396712 kubelet[2133]: I0913 00:06:34.396649 2133 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:06:34.401597 kubelet[2133]: I0913 00:06:34.401554 2133 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:06:34.404043 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:06:34.405076 kubelet[2133]: I0913 00:06:34.405057 2133 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:06:34.405183 kubelet[2133]: I0913 00:06:34.405175 2133 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:06:34.405249 kubelet[2133]: I0913 00:06:34.405241 2133 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:06:34.405310 kubelet[2133]: I0913 00:06:34.405303 2133 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:06:34.405422 kubelet[2133]: E0913 00:06:34.405398 2133 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:06:34.410292 kubelet[2133]: E0913 00:06:34.410267 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.49.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:06:34.427169 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:06:34.431970 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:06:34.438493 kubelet[2133]: E0913 00:06:34.438466 2133 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:06:34.438861 kubelet[2133]: I0913 00:06:34.438841 2133 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:06:34.439312 kubelet[2133]: I0913 00:06:34.439176 2133 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:06:34.439672 kubelet[2133]: I0913 00:06:34.439649 2133 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:06:34.441024 kubelet[2133]: E0913 00:06:34.441007 2133 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:06:34.441234 kubelet[2133]: E0913 00:06:34.441223 2133 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-3ba90871da\" not found" Sep 13 00:06:34.517483 systemd[1]: Created slice kubepods-burstable-pod53d0ba2a66302c9a7b5b323c7b2b95c9.slice - libcontainer container kubepods-burstable-pod53d0ba2a66302c9a7b5b323c7b2b95c9.slice. Sep 13 00:06:34.530866 kubelet[2133]: E0913 00:06:34.530823 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.534303 systemd[1]: Created slice kubepods-burstable-pod122a7dab6d9282d5653acbc4a7f62369.slice - libcontainer container kubepods-burstable-pod122a7dab6d9282d5653acbc4a7f62369.slice. 
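The eviction manager control loop started here works against the HardEvictionThresholds printed in the node config earlier in this log (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A small self-contained sketch of how such signal/threshold pairs can be evaluated; the observed values fed in below are made up purely for illustration.

```go
package main

import "fmt"

// threshold mirrors the shape of the hard eviction thresholds printed in the
// node config above: each signal carries either an absolute quantity (bytes)
// or a percentage of the resource's capacity.
type threshold struct {
	signal   string
	quantity int64   // absolute bytes, 0 if unused
	percent  float64 // fraction of capacity, 0 if unused
}

// breached reports whether the available amount for a signal is below the
// configured threshold, given the resource's total capacity.
func breached(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	thresholds := []threshold{
		{signal: "memory.available", quantity: 100 << 20}, // 100Mi
		{signal: "nodefs.available", percent: 0.10},
		{signal: "imagefs.available", percent: 0.15},
	}
	// Hypothetical observations: 80Mi free memory on a 2Gi node, 30% free disk.
	observed := map[string][2]int64{
		"memory.available":  {80 << 20, 2 << 30},
		"nodefs.available":  {24 << 30, 80 << 30},
		"imagefs.available": {24 << 30, 80 << 30},
	}
	for _, t := range thresholds {
		o := observed[t.signal]
		fmt.Printf("%s breached=%v\n", t.signal, breached(t, o[0], o[1]))
	}
}
```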
Sep 13 00:06:34.540886 kubelet[2133]: I0913 00:06:34.540839 2133 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.541317 kubelet[2133]: E0913 00:06:34.541265 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.49.51:6443/api/v1/nodes\": dial tcp 143.198.49.51:6443: connect: connection refused" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.542839 kubelet[2133]: E0913 00:06:34.542806 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.546020 systemd[1]: Created slice kubepods-burstable-pod003f4bf7f00c74e036a57ede0080d1ba.slice - libcontainer container kubepods-burstable-pod003f4bf7f00c74e036a57ede0080d1ba.slice. Sep 13 00:06:34.547974 kubelet[2133]: E0913 00:06:34.547941 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.579704 kubelet[2133]: E0913 00:06:34.578654 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.49.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-3ba90871da?timeout=10s\": dial tcp 143.198.49.51:6443: connect: connection refused" interval="400ms" Sep 13 00:06:34.676107 kubelet[2133]: I0913 00:06:34.676040 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.676390 kubelet[2133]: I0913 00:06:34.676343 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.676543 kubelet[2133]: I0913 00:06:34.676519 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/003f4bf7f00c74e036a57ede0080d1ba-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-3ba90871da\" (UID: \"003f4bf7f00c74e036a57ede0080d1ba\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.676719 kubelet[2133]: I0913 00:06:34.676667 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53d0ba2a66302c9a7b5b323c7b2b95c9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-3ba90871da\" (UID: \"53d0ba2a66302c9a7b5b323c7b2b95c9\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.676849 kubelet[2133]: I0913 00:06:34.676828 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.676952 kubelet[2133]: I0913 00:06:34.676935 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.677060 kubelet[2133]: I0913 00:06:34.677045 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.677171 kubelet[2133]: I0913 00:06:34.677155 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53d0ba2a66302c9a7b5b323c7b2b95c9-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-3ba90871da\" (UID: \"53d0ba2a66302c9a7b5b323c7b2b95c9\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.677275 kubelet[2133]: I0913 00:06:34.677260 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53d0ba2a66302c9a7b5b323c7b2b95c9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-3ba90871da\" (UID: \"53d0ba2a66302c9a7b5b323c7b2b95c9\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.743162 kubelet[2133]: I0913 00:06:34.743128 2133 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.743889 kubelet[2133]: E0913 00:06:34.743856 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.49.51:6443/api/v1/nodes\": dial tcp 143.198.49.51:6443: connect: connection refused" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:34.831991 kubelet[2133]: E0913 00:06:34.831737 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:34.832747 containerd[1472]: time="2025-09-13T00:06:34.832553013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-3ba90871da,Uid:53d0ba2a66302c9a7b5b323c7b2b95c9,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:34.834437 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Sep 13 00:06:34.844492 kubelet[2133]: E0913 00:06:34.844158 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:34.850265 kubelet[2133]: E0913 00:06:34.849908 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:34.851890 containerd[1472]: time="2025-09-13T00:06:34.851836354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-3ba90871da,Uid:122a7dab6d9282d5653acbc4a7f62369,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:34.852170 containerd[1472]: time="2025-09-13T00:06:34.852136481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-3ba90871da,Uid:003f4bf7f00c74e036a57ede0080d1ba,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:34.979744 kubelet[2133]: E0913 00:06:34.979636 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.49.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-3ba90871da?timeout=10s\": dial tcp 143.198.49.51:6443: connect: connection refused" interval="800ms" Sep 13 00:06:35.145803 kubelet[2133]: I0913 00:06:35.145370 2133 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:35.145944 kubelet[2133]: E0913 00:06:35.145824 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.49.51:6443/api/v1/nodes\": dial tcp 143.198.49.51:6443: connect: connection refused" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:35.183364 kubelet[2133]: E0913 00:06:35.183320 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.198.49.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-3ba90871da&limit=500&resourceVersion=0\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:06:35.219477 kubelet[2133]: E0913 00:06:35.219393 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.198.49.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:06:35.267193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738948544.mount: Deactivated successfully. 
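Note how the lease controller's retry interval grows across these entries: 200ms at 00:06:34.375, 400ms at 00:06:34.578, 800ms here at 00:06:34.979, and 1.6s a little later, i.e. a doubling backoff while the API server stays unreachable. A tiny sketch of that doubling pattern follows; the 7s cap is an assumption for illustration, not a value read from this log.

```go
package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry interval up to a limit, matching the
// 200ms -> 400ms -> 800ms -> 1.6s progression visible in the log above.
func nextInterval(cur, limit time.Duration) time.Duration {
	next := cur * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	interval := 200 * time.Millisecond
	for i := 0; i < 6; i++ {
		fmt.Println("retrying in", interval)
		interval = nextInterval(interval, 7*time.Second) // cap is illustrative
	}
}
```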
Sep 13 00:06:35.270389 containerd[1472]: time="2025-09-13T00:06:35.270336376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:35.271668 containerd[1472]: time="2025-09-13T00:06:35.271622625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:06:35.272385 containerd[1472]: time="2025-09-13T00:06:35.272357223Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:35.274793 containerd[1472]: time="2025-09-13T00:06:35.274240094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:35.274793 containerd[1472]: time="2025-09-13T00:06:35.274415042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:06:35.275420 containerd[1472]: time="2025-09-13T00:06:35.275384632Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:06:35.275827 containerd[1472]: time="2025-09-13T00:06:35.275803301Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:35.277734 containerd[1472]: time="2025-09-13T00:06:35.277698384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.338796ms" Sep 13 00:06:35.279655 containerd[1472]: time="2025-09-13T00:06:35.279608876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 427.663833ms" Sep 13 00:06:35.282790 containerd[1472]: time="2025-09-13T00:06:35.281860914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:06:35.286699 containerd[1472]: time="2025-09-13T00:06:35.286647778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.000169ms" Sep 13 00:06:35.432817 containerd[1472]: time="2025-09-13T00:06:35.432066366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:35.432817 containerd[1472]: time="2025-09-13T00:06:35.432121655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:35.432817 containerd[1472]: time="2025-09-13T00:06:35.432154911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:35.432817 containerd[1472]: time="2025-09-13T00:06:35.432263738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:35.436486 containerd[1472]: time="2025-09-13T00:06:35.436207077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:35.436486 containerd[1472]: time="2025-09-13T00:06:35.436268179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:35.436486 containerd[1472]: time="2025-09-13T00:06:35.436301339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:35.436486 containerd[1472]: time="2025-09-13T00:06:35.436405250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:35.445421 containerd[1472]: time="2025-09-13T00:06:35.445094651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:35.445421 containerd[1472]: time="2025-09-13T00:06:35.445167050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:35.445421 containerd[1472]: time="2025-09-13T00:06:35.445188376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:35.445421 containerd[1472]: time="2025-09-13T00:06:35.445298704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:35.467924 systemd[1]: Started cri-containerd-6a6e4ef2fe5a9abcbd0d9a0a6d11ca75a59120faa17bc660a1cc04fe639fd962.scope - libcontainer container 6a6e4ef2fe5a9abcbd0d9a0a6d11ca75a59120faa17bc660a1cc04fe639fd962. Sep 13 00:06:35.488885 systemd[1]: Started cri-containerd-94281431e33d8698838422a36c41779c91b1c13dfa778f805c64c2c9bcff68d4.scope - libcontainer container 94281431e33d8698838422a36c41779c91b1c13dfa778f805c64c2c9bcff68d4. Sep 13 00:06:35.496656 systemd[1]: Started cri-containerd-e9254d64fd3803491037cfafdaad76dfa4b08bcace0ae49300f6b928b631f36d.scope - libcontainer container e9254d64fd3803491037cfafdaad76dfa4b08bcace0ae49300f6b928b631f36d. 
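The RunPodSandbox entries above are CRI calls from the kubelet to containerd, and the sandbox IDs that systemd then starts as cri-containerd-… scopes are the values those calls return. A stripped-down sketch of issuing the same call over the CRI socket with the published runtime API; the socket path is the common containerd default and the metadata values simply restate one of the sandboxes from this log, so treat both as illustrative assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Containerd's CRI endpoint; the path is the usual default, not read from this log.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Metadata mirrors the fields shown in the RunPodSandbox log lines:
	// pod name, UID, namespace and attempt number.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-apiserver-ci-4081.3.5-n-3ba90871da",
				Uid:       "53d0ba2a66302c9a7b5b323c7b2b95c9",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. e9254d64fd38... in the log above
}
```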
Sep 13 00:06:35.569022 containerd[1472]: time="2025-09-13T00:06:35.568852856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-3ba90871da,Uid:122a7dab6d9282d5653acbc4a7f62369,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a6e4ef2fe5a9abcbd0d9a0a6d11ca75a59120faa17bc660a1cc04fe639fd962\"" Sep 13 00:06:35.572845 kubelet[2133]: E0913 00:06:35.572490 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:35.583861 containerd[1472]: time="2025-09-13T00:06:35.583020536Z" level=info msg="CreateContainer within sandbox \"6a6e4ef2fe5a9abcbd0d9a0a6d11ca75a59120faa17bc660a1cc04fe639fd962\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:06:35.587909 containerd[1472]: time="2025-09-13T00:06:35.587855316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-3ba90871da,Uid:53d0ba2a66302c9a7b5b323c7b2b95c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9254d64fd3803491037cfafdaad76dfa4b08bcace0ae49300f6b928b631f36d\"" Sep 13 00:06:35.589376 kubelet[2133]: E0913 00:06:35.589349 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:35.591511 containerd[1472]: time="2025-09-13T00:06:35.590819982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-3ba90871da,Uid:003f4bf7f00c74e036a57ede0080d1ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"94281431e33d8698838422a36c41779c91b1c13dfa778f805c64c2c9bcff68d4\"" Sep 13 00:06:35.592403 kubelet[2133]: E0913 00:06:35.592346 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:35.594515 containerd[1472]: time="2025-09-13T00:06:35.594392104Z" level=info msg="CreateContainer within sandbox \"e9254d64fd3803491037cfafdaad76dfa4b08bcace0ae49300f6b928b631f36d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:06:35.596021 containerd[1472]: time="2025-09-13T00:06:35.595960080Z" level=info msg="CreateContainer within sandbox \"94281431e33d8698838422a36c41779c91b1c13dfa778f805c64c2c9bcff68d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:06:35.606789 containerd[1472]: time="2025-09-13T00:06:35.606364017Z" level=info msg="CreateContainer within sandbox \"6a6e4ef2fe5a9abcbd0d9a0a6d11ca75a59120faa17bc660a1cc04fe639fd962\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e32d8507d6fb9a130de3ba375086bf3ec5d3bbc05c98fdeac98d8bff09c4c44\"" Sep 13 00:06:35.607670 containerd[1472]: time="2025-09-13T00:06:35.607639678Z" level=info msg="StartContainer for \"6e32d8507d6fb9a130de3ba375086bf3ec5d3bbc05c98fdeac98d8bff09c4c44\"" Sep 13 00:06:35.618290 containerd[1472]: time="2025-09-13T00:06:35.617023705Z" level=info msg="CreateContainer within sandbox \"94281431e33d8698838422a36c41779c91b1c13dfa778f805c64c2c9bcff68d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"09f2e9585a923d4bac864ab5ef5f64038c093b647af6c5ce13337a6a40f5c916\"" Sep 13 00:06:35.618953 containerd[1472]: time="2025-09-13T00:06:35.618913548Z" level=info msg="CreateContainer within 
sandbox \"e9254d64fd3803491037cfafdaad76dfa4b08bcace0ae49300f6b928b631f36d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c200e606597a8a3f159d50f428ea745848d2cfa094b33dbf99c76d1ae578afb4\"" Sep 13 00:06:35.619244 containerd[1472]: time="2025-09-13T00:06:35.619105774Z" level=info msg="StartContainer for \"09f2e9585a923d4bac864ab5ef5f64038c093b647af6c5ce13337a6a40f5c916\"" Sep 13 00:06:35.620759 containerd[1472]: time="2025-09-13T00:06:35.620724468Z" level=info msg="StartContainer for \"c200e606597a8a3f159d50f428ea745848d2cfa094b33dbf99c76d1ae578afb4\"" Sep 13 00:06:35.659976 systemd[1]: Started cri-containerd-6e32d8507d6fb9a130de3ba375086bf3ec5d3bbc05c98fdeac98d8bff09c4c44.scope - libcontainer container 6e32d8507d6fb9a130de3ba375086bf3ec5d3bbc05c98fdeac98d8bff09c4c44. Sep 13 00:06:35.670219 systemd[1]: Started cri-containerd-c200e606597a8a3f159d50f428ea745848d2cfa094b33dbf99c76d1ae578afb4.scope - libcontainer container c200e606597a8a3f159d50f428ea745848d2cfa094b33dbf99c76d1ae578afb4. Sep 13 00:06:35.690727 systemd[1]: Started cri-containerd-09f2e9585a923d4bac864ab5ef5f64038c093b647af6c5ce13337a6a40f5c916.scope - libcontainer container 09f2e9585a923d4bac864ab5ef5f64038c093b647af6c5ce13337a6a40f5c916. Sep 13 00:06:35.745029 containerd[1472]: time="2025-09-13T00:06:35.744978859Z" level=info msg="StartContainer for \"6e32d8507d6fb9a130de3ba375086bf3ec5d3bbc05c98fdeac98d8bff09c4c44\" returns successfully" Sep 13 00:06:35.771095 containerd[1472]: time="2025-09-13T00:06:35.771026543Z" level=info msg="StartContainer for \"c200e606597a8a3f159d50f428ea745848d2cfa094b33dbf99c76d1ae578afb4\" returns successfully" Sep 13 00:06:35.780789 kubelet[2133]: E0913 00:06:35.780205 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.49.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-3ba90871da?timeout=10s\": dial tcp 143.198.49.51:6443: connect: connection refused" interval="1.6s" Sep 13 00:06:35.793452 kubelet[2133]: E0913 00:06:35.793404 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.198.49.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:06:35.798108 containerd[1472]: time="2025-09-13T00:06:35.798068996Z" level=info msg="StartContainer for \"09f2e9585a923d4bac864ab5ef5f64038c093b647af6c5ce13337a6a40f5c916\" returns successfully" Sep 13 00:06:35.898787 kubelet[2133]: E0913 00:06:35.898740 2133 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.198.49.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.49.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:06:35.947825 kubelet[2133]: I0913 00:06:35.947265 2133 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:35.947825 kubelet[2133]: E0913 00:06:35.947673 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.49.51:6443/api/v1/nodes\": dial tcp 143.198.49.51:6443: connect: connection refused" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:36.433823 kubelet[2133]: E0913 00:06:36.432177 2133 kubelet.go:3305] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:36.433823 kubelet[2133]: E0913 00:06:36.432312 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:36.436415 kubelet[2133]: E0913 00:06:36.436386 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:36.436524 kubelet[2133]: E0913 00:06:36.436513 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:36.439542 kubelet[2133]: E0913 00:06:36.439513 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:36.439749 kubelet[2133]: E0913 00:06:36.439654 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:37.441870 kubelet[2133]: E0913 00:06:37.441832 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:37.442376 kubelet[2133]: E0913 00:06:37.441989 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:37.442415 kubelet[2133]: E0913 00:06:37.442379 2133 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:37.442502 kubelet[2133]: E0913 00:06:37.442483 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:37.549576 kubelet[2133]: I0913 00:06:37.549537 2133 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.052978 kubelet[2133]: E0913 00:06:38.052939 2133 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-3ba90871da\" not found" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.150765 kubelet[2133]: I0913 00:06:38.148408 2133 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.150765 kubelet[2133]: E0913 00:06:38.148461 2133 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-n-3ba90871da\": node \"ci-4081.3.5-n-3ba90871da\" not found" Sep 13 00:06:38.173898 kubelet[2133]: I0913 00:06:38.173858 2133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.182247 kubelet[2133]: E0913 00:06:38.182150 2133 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-3ba90871da\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.182247 kubelet[2133]: I0913 00:06:38.182216 2133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.185124 kubelet[2133]: E0913 00:06:38.185057 2133 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-3ba90871da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.185124 kubelet[2133]: I0913 00:06:38.185089 2133 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.187153 kubelet[2133]: E0913 00:06:38.187112 2133 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:38.353328 kubelet[2133]: I0913 00:06:38.352999 2133 apiserver.go:52] "Watching apiserver" Sep 13 00:06:38.374554 kubelet[2133]: I0913 00:06:38.374519 2133 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:06:40.299766 systemd[1]: Reloading requested from client PID 2412 ('systemctl') (unit session-7.scope)... Sep 13 00:06:40.299785 systemd[1]: Reloading... Sep 13 00:06:40.409082 zram_generator::config[2454]: No configuration found. Sep 13 00:06:40.534424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:06:40.638480 systemd[1]: Reloading finished in 338 ms. Sep 13 00:06:40.689795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:40.701356 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:06:40.701712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:40.708285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:06:40.840003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:06:40.852361 (kubelet)[2502]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:06:40.934629 kubelet[2502]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:06:40.934629 kubelet[2502]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:06:40.934629 kubelet[2502]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:06:40.934629 kubelet[2502]: I0913 00:06:40.933621 2502 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:06:40.945459 kubelet[2502]: I0913 00:06:40.943614 2502 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:06:40.945459 kubelet[2502]: I0913 00:06:40.943654 2502 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:06:40.945459 kubelet[2502]: I0913 00:06:40.943962 2502 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:06:40.945987 kubelet[2502]: I0913 00:06:40.945964 2502 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:06:40.952973 kubelet[2502]: I0913 00:06:40.952939 2502 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:06:40.959690 kubelet[2502]: E0913 00:06:40.959630 2502 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:06:40.959860 kubelet[2502]: I0913 00:06:40.959849 2502 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:06:40.963491 kubelet[2502]: I0913 00:06:40.963458 2502 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:06:40.963928 kubelet[2502]: I0913 00:06:40.963897 2502 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:06:40.964206 kubelet[2502]: I0913 00:06:40.964030 2502 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-3ba90871da","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:06:40.964337 kubelet[2502]: I0913 00:06:40.964325 2502 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 13 00:06:40.964384 kubelet[2502]: I0913 00:06:40.964378 2502 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:06:40.964499 kubelet[2502]: I0913 00:06:40.964487 2502 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:40.964791 kubelet[2502]: I0913 00:06:40.964775 2502 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:06:40.964939 kubelet[2502]: I0913 00:06:40.964928 2502 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:06:40.965030 kubelet[2502]: I0913 00:06:40.965020 2502 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:06:40.965096 kubelet[2502]: I0913 00:06:40.965088 2502 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:06:40.970709 kubelet[2502]: I0913 00:06:40.968807 2502 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:06:40.970709 kubelet[2502]: I0913 00:06:40.969320 2502 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:06:40.974228 kubelet[2502]: I0913 00:06:40.974199 2502 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:06:40.974360 kubelet[2502]: I0913 00:06:40.974249 2502 server.go:1289] "Started kubelet" Sep 13 00:06:40.982718 kubelet[2502]: E0913 00:06:40.982674 2502 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:06:40.982862 kubelet[2502]: I0913 00:06:40.982696 2502 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:06:40.986921 kubelet[2502]: I0913 00:06:40.986885 2502 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:06:40.989299 kubelet[2502]: I0913 00:06:40.982778 2502 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:06:40.989542 kubelet[2502]: I0913 00:06:40.989519 2502 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:06:40.996183 kubelet[2502]: I0913 00:06:40.996065 2502 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:06:40.996972 kubelet[2502]: I0913 00:06:40.982737 2502 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:06:40.997993 kubelet[2502]: I0913 00:06:40.997953 2502 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:06:40.999779 kubelet[2502]: I0913 00:06:40.999751 2502 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:06:40.999890 kubelet[2502]: I0913 00:06:40.999879 2502 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:06:41.001652 kubelet[2502]: I0913 00:06:41.001628 2502 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:06:41.002364 kubelet[2502]: I0913 00:06:41.002331 2502 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:06:41.007341 kubelet[2502]: I0913 00:06:41.006003 2502 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:06:41.009618 kubelet[2502]: 
I0913 00:06:41.009567 2502 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:06:41.012327 kubelet[2502]: I0913 00:06:41.012290 2502 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:06:41.012327 kubelet[2502]: I0913 00:06:41.012324 2502 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:06:41.012489 kubelet[2502]: I0913 00:06:41.012351 2502 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:06:41.012489 kubelet[2502]: I0913 00:06:41.012359 2502 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:06:41.012489 kubelet[2502]: E0913 00:06:41.012409 2502 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:06:41.069972 kubelet[2502]: I0913 00:06:41.069937 2502 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:06:41.069972 kubelet[2502]: I0913 00:06:41.069957 2502 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:06:41.069972 kubelet[2502]: I0913 00:06:41.069980 2502 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:06:41.070170 kubelet[2502]: I0913 00:06:41.070136 2502 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:06:41.070170 kubelet[2502]: I0913 00:06:41.070147 2502 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:06:41.070170 kubelet[2502]: I0913 00:06:41.070167 2502 policy_none.go:49] "None policy: Start" Sep 13 00:06:41.070251 kubelet[2502]: I0913 00:06:41.070178 2502 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:06:41.070251 kubelet[2502]: I0913 00:06:41.070188 2502 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:06:41.070321 kubelet[2502]: I0913 00:06:41.070308 2502 state_mem.go:75] "Updated machine memory state" Sep 13 00:06:41.074965 kubelet[2502]: E0913 00:06:41.074934 2502 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:06:41.075173 kubelet[2502]: I0913 00:06:41.075159 2502 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:06:41.075226 kubelet[2502]: I0913 00:06:41.075175 2502 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:06:41.077525 kubelet[2502]: I0913 00:06:41.077147 2502 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:06:41.081860 kubelet[2502]: E0913 00:06:41.081115 2502 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:06:41.114879 kubelet[2502]: I0913 00:06:41.114402 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.114879 kubelet[2502]: I0913 00:06:41.114473 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.114879 kubelet[2502]: I0913 00:06:41.114704 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.122001 kubelet[2502]: I0913 00:06:41.121910 2502 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:06:41.124661 kubelet[2502]: I0913 00:06:41.124395 2502 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:06:41.124661 kubelet[2502]: I0913 00:06:41.124410 2502 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:06:41.181666 kubelet[2502]: I0913 00:06:41.181619 2502 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.193913 kubelet[2502]: I0913 00:06:41.192907 2502 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.193913 kubelet[2502]: I0913 00:06:41.193033 2502 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200343 kubelet[2502]: I0913 00:06:41.200252 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53d0ba2a66302c9a7b5b323c7b2b95c9-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-3ba90871da\" (UID: \"53d0ba2a66302c9a7b5b323c7b2b95c9\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200343 kubelet[2502]: I0913 00:06:41.200287 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200343 kubelet[2502]: I0913 00:06:41.200308 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200343 kubelet[2502]: I0913 00:06:41.200330 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200562 kubelet[2502]: I0913 00:06:41.200362 2502 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200562 kubelet[2502]: I0913 00:06:41.200379 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53d0ba2a66302c9a7b5b323c7b2b95c9-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-3ba90871da\" (UID: \"53d0ba2a66302c9a7b5b323c7b2b95c9\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200562 kubelet[2502]: I0913 00:06:41.200404 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53d0ba2a66302c9a7b5b323c7b2b95c9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-3ba90871da\" (UID: \"53d0ba2a66302c9a7b5b323c7b2b95c9\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200562 kubelet[2502]: I0913 00:06:41.200422 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/122a7dab6d9282d5653acbc4a7f62369-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" (UID: \"122a7dab6d9282d5653acbc4a7f62369\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.200562 kubelet[2502]: I0913 00:06:41.200450 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/003f4bf7f00c74e036a57ede0080d1ba-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-3ba90871da\" (UID: \"003f4bf7f00c74e036a57ede0080d1ba\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:41.422641 kubelet[2502]: E0913 00:06:41.422524 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:41.425112 kubelet[2502]: E0913 00:06:41.424939 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:41.425112 kubelet[2502]: E0913 00:06:41.425032 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:41.969823 kubelet[2502]: I0913 00:06:41.968799 2502 apiserver.go:52] "Watching apiserver" Sep 13 00:06:42.000237 kubelet[2502]: I0913 00:06:42.000166 2502 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:06:42.048925 kubelet[2502]: E0913 00:06:42.048888 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:42.049570 kubelet[2502]: I0913 00:06:42.049548 2502 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 
00:06:42.049962 kubelet[2502]: E0913 00:06:42.049943 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:42.062900 kubelet[2502]: I0913 00:06:42.062514 2502 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:06:42.062900 kubelet[2502]: E0913 00:06:42.062607 2502 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-3ba90871da\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" Sep 13 00:06:42.063237 kubelet[2502]: E0913 00:06:42.063174 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:42.106969 kubelet[2502]: I0913 00:06:42.106902 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-3ba90871da" podStartSLOduration=1.1068664990000001 podStartE2EDuration="1.106866499s" podCreationTimestamp="2025-09-13 00:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:42.091189651 +0000 UTC m=+1.231187395" watchObservedRunningTime="2025-09-13 00:06:42.106866499 +0000 UTC m=+1.246864236" Sep 13 00:06:42.124880 kubelet[2502]: I0913 00:06:42.124415 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-3ba90871da" podStartSLOduration=1.124394898 podStartE2EDuration="1.124394898s" podCreationTimestamp="2025-09-13 00:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:42.10828271 +0000 UTC m=+1.248280454" watchObservedRunningTime="2025-09-13 00:06:42.124394898 +0000 UTC m=+1.264392633" Sep 13 00:06:42.125096 kubelet[2502]: I0913 00:06:42.124890 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-3ba90871da" podStartSLOduration=1.124836366 podStartE2EDuration="1.124836366s" podCreationTimestamp="2025-09-13 00:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:42.124840983 +0000 UTC m=+1.264838713" watchObservedRunningTime="2025-09-13 00:06:42.124836366 +0000 UTC m=+1.264834111" Sep 13 00:06:43.050946 kubelet[2502]: E0913 00:06:43.050898 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:43.052189 kubelet[2502]: E0913 00:06:43.051439 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:43.052189 kubelet[2502]: E0913 00:06:43.051669 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:44.054912 kubelet[2502]: E0913 00:06:44.054779 2502 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:44.929626 kubelet[2502]: E0913 00:06:44.929588 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:45.157741 kubelet[2502]: E0913 00:06:45.157358 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:46.057976 kubelet[2502]: E0913 00:06:46.057902 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:46.741533 kubelet[2502]: I0913 00:06:46.741484 2502 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:06:46.742210 containerd[1472]: time="2025-09-13T00:06:46.742156486Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:06:46.742746 kubelet[2502]: I0913 00:06:46.742718 2502 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:06:47.637711 systemd[1]: Created slice kubepods-besteffort-podef87a9f9_07e7_4149_9611_ece4a5534c90.slice - libcontainer container kubepods-besteffort-podef87a9f9_07e7_4149_9611_ece4a5534c90.slice. Sep 13 00:06:47.640622 kubelet[2502]: I0913 00:06:47.640579 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef87a9f9-07e7-4149-9611-ece4a5534c90-xtables-lock\") pod \"kube-proxy-t4nzh\" (UID: \"ef87a9f9-07e7-4149-9611-ece4a5534c90\") " pod="kube-system/kube-proxy-t4nzh" Sep 13 00:06:47.640622 kubelet[2502]: I0913 00:06:47.640623 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfd9x\" (UniqueName: \"kubernetes.io/projected/ef87a9f9-07e7-4149-9611-ece4a5534c90-kube-api-access-wfd9x\") pod \"kube-proxy-t4nzh\" (UID: \"ef87a9f9-07e7-4149-9611-ece4a5534c90\") " pod="kube-system/kube-proxy-t4nzh" Sep 13 00:06:47.641620 kubelet[2502]: I0913 00:06:47.640891 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef87a9f9-07e7-4149-9611-ece4a5534c90-kube-proxy\") pod \"kube-proxy-t4nzh\" (UID: \"ef87a9f9-07e7-4149-9611-ece4a5534c90\") " pod="kube-system/kube-proxy-t4nzh" Sep 13 00:06:47.641620 kubelet[2502]: I0913 00:06:47.640927 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef87a9f9-07e7-4149-9611-ece4a5534c90-lib-modules\") pod \"kube-proxy-t4nzh\" (UID: \"ef87a9f9-07e7-4149-9611-ece4a5534c90\") " pod="kube-system/kube-proxy-t4nzh" Sep 13 00:06:47.934144 systemd[1]: Created slice kubepods-besteffort-pod5f84d2de_582b_4ae3_b57a_fc161b394971.slice - libcontainer container kubepods-besteffort-pod5f84d2de_582b_4ae3_b57a_fc161b394971.slice. 
Sep 13 00:06:47.942704 kubelet[2502]: E0913 00:06:47.942290 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:47.943743 kubelet[2502]: I0913 00:06:47.942765 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr5lq\" (UniqueName: \"kubernetes.io/projected/5f84d2de-582b-4ae3-b57a-fc161b394971-kube-api-access-fr5lq\") pod \"tigera-operator-755d956888-knhxn\" (UID: \"5f84d2de-582b-4ae3-b57a-fc161b394971\") " pod="tigera-operator/tigera-operator-755d956888-knhxn" Sep 13 00:06:47.943743 kubelet[2502]: I0913 00:06:47.943248 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f84d2de-582b-4ae3-b57a-fc161b394971-var-lib-calico\") pod \"tigera-operator-755d956888-knhxn\" (UID: \"5f84d2de-582b-4ae3-b57a-fc161b394971\") " pod="tigera-operator/tigera-operator-755d956888-knhxn" Sep 13 00:06:47.944699 containerd[1472]: time="2025-09-13T00:06:47.943890630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4nzh,Uid:ef87a9f9-07e7-4149-9611-ece4a5534c90,Namespace:kube-system,Attempt:0,}" Sep 13 00:06:47.976455 containerd[1472]: time="2025-09-13T00:06:47.976333358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:47.977192 containerd[1472]: time="2025-09-13T00:06:47.977142603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:47.978183 containerd[1472]: time="2025-09-13T00:06:47.977203058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:47.978183 containerd[1472]: time="2025-09-13T00:06:47.978093382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:47.999994 systemd[1]: run-containerd-runc-k8s.io-bf55dff346d68d781421c4d6927abdefaa654909e3e0268429aa379ace47b477-runc.JmZo5q.mount: Deactivated successfully. Sep 13 00:06:48.010947 systemd[1]: Started cri-containerd-bf55dff346d68d781421c4d6927abdefaa654909e3e0268429aa379ace47b477.scope - libcontainer container bf55dff346d68d781421c4d6927abdefaa654909e3e0268429aa379ace47b477. 
Sep 13 00:06:48.042781 containerd[1472]: time="2025-09-13T00:06:48.042710460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4nzh,Uid:ef87a9f9-07e7-4149-9611-ece4a5534c90,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf55dff346d68d781421c4d6927abdefaa654909e3e0268429aa379ace47b477\"" Sep 13 00:06:48.044793 kubelet[2502]: E0913 00:06:48.044295 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:48.052085 containerd[1472]: time="2025-09-13T00:06:48.051158353Z" level=info msg="CreateContainer within sandbox \"bf55dff346d68d781421c4d6927abdefaa654909e3e0268429aa379ace47b477\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:06:48.065354 containerd[1472]: time="2025-09-13T00:06:48.065291096Z" level=info msg="CreateContainer within sandbox \"bf55dff346d68d781421c4d6927abdefaa654909e3e0268429aa379ace47b477\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4144d086fe8d8f024faff4af9f4651a3505bbd2037f9015f4522c06b6531c6f6\"" Sep 13 00:06:48.066879 containerd[1472]: time="2025-09-13T00:06:48.066026777Z" level=info msg="StartContainer for \"4144d086fe8d8f024faff4af9f4651a3505bbd2037f9015f4522c06b6531c6f6\"" Sep 13 00:06:48.099924 systemd[1]: Started cri-containerd-4144d086fe8d8f024faff4af9f4651a3505bbd2037f9015f4522c06b6531c6f6.scope - libcontainer container 4144d086fe8d8f024faff4af9f4651a3505bbd2037f9015f4522c06b6531c6f6. Sep 13 00:06:48.150154 containerd[1472]: time="2025-09-13T00:06:48.149987686Z" level=info msg="StartContainer for \"4144d086fe8d8f024faff4af9f4651a3505bbd2037f9015f4522c06b6531c6f6\" returns successfully" Sep 13 00:06:48.241458 containerd[1472]: time="2025-09-13T00:06:48.241331068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-knhxn,Uid:5f84d2de-582b-4ae3-b57a-fc161b394971,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:06:48.276295 containerd[1472]: time="2025-09-13T00:06:48.275872173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:06:48.276295 containerd[1472]: time="2025-09-13T00:06:48.275972591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:06:48.276295 containerd[1472]: time="2025-09-13T00:06:48.275998324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:48.276295 containerd[1472]: time="2025-09-13T00:06:48.276098436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:06:48.309918 systemd[1]: Started cri-containerd-76b95eafaee839ea4b2f908751ab299aa68cc1434ade21b2e67b6ba21b93f8de.scope - libcontainer container 76b95eafaee839ea4b2f908751ab299aa68cc1434ade21b2e67b6ba21b93f8de. 
Sep 13 00:06:48.366531 containerd[1472]: time="2025-09-13T00:06:48.366198865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-knhxn,Uid:5f84d2de-582b-4ae3-b57a-fc161b394971,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"76b95eafaee839ea4b2f908751ab299aa68cc1434ade21b2e67b6ba21b93f8de\"" Sep 13 00:06:48.370146 containerd[1472]: time="2025-09-13T00:06:48.370075791Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:06:49.066514 kubelet[2502]: E0913 00:06:49.066186 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:49.395726 systemd-timesyncd[1341]: Contacted time server 23.186.168.130:123 (2.flatcar.pool.ntp.org). Sep 13 00:06:49.395821 systemd-timesyncd[1341]: Initial clock synchronization to Sat 2025-09-13 00:06:49.722731 UTC. Sep 13 00:06:49.632839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567477331.mount: Deactivated successfully. Sep 13 00:06:50.071806 kubelet[2502]: E0913 00:06:50.071714 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:51.049729 kubelet[2502]: I0913 00:06:51.049183 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t4nzh" podStartSLOduration=4.049157591 podStartE2EDuration="4.049157591s" podCreationTimestamp="2025-09-13 00:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:06:49.082560657 +0000 UTC m=+8.222558405" watchObservedRunningTime="2025-09-13 00:06:51.049157591 +0000 UTC m=+10.189155337" Sep 13 00:06:51.518606 containerd[1472]: time="2025-09-13T00:06:51.516843661Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:06:51.518606 containerd[1472]: time="2025-09-13T00:06:51.518460148Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:51.522221 containerd[1472]: time="2025-09-13T00:06:51.522168037Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.152020255s" Sep 13 00:06:51.522430 containerd[1472]: time="2025-09-13T00:06:51.522408149Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:06:51.522566 containerd[1472]: time="2025-09-13T00:06:51.522332998Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:51.523595 containerd[1472]: time="2025-09-13T00:06:51.523568949Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:06:51.527978 containerd[1472]: 
time="2025-09-13T00:06:51.527931102Z" level=info msg="CreateContainer within sandbox \"76b95eafaee839ea4b2f908751ab299aa68cc1434ade21b2e67b6ba21b93f8de\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:06:51.540216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1371964134.mount: Deactivated successfully. Sep 13 00:06:51.549064 containerd[1472]: time="2025-09-13T00:06:51.549009234Z" level=info msg="CreateContainer within sandbox \"76b95eafaee839ea4b2f908751ab299aa68cc1434ade21b2e67b6ba21b93f8de\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"263b3506c70716319e46e114d67cb59ef1be373627c41b21056775437b778f6a\"" Sep 13 00:06:51.550722 containerd[1472]: time="2025-09-13T00:06:51.549773703Z" level=info msg="StartContainer for \"263b3506c70716319e46e114d67cb59ef1be373627c41b21056775437b778f6a\"" Sep 13 00:06:51.584037 systemd[1]: run-containerd-runc-k8s.io-263b3506c70716319e46e114d67cb59ef1be373627c41b21056775437b778f6a-runc.VFBPQ1.mount: Deactivated successfully. Sep 13 00:06:51.591929 systemd[1]: Started cri-containerd-263b3506c70716319e46e114d67cb59ef1be373627c41b21056775437b778f6a.scope - libcontainer container 263b3506c70716319e46e114d67cb59ef1be373627c41b21056775437b778f6a. Sep 13 00:06:51.624396 containerd[1472]: time="2025-09-13T00:06:51.624352303Z" level=info msg="StartContainer for \"263b3506c70716319e46e114d67cb59ef1be373627c41b21056775437b778f6a\" returns successfully" Sep 13 00:06:52.655520 kubelet[2502]: E0913 00:06:52.655474 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:52.671874 kubelet[2502]: I0913 00:06:52.671501 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-knhxn" podStartSLOduration=2.516359157 podStartE2EDuration="5.671476288s" podCreationTimestamp="2025-09-13 00:06:47 +0000 UTC" firstStartedPulling="2025-09-13 00:06:48.368604937 +0000 UTC m=+7.508602673" lastFinishedPulling="2025-09-13 00:06:51.52372208 +0000 UTC m=+10.663719804" observedRunningTime="2025-09-13 00:06:52.095092572 +0000 UTC m=+11.235090318" watchObservedRunningTime="2025-09-13 00:06:52.671476288 +0000 UTC m=+11.811474012" Sep 13 00:06:53.079486 kubelet[2502]: E0913 00:06:53.079333 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:54.944986 kubelet[2502]: E0913 00:06:54.944852 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:55.082346 kubelet[2502]: E0913 00:06:55.082308 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:06:55.738807 update_engine[1446]: I20250913 00:06:55.738090 1446 update_attempter.cc:509] Updating boot flags... 
Sep 13 00:06:55.800909 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2863) Sep 13 00:06:55.895793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2865) Sep 13 00:06:57.009293 sudo[1648]: pam_unix(sudo:session): session closed for user root Sep 13 00:06:57.015269 sshd[1645]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:57.021622 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:06:57.022048 systemd[1]: sshd@6-143.198.49.51:22-139.178.68.195:54964.service: Deactivated successfully. Sep 13 00:06:57.027961 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:06:57.028166 systemd[1]: session-7.scope: Consumed 5.302s CPU time, 144.2M memory peak, 0B memory swap peak. Sep 13 00:06:57.035069 systemd-logind[1445]: Removed session 7. Sep 13 00:07:01.207114 systemd[1]: Created slice kubepods-besteffort-pod0caf781c_f956_4bd1_9df5_41359ca9833c.slice - libcontainer container kubepods-besteffort-pod0caf781c_f956_4bd1_9df5_41359ca9833c.slice. Sep 13 00:07:01.237055 kubelet[2502]: I0913 00:07:01.237008 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0caf781c-f956-4bd1-9df5-41359ca9833c-typha-certs\") pod \"calico-typha-bf7cc9b88-bfwcq\" (UID: \"0caf781c-f956-4bd1-9df5-41359ca9833c\") " pod="calico-system/calico-typha-bf7cc9b88-bfwcq" Sep 13 00:07:01.237055 kubelet[2502]: I0913 00:07:01.237053 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56n5m\" (UniqueName: \"kubernetes.io/projected/0caf781c-f956-4bd1-9df5-41359ca9833c-kube-api-access-56n5m\") pod \"calico-typha-bf7cc9b88-bfwcq\" (UID: \"0caf781c-f956-4bd1-9df5-41359ca9833c\") " pod="calico-system/calico-typha-bf7cc9b88-bfwcq" Sep 13 00:07:01.237055 kubelet[2502]: I0913 00:07:01.237074 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caf781c-f956-4bd1-9df5-41359ca9833c-tigera-ca-bundle\") pod \"calico-typha-bf7cc9b88-bfwcq\" (UID: \"0caf781c-f956-4bd1-9df5-41359ca9833c\") " pod="calico-system/calico-typha-bf7cc9b88-bfwcq" Sep 13 00:07:01.515504 kubelet[2502]: E0913 00:07:01.515337 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:01.518376 containerd[1472]: time="2025-09-13T00:07:01.517425204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf7cc9b88-bfwcq,Uid:0caf781c-f956-4bd1-9df5-41359ca9833c,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:01.523745 systemd[1]: Created slice kubepods-besteffort-pod5ffa0c6f_405d_4128_8557_fee8827ddf90.slice - libcontainer container kubepods-besteffort-pod5ffa0c6f_405d_4128_8557_fee8827ddf90.slice. 
Sep 13 00:07:01.542727 kubelet[2502]: I0913 00:07:01.539712 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ffa0c6f-405d-4128-8557-fee8827ddf90-node-certs\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.542727 kubelet[2502]: I0913 00:07:01.539772 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-policysync\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.542727 kubelet[2502]: I0913 00:07:01.539796 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-var-lib-calico\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.542727 kubelet[2502]: I0913 00:07:01.539823 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpvv\" (UniqueName: \"kubernetes.io/projected/5ffa0c6f-405d-4128-8557-fee8827ddf90-kube-api-access-jlpvv\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.542727 kubelet[2502]: I0913 00:07:01.539848 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-cni-log-dir\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.543101 kubelet[2502]: I0913 00:07:01.539877 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ffa0c6f-405d-4128-8557-fee8827ddf90-tigera-ca-bundle\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.543101 kubelet[2502]: I0913 00:07:01.539899 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-xtables-lock\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.543101 kubelet[2502]: I0913 00:07:01.539921 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-lib-modules\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.543101 kubelet[2502]: I0913 00:07:01.539946 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-cni-net-dir\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.543101 kubelet[2502]: I0913 00:07:01.539970 2502 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-var-run-calico\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.543334 kubelet[2502]: I0913 00:07:01.539991 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-cni-bin-dir\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.543334 kubelet[2502]: I0913 00:07:01.540011 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ffa0c6f-405d-4128-8557-fee8827ddf90-flexvol-driver-host\") pod \"calico-node-tszgg\" (UID: \"5ffa0c6f-405d-4128-8557-fee8827ddf90\") " pod="calico-system/calico-node-tszgg" Sep 13 00:07:01.587370 containerd[1472]: time="2025-09-13T00:07:01.586852473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:01.587370 containerd[1472]: time="2025-09-13T00:07:01.587108250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:01.587370 containerd[1472]: time="2025-09-13T00:07:01.587129780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:01.587370 containerd[1472]: time="2025-09-13T00:07:01.587274729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:01.629994 systemd[1]: Started cri-containerd-36e812ac8c289de09d7791bae93da3faec88d72673877807fa2efc5a12d6f8ef.scope - libcontainer container 36e812ac8c289de09d7791bae93da3faec88d72673877807fa2efc5a12d6f8ef. Sep 13 00:07:01.649716 kubelet[2502]: E0913 00:07:01.649490 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.649716 kubelet[2502]: W0913 00:07:01.649524 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.652070 kubelet[2502]: E0913 00:07:01.651782 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.654455 kubelet[2502]: E0913 00:07:01.653772 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.654455 kubelet[2502]: W0913 00:07:01.653809 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.654455 kubelet[2502]: E0913 00:07:01.653862 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.656863 kubelet[2502]: E0913 00:07:01.656298 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.656863 kubelet[2502]: W0913 00:07:01.656854 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.657070 kubelet[2502]: E0913 00:07:01.656883 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.659219 kubelet[2502]: E0913 00:07:01.659177 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.659219 kubelet[2502]: W0913 00:07:01.659202 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.659420 kubelet[2502]: E0913 00:07:01.659396 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.668320 kubelet[2502]: E0913 00:07:01.668270 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.668320 kubelet[2502]: W0913 00:07:01.668303 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.668320 kubelet[2502]: E0913 00:07:01.668331 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.669838 kubelet[2502]: E0913 00:07:01.669811 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.669838 kubelet[2502]: W0913 00:07:01.669837 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.669939 kubelet[2502]: E0913 00:07:01.669875 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.672928 kubelet[2502]: E0913 00:07:01.672890 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.672928 kubelet[2502]: W0913 00:07:01.672916 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.672928 kubelet[2502]: E0913 00:07:01.672944 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.675222 kubelet[2502]: E0913 00:07:01.674635 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.675222 kubelet[2502]: W0913 00:07:01.675215 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.675443 kubelet[2502]: E0913 00:07:01.675245 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.682711 kubelet[2502]: E0913 00:07:01.682316 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.682711 kubelet[2502]: W0913 00:07:01.682348 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.682711 kubelet[2502]: E0913 00:07:01.682392 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.742977 containerd[1472]: time="2025-09-13T00:07:01.742850085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf7cc9b88-bfwcq,Uid:0caf781c-f956-4bd1-9df5-41359ca9833c,Namespace:calico-system,Attempt:0,} returns sandbox id \"36e812ac8c289de09d7791bae93da3faec88d72673877807fa2efc5a12d6f8ef\"" Sep 13 00:07:01.745475 kubelet[2502]: E0913 00:07:01.745269 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:01.748788 containerd[1472]: time="2025-09-13T00:07:01.748720725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:07:01.766809 kubelet[2502]: E0913 00:07:01.765233 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vq6x" podUID="44b8980a-9966-4148-82d4-7b2506ae2042" Sep 13 00:07:01.815039 kubelet[2502]: E0913 00:07:01.814933 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.815039 kubelet[2502]: W0913 00:07:01.814963 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.815039 kubelet[2502]: E0913 00:07:01.814994 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.815573 kubelet[2502]: E0913 00:07:01.815253 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.815573 kubelet[2502]: W0913 00:07:01.815274 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.815573 kubelet[2502]: E0913 00:07:01.815291 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.816292 kubelet[2502]: E0913 00:07:01.816137 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.816292 kubelet[2502]: W0913 00:07:01.816168 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.816292 kubelet[2502]: E0913 00:07:01.816185 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.816615 kubelet[2502]: E0913 00:07:01.816490 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.816615 kubelet[2502]: W0913 00:07:01.816503 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.816615 kubelet[2502]: E0913 00:07:01.816514 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.817098 kubelet[2502]: E0913 00:07:01.817020 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.817098 kubelet[2502]: W0913 00:07:01.817039 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.817098 kubelet[2502]: E0913 00:07:01.817052 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.817593 kubelet[2502]: E0913 00:07:01.817556 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.817593 kubelet[2502]: W0913 00:07:01.817588 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.818085 kubelet[2502]: E0913 00:07:01.817604 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.818085 kubelet[2502]: E0913 00:07:01.817910 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.818085 kubelet[2502]: W0913 00:07:01.817923 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.818085 kubelet[2502]: E0913 00:07:01.817938 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.818594 kubelet[2502]: E0913 00:07:01.818560 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.818594 kubelet[2502]: W0913 00:07:01.818574 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.818594 kubelet[2502]: E0913 00:07:01.818597 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.819352 kubelet[2502]: E0913 00:07:01.819101 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.819352 kubelet[2502]: W0913 00:07:01.819112 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.819352 kubelet[2502]: E0913 00:07:01.819125 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.819519 kubelet[2502]: E0913 00:07:01.819363 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.819519 kubelet[2502]: W0913 00:07:01.819371 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.819519 kubelet[2502]: E0913 00:07:01.819380 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.821011 kubelet[2502]: E0913 00:07:01.820990 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.821011 kubelet[2502]: W0913 00:07:01.821010 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.821308 kubelet[2502]: E0913 00:07:01.821022 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.821308 kubelet[2502]: E0913 00:07:01.821245 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.821308 kubelet[2502]: W0913 00:07:01.821255 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.821308 kubelet[2502]: E0913 00:07:01.821277 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.821982 kubelet[2502]: E0913 00:07:01.821868 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.821982 kubelet[2502]: W0913 00:07:01.821882 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.821982 kubelet[2502]: E0913 00:07:01.821894 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.822823 kubelet[2502]: E0913 00:07:01.822326 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.822823 kubelet[2502]: W0913 00:07:01.822340 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.822823 kubelet[2502]: E0913 00:07:01.822351 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.822823 kubelet[2502]: E0913 00:07:01.822564 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.822823 kubelet[2502]: W0913 00:07:01.822576 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.822823 kubelet[2502]: E0913 00:07:01.822589 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.822823 kubelet[2502]: E0913 00:07:01.822800 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.822823 kubelet[2502]: W0913 00:07:01.822807 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.822823 kubelet[2502]: E0913 00:07:01.822816 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.824343 kubelet[2502]: E0913 00:07:01.823881 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.824343 kubelet[2502]: W0913 00:07:01.823903 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.824343 kubelet[2502]: E0913 00:07:01.823915 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.825287 kubelet[2502]: E0913 00:07:01.824514 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.825287 kubelet[2502]: W0913 00:07:01.825282 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.825287 kubelet[2502]: E0913 00:07:01.825300 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.826047 kubelet[2502]: E0913 00:07:01.826028 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.826047 kubelet[2502]: W0913 00:07:01.826042 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.826180 kubelet[2502]: E0913 00:07:01.826055 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.827021 kubelet[2502]: E0913 00:07:01.826999 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.827021 kubelet[2502]: W0913 00:07:01.827014 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.827159 kubelet[2502]: E0913 00:07:01.827027 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.830333 containerd[1472]: time="2025-09-13T00:07:01.830290887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tszgg,Uid:5ffa0c6f-405d-4128-8557-fee8827ddf90,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:01.849085 kubelet[2502]: E0913 00:07:01.848692 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.849085 kubelet[2502]: W0913 00:07:01.848729 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.849085 kubelet[2502]: E0913 00:07:01.848781 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.849085 kubelet[2502]: I0913 00:07:01.848870 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/44b8980a-9966-4148-82d4-7b2506ae2042-registration-dir\") pod \"csi-node-driver-8vq6x\" (UID: \"44b8980a-9966-4148-82d4-7b2506ae2042\") " pod="calico-system/csi-node-driver-8vq6x" Sep 13 00:07:01.855458 kubelet[2502]: E0913 00:07:01.854780 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.855458 kubelet[2502]: W0913 00:07:01.854823 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.855458 kubelet[2502]: E0913 00:07:01.854862 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.855458 kubelet[2502]: I0913 00:07:01.854923 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/44b8980a-9966-4148-82d4-7b2506ae2042-socket-dir\") pod \"csi-node-driver-8vq6x\" (UID: \"44b8980a-9966-4148-82d4-7b2506ae2042\") " pod="calico-system/csi-node-driver-8vq6x" Sep 13 00:07:01.855458 kubelet[2502]: E0913 00:07:01.855393 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.855458 kubelet[2502]: W0913 00:07:01.855414 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.855458 kubelet[2502]: E0913 00:07:01.855438 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.863239 kubelet[2502]: I0913 00:07:01.855619 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/44b8980a-9966-4148-82d4-7b2506ae2042-varrun\") pod \"csi-node-driver-8vq6x\" (UID: \"44b8980a-9966-4148-82d4-7b2506ae2042\") " pod="calico-system/csi-node-driver-8vq6x" Sep 13 00:07:01.863239 kubelet[2502]: E0913 00:07:01.858959 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.863239 kubelet[2502]: W0913 00:07:01.858991 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.863239 kubelet[2502]: E0913 00:07:01.859027 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.863239 kubelet[2502]: E0913 00:07:01.859954 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.863239 kubelet[2502]: W0913 00:07:01.859974 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.863239 kubelet[2502]: E0913 00:07:01.859996 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.863239 kubelet[2502]: E0913 00:07:01.860272 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.863239 kubelet[2502]: W0913 00:07:01.860284 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.863637 kubelet[2502]: E0913 00:07:01.860298 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.863637 kubelet[2502]: I0913 00:07:01.860408 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44b8980a-9966-4148-82d4-7b2506ae2042-kubelet-dir\") pod \"csi-node-driver-8vq6x\" (UID: \"44b8980a-9966-4148-82d4-7b2506ae2042\") " pod="calico-system/csi-node-driver-8vq6x" Sep 13 00:07:01.863637 kubelet[2502]: E0913 00:07:01.860622 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.863637 kubelet[2502]: W0913 00:07:01.860635 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.863637 kubelet[2502]: E0913 00:07:01.860651 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.863637 kubelet[2502]: E0913 00:07:01.860910 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.863637 kubelet[2502]: W0913 00:07:01.860922 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.863637 kubelet[2502]: E0913 00:07:01.860935 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.863637 kubelet[2502]: E0913 00:07:01.861960 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.865250 kubelet[2502]: W0913 00:07:01.861976 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.865250 kubelet[2502]: E0913 00:07:01.861993 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.865250 kubelet[2502]: I0913 00:07:01.862044 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhj8\" (UniqueName: \"kubernetes.io/projected/44b8980a-9966-4148-82d4-7b2506ae2042-kube-api-access-6jhj8\") pod \"csi-node-driver-8vq6x\" (UID: \"44b8980a-9966-4148-82d4-7b2506ae2042\") " pod="calico-system/csi-node-driver-8vq6x" Sep 13 00:07:01.865250 kubelet[2502]: E0913 00:07:01.862351 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.865250 kubelet[2502]: W0913 00:07:01.862366 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.865250 kubelet[2502]: E0913 00:07:01.862380 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.865250 kubelet[2502]: E0913 00:07:01.862629 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.865250 kubelet[2502]: W0913 00:07:01.862640 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.865250 kubelet[2502]: E0913 00:07:01.862653 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.865642 kubelet[2502]: E0913 00:07:01.863005 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.865642 kubelet[2502]: W0913 00:07:01.863018 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.865642 kubelet[2502]: E0913 00:07:01.863032 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.865642 kubelet[2502]: E0913 00:07:01.863270 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.865642 kubelet[2502]: W0913 00:07:01.863282 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.865642 kubelet[2502]: E0913 00:07:01.863296 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.865642 kubelet[2502]: E0913 00:07:01.863541 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.865642 kubelet[2502]: W0913 00:07:01.863552 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.865642 kubelet[2502]: E0913 00:07:01.863566 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.865642 kubelet[2502]: E0913 00:07:01.863951 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.870221 kubelet[2502]: W0913 00:07:01.863964 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.870221 kubelet[2502]: E0913 00:07:01.863982 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.955367 containerd[1472]: time="2025-09-13T00:07:01.955210694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:01.957467 containerd[1472]: time="2025-09-13T00:07:01.955345704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:01.957467 containerd[1472]: time="2025-09-13T00:07:01.955372983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:01.957467 containerd[1472]: time="2025-09-13T00:07:01.955520313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:01.966426 kubelet[2502]: E0913 00:07:01.966331 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.966426 kubelet[2502]: W0913 00:07:01.966365 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.966426 kubelet[2502]: E0913 00:07:01.966406 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.967835 kubelet[2502]: E0913 00:07:01.967804 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.967835 kubelet[2502]: W0913 00:07:01.967833 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.968099 kubelet[2502]: E0913 00:07:01.967856 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.968333 kubelet[2502]: E0913 00:07:01.968197 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.968333 kubelet[2502]: W0913 00:07:01.968211 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.968333 kubelet[2502]: E0913 00:07:01.968227 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.968653 kubelet[2502]: E0913 00:07:01.968558 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.968653 kubelet[2502]: W0913 00:07:01.968576 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.968653 kubelet[2502]: E0913 00:07:01.968592 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.969033 kubelet[2502]: E0913 00:07:01.968918 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.969033 kubelet[2502]: W0913 00:07:01.968931 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.969033 kubelet[2502]: E0913 00:07:01.968946 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.970029 kubelet[2502]: E0913 00:07:01.969999 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.970029 kubelet[2502]: W0913 00:07:01.970021 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.970169 kubelet[2502]: E0913 00:07:01.970038 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.972027 kubelet[2502]: E0913 00:07:01.971991 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.972027 kubelet[2502]: W0913 00:07:01.972011 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.972027 kubelet[2502]: E0913 00:07:01.972029 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.972566 kubelet[2502]: E0913 00:07:01.972544 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.972566 kubelet[2502]: W0913 00:07:01.972565 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.972919 kubelet[2502]: E0913 00:07:01.972582 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.973193 kubelet[2502]: E0913 00:07:01.973173 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.973193 kubelet[2502]: W0913 00:07:01.973192 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.973315 kubelet[2502]: E0913 00:07:01.973209 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.975631 kubelet[2502]: E0913 00:07:01.975595 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.975631 kubelet[2502]: W0913 00:07:01.975619 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.975928 kubelet[2502]: E0913 00:07:01.975643 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.976964 kubelet[2502]: E0913 00:07:01.976000 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.976964 kubelet[2502]: W0913 00:07:01.976015 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.976964 kubelet[2502]: E0913 00:07:01.976033 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.977853 kubelet[2502]: E0913 00:07:01.977824 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.977853 kubelet[2502]: W0913 00:07:01.977844 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.978005 kubelet[2502]: E0913 00:07:01.977870 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.978276 kubelet[2502]: E0913 00:07:01.978258 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.978276 kubelet[2502]: W0913 00:07:01.978276 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.978390 kubelet[2502]: E0913 00:07:01.978292 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.978992 kubelet[2502]: E0913 00:07:01.978971 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.978992 kubelet[2502]: W0913 00:07:01.978991 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.979136 kubelet[2502]: E0913 00:07:01.979008 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.979757 kubelet[2502]: E0913 00:07:01.979733 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.979757 kubelet[2502]: W0913 00:07:01.979751 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.980044 kubelet[2502]: E0913 00:07:01.979768 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.980979 kubelet[2502]: E0913 00:07:01.980952 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.980979 kubelet[2502]: W0913 00:07:01.980973 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.981181 kubelet[2502]: E0913 00:07:01.980991 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.981665 kubelet[2502]: E0913 00:07:01.981641 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.981820 kubelet[2502]: W0913 00:07:01.981669 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.981820 kubelet[2502]: E0913 00:07:01.981712 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.982912 kubelet[2502]: E0913 00:07:01.982887 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.982912 kubelet[2502]: W0913 00:07:01.982909 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.983305 kubelet[2502]: E0913 00:07:01.983137 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.984250 kubelet[2502]: E0913 00:07:01.984222 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.984250 kubelet[2502]: W0913 00:07:01.984248 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.984448 kubelet[2502]: E0913 00:07:01.984267 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.985834 kubelet[2502]: E0913 00:07:01.985607 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.985834 kubelet[2502]: W0913 00:07:01.985642 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.985834 kubelet[2502]: E0913 00:07:01.985661 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:01.990583 kubelet[2502]: E0913 00:07:01.986560 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.990583 kubelet[2502]: W0913 00:07:01.986576 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.990583 kubelet[2502]: E0913 00:07:01.986594 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.990583 kubelet[2502]: E0913 00:07:01.988762 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.990583 kubelet[2502]: W0913 00:07:01.988784 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.990583 kubelet[2502]: E0913 00:07:01.988805 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.990583 kubelet[2502]: E0913 00:07:01.989059 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.990583 kubelet[2502]: W0913 00:07:01.989072 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.990583 kubelet[2502]: E0913 00:07:01.989088 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.990583 kubelet[2502]: E0913 00:07:01.990550 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.991037 kubelet[2502]: W0913 00:07:01.990567 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.991037 kubelet[2502]: E0913 00:07:01.990586 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:01.992917 kubelet[2502]: E0913 00:07:01.992277 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:01.992917 kubelet[2502]: W0913 00:07:01.992300 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:01.992917 kubelet[2502]: E0913 00:07:01.992357 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:02.030457 kubelet[2502]: E0913 00:07:02.028537 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:02.030457 kubelet[2502]: W0913 00:07:02.028571 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:02.030457 kubelet[2502]: E0913 00:07:02.028620 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:02.031980 systemd[1]: Started cri-containerd-238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015.scope - libcontainer container 238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015. Sep 13 00:07:02.101636 containerd[1472]: time="2025-09-13T00:07:02.101423942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tszgg,Uid:5ffa0c6f-405d-4128-8557-fee8827ddf90,Namespace:calico-system,Attempt:0,} returns sandbox id \"238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015\"" Sep 13 00:07:03.273963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838902417.mount: Deactivated successfully. Sep 13 00:07:04.013432 kubelet[2502]: E0913 00:07:04.013354 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vq6x" podUID="44b8980a-9966-4148-82d4-7b2506ae2042" Sep 13 00:07:04.309532 containerd[1472]: time="2025-09-13T00:07:04.308768383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:04.310746 containerd[1472]: time="2025-09-13T00:07:04.310655097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:07:04.311428 containerd[1472]: time="2025-09-13T00:07:04.311373234Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:04.314757 containerd[1472]: time="2025-09-13T00:07:04.314072336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:04.314922 containerd[1472]: time="2025-09-13T00:07:04.314781475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.56596125s" Sep 13 00:07:04.314922 containerd[1472]: time="2025-09-13T00:07:04.314821952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:07:04.317219 containerd[1472]: time="2025-09-13T00:07:04.316907409Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:07:04.348248 containerd[1472]: time="2025-09-13T00:07:04.348209477Z" level=info msg="CreateContainer within sandbox \"36e812ac8c289de09d7791bae93da3faec88d72673877807fa2efc5a12d6f8ef\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:07:04.358883 containerd[1472]: time="2025-09-13T00:07:04.358838054Z" level=info msg="CreateContainer within sandbox \"36e812ac8c289de09d7791bae93da3faec88d72673877807fa2efc5a12d6f8ef\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9eaa0f2aeeedb09612a4f7e37dd7a5dee09a48df5c110a52ad301a09aa475dc9\"" Sep 13 00:07:04.371062 containerd[1472]: time="2025-09-13T00:07:04.371016533Z" level=info msg="StartContainer for \"9eaa0f2aeeedb09612a4f7e37dd7a5dee09a48df5c110a52ad301a09aa475dc9\"" Sep 13 00:07:04.436197 systemd[1]: Started cri-containerd-9eaa0f2aeeedb09612a4f7e37dd7a5dee09a48df5c110a52ad301a09aa475dc9.scope - libcontainer container 9eaa0f2aeeedb09612a4f7e37dd7a5dee09a48df5c110a52ad301a09aa475dc9. Sep 13 00:07:04.503105 containerd[1472]: time="2025-09-13T00:07:04.503064833Z" level=info msg="StartContainer for \"9eaa0f2aeeedb09612a4f7e37dd7a5dee09a48df5c110a52ad301a09aa475dc9\" returns successfully" Sep 13 00:07:05.120170 kubelet[2502]: E0913 00:07:05.119892 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:05.151786 kubelet[2502]: E0913 00:07:05.151745 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.151786 kubelet[2502]: W0913 00:07:05.151780 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.152321 kubelet[2502]: E0913 00:07:05.151806 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.152321 kubelet[2502]: E0913 00:07:05.152083 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.152321 kubelet[2502]: W0913 00:07:05.152093 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.152321 kubelet[2502]: E0913 00:07:05.152105 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.152642 kubelet[2502]: E0913 00:07:05.152431 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.152642 kubelet[2502]: W0913 00:07:05.152443 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.152642 kubelet[2502]: E0913 00:07:05.152456 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:05.152800 kubelet[2502]: E0913 00:07:05.152708 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.152800 kubelet[2502]: W0913 00:07:05.152717 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.152800 kubelet[2502]: E0913 00:07:05.152726 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.152906 kubelet[2502]: E0913 00:07:05.152893 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.152906 kubelet[2502]: W0913 00:07:05.152904 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.153018 kubelet[2502]: E0913 00:07:05.152912 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.153095 kubelet[2502]: E0913 00:07:05.153075 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.153095 kubelet[2502]: W0913 00:07:05.153084 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.153388 kubelet[2502]: E0913 00:07:05.153095 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.153388 kubelet[2502]: E0913 00:07:05.153365 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.153388 kubelet[2502]: W0913 00:07:05.153376 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.153388 kubelet[2502]: E0913 00:07:05.153386 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.153649 kubelet[2502]: E0913 00:07:05.153563 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.153649 kubelet[2502]: W0913 00:07:05.153570 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.153649 kubelet[2502]: E0913 00:07:05.153578 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:05.153982 kubelet[2502]: E0913 00:07:05.153856 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.153982 kubelet[2502]: W0913 00:07:05.153864 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.153982 kubelet[2502]: E0913 00:07:05.153872 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.154339 kubelet[2502]: E0913 00:07:05.154077 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.154339 kubelet[2502]: W0913 00:07:05.154086 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.154339 kubelet[2502]: E0913 00:07:05.154096 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.154339 kubelet[2502]: E0913 00:07:05.154336 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.154339 kubelet[2502]: W0913 00:07:05.154343 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.154862 kubelet[2502]: E0913 00:07:05.154350 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.154862 kubelet[2502]: E0913 00:07:05.154511 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.154862 kubelet[2502]: W0913 00:07:05.154517 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.154862 kubelet[2502]: E0913 00:07:05.154525 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.154862 kubelet[2502]: E0913 00:07:05.154811 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.154862 kubelet[2502]: W0913 00:07:05.154818 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.154862 kubelet[2502]: E0913 00:07:05.154825 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:05.155061 kubelet[2502]: E0913 00:07:05.155047 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.155091 kubelet[2502]: W0913 00:07:05.155060 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.155091 kubelet[2502]: E0913 00:07:05.155070 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.155313 kubelet[2502]: E0913 00:07:05.155302 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.155313 kubelet[2502]: W0913 00:07:05.155312 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.155384 kubelet[2502]: E0913 00:07:05.155320 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.210150 kubelet[2502]: E0913 00:07:05.210111 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.210150 kubelet[2502]: W0913 00:07:05.210140 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.210399 kubelet[2502]: E0913 00:07:05.210167 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.210442 kubelet[2502]: E0913 00:07:05.210421 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.210493 kubelet[2502]: W0913 00:07:05.210443 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.210493 kubelet[2502]: E0913 00:07:05.210460 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.210760 kubelet[2502]: E0913 00:07:05.210737 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.210809 kubelet[2502]: W0913 00:07:05.210761 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.210809 kubelet[2502]: E0913 00:07:05.210775 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:05.211102 kubelet[2502]: E0913 00:07:05.211089 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.211102 kubelet[2502]: W0913 00:07:05.211101 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.211196 kubelet[2502]: E0913 00:07:05.211113 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.211337 kubelet[2502]: E0913 00:07:05.211325 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.211408 kubelet[2502]: W0913 00:07:05.211339 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.211443 kubelet[2502]: E0913 00:07:05.211416 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.211706 kubelet[2502]: E0913 00:07:05.211694 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.211706 kubelet[2502]: W0913 00:07:05.211705 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.211815 kubelet[2502]: E0913 00:07:05.211716 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.211998 kubelet[2502]: E0913 00:07:05.211983 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.211998 kubelet[2502]: W0913 00:07:05.211996 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.212130 kubelet[2502]: E0913 00:07:05.212007 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.212272 kubelet[2502]: E0913 00:07:05.212260 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.212310 kubelet[2502]: W0913 00:07:05.212272 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.212310 kubelet[2502]: E0913 00:07:05.212305 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:05.212593 kubelet[2502]: E0913 00:07:05.212575 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.212593 kubelet[2502]: W0913 00:07:05.212587 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.212719 kubelet[2502]: E0913 00:07:05.212598 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.213200 kubelet[2502]: E0913 00:07:05.213037 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.213200 kubelet[2502]: W0913 00:07:05.213051 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.213200 kubelet[2502]: E0913 00:07:05.213063 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.213483 kubelet[2502]: E0913 00:07:05.213362 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.213483 kubelet[2502]: W0913 00:07:05.213373 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.213483 kubelet[2502]: E0913 00:07:05.213384 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.214181 kubelet[2502]: E0913 00:07:05.213802 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.214181 kubelet[2502]: W0913 00:07:05.213819 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.214181 kubelet[2502]: E0913 00:07:05.213834 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.214875 kubelet[2502]: E0913 00:07:05.214668 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.214875 kubelet[2502]: W0913 00:07:05.214692 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.214875 kubelet[2502]: E0913 00:07:05.214704 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:05.215198 kubelet[2502]: E0913 00:07:05.215086 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.215198 kubelet[2502]: W0913 00:07:05.215098 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.215198 kubelet[2502]: E0913 00:07:05.215109 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.215556 kubelet[2502]: E0913 00:07:05.215468 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.215556 kubelet[2502]: W0913 00:07:05.215480 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.215556 kubelet[2502]: E0913 00:07:05.215491 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.216197 kubelet[2502]: E0913 00:07:05.215973 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.216197 kubelet[2502]: W0913 00:07:05.216008 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.216197 kubelet[2502]: E0913 00:07:05.216023 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.217104 kubelet[2502]: E0913 00:07:05.216631 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.217104 kubelet[2502]: W0913 00:07:05.216650 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.217104 kubelet[2502]: E0913 00:07:05.216666 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:07:05.217487 kubelet[2502]: E0913 00:07:05.217471 2502 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:07:05.217958 kubelet[2502]: W0913 00:07:05.217905 2502 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:07:05.217958 kubelet[2502]: E0913 00:07:05.217927 2502 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:07:05.760126 containerd[1472]: time="2025-09-13T00:07:05.760068572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:05.761188 containerd[1472]: time="2025-09-13T00:07:05.761141805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:07:05.761799 containerd[1472]: time="2025-09-13T00:07:05.761767961Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:05.774241 containerd[1472]: time="2025-09-13T00:07:05.773920875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:05.775070 containerd[1472]: time="2025-09-13T00:07:05.774992516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.458051562s" Sep 13 00:07:05.775070 containerd[1472]: time="2025-09-13T00:07:05.775051787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:07:05.779754 containerd[1472]: time="2025-09-13T00:07:05.779711418Z" level=info msg="CreateContainer within sandbox \"238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:07:05.804659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630817260.mount: Deactivated successfully. Sep 13 00:07:05.810052 containerd[1472]: time="2025-09-13T00:07:05.810013752Z" level=info msg="CreateContainer within sandbox \"238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db\"" Sep 13 00:07:05.810773 containerd[1472]: time="2025-09-13T00:07:05.810737357Z" level=info msg="StartContainer for \"045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db\"" Sep 13 00:07:05.858913 systemd[1]: Started cri-containerd-045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db.scope - libcontainer container 045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db. Sep 13 00:07:05.892879 containerd[1472]: time="2025-09-13T00:07:05.892743807Z" level=info msg="StartContainer for \"045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db\" returns successfully" Sep 13 00:07:05.907446 systemd[1]: cri-containerd-045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db.scope: Deactivated successfully. 
Sep 13 00:07:05.977967 containerd[1472]: time="2025-09-13T00:07:05.954111780Z" level=info msg="shim disconnected" id=045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db namespace=k8s.io Sep 13 00:07:05.978183 containerd[1472]: time="2025-09-13T00:07:05.977977623Z" level=warning msg="cleaning up after shim disconnected" id=045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db namespace=k8s.io Sep 13 00:07:05.978183 containerd[1472]: time="2025-09-13T00:07:05.978000683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:06.013856 kubelet[2502]: E0913 00:07:06.013720 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vq6x" podUID="44b8980a-9966-4148-82d4-7b2506ae2042" Sep 13 00:07:06.123119 kubelet[2502]: I0913 00:07:06.123080 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:06.123675 kubelet[2502]: E0913 00:07:06.123537 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:06.125562 containerd[1472]: time="2025-09-13T00:07:06.125520533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:07:06.155775 kubelet[2502]: I0913 00:07:06.154123 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bf7cc9b88-bfwcq" podStartSLOduration=2.58492724 podStartE2EDuration="5.154101766s" podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="2025-09-13 00:07:01.7472931 +0000 UTC m=+20.887290824" lastFinishedPulling="2025-09-13 00:07:04.316467556 +0000 UTC m=+23.456465350" observedRunningTime="2025-09-13 00:07:05.140594097 +0000 UTC m=+24.280591842" watchObservedRunningTime="2025-09-13 00:07:06.154101766 +0000 UTC m=+25.294099510" Sep 13 00:07:06.334944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-045f75be598f42680a2a5ba93d6d20e3a75ad4cb5389b5c06cfdd1079ea130db-rootfs.mount: Deactivated successfully. 
Sep 13 00:07:08.012979 kubelet[2502]: E0913 00:07:08.012895 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vq6x" podUID="44b8980a-9966-4148-82d4-7b2506ae2042" Sep 13 00:07:09.795436 containerd[1472]: time="2025-09-13T00:07:09.794548839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:09.795436 containerd[1472]: time="2025-09-13T00:07:09.795172249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:07:09.795436 containerd[1472]: time="2025-09-13T00:07:09.795347944Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:09.797560 containerd[1472]: time="2025-09-13T00:07:09.797532450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:09.798627 containerd[1472]: time="2025-09-13T00:07:09.798548430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.672982639s" Sep 13 00:07:09.798770 containerd[1472]: time="2025-09-13T00:07:09.798751184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:07:09.803538 containerd[1472]: time="2025-09-13T00:07:09.803487697Z" level=info msg="CreateContainer within sandbox \"238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:07:09.818528 containerd[1472]: time="2025-09-13T00:07:09.818470933Z" level=info msg="CreateContainer within sandbox \"238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7\"" Sep 13 00:07:09.819618 containerd[1472]: time="2025-09-13T00:07:09.819579851Z" level=info msg="StartContainer for \"4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7\"" Sep 13 00:07:09.919670 systemd[1]: run-containerd-runc-k8s.io-4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7-runc.r99V6g.mount: Deactivated successfully. Sep 13 00:07:09.927947 systemd[1]: Started cri-containerd-4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7.scope - libcontainer container 4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7. 
Sep 13 00:07:09.974986 containerd[1472]: time="2025-09-13T00:07:09.974921817Z" level=info msg="StartContainer for \"4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7\" returns successfully" Sep 13 00:07:10.014136 kubelet[2502]: E0913 00:07:10.013613 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vq6x" podUID="44b8980a-9966-4148-82d4-7b2506ae2042" Sep 13 00:07:10.592833 systemd[1]: cri-containerd-4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7.scope: Deactivated successfully. Sep 13 00:07:10.624638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7-rootfs.mount: Deactivated successfully. Sep 13 00:07:10.627444 containerd[1472]: time="2025-09-13T00:07:10.627199127Z" level=info msg="shim disconnected" id=4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7 namespace=k8s.io Sep 13 00:07:10.627444 containerd[1472]: time="2025-09-13T00:07:10.627271367Z" level=warning msg="cleaning up after shim disconnected" id=4eec2ab4f07980005255d4850f7a0a53679fd61a0cd3d4598bc968f96161b4c7 namespace=k8s.io Sep 13 00:07:10.627444 containerd[1472]: time="2025-09-13T00:07:10.627280460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:07:10.678651 kubelet[2502]: I0913 00:07:10.678618 2502 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:07:10.733030 systemd[1]: Created slice kubepods-burstable-podb7cad2b0_ab12_4053_971a_70254e4b0fc9.slice - libcontainer container kubepods-burstable-podb7cad2b0_ab12_4053_971a_70254e4b0fc9.slice. Sep 13 00:07:10.754716 systemd[1]: Created slice kubepods-besteffort-pod7ea70e26_63af_4814_b06f_8478ac02d9b8.slice - libcontainer container kubepods-besteffort-pod7ea70e26_63af_4814_b06f_8478ac02d9b8.slice. Sep 13 00:07:10.776795 systemd[1]: Created slice kubepods-burstable-pod9b3b4717_bce4_4f0b_a8c4_81a0865218a2.slice - libcontainer container kubepods-burstable-pod9b3b4717_bce4_4f0b_a8c4_81a0865218a2.slice. Sep 13 00:07:10.791668 systemd[1]: Created slice kubepods-besteffort-pod86461376_5af6_4498_96f8_f26687392906.slice - libcontainer container kubepods-besteffort-pod86461376_5af6_4498_96f8_f26687392906.slice. Sep 13 00:07:10.810145 systemd[1]: Created slice kubepods-besteffort-pod557dbfe6_fe21_439e_9b11_49c1ef45886a.slice - libcontainer container kubepods-besteffort-pod557dbfe6_fe21_439e_9b11_49c1ef45886a.slice. Sep 13 00:07:10.818062 systemd[1]: Created slice kubepods-besteffort-podb0ed41cd_297f_49fe_95fb_ff8bf9f215d9.slice - libcontainer container kubepods-besteffort-podb0ed41cd_297f_49fe_95fb_ff8bf9f215d9.slice. Sep 13 00:07:10.830652 systemd[1]: Created slice kubepods-besteffort-pod95fc76f1_d3f0_4dcc_90ad_49b99c0129ac.slice - libcontainer container kubepods-besteffort-pod95fc76f1_d3f0_4dcc_90ad_49b99c0129ac.slice. 
Sep 13 00:07:10.853143 kubelet[2502]: I0913 00:07:10.852848 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b3b4717-bce4-4f0b-a8c4-81a0865218a2-config-volume\") pod \"coredns-674b8bbfcf-zf5bf\" (UID: \"9b3b4717-bce4-4f0b-a8c4-81a0865218a2\") " pod="kube-system/coredns-674b8bbfcf-zf5bf" Sep 13 00:07:10.854267 kubelet[2502]: I0913 00:07:10.854230 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-whisker-ca-bundle\") pod \"whisker-d4c55f4b-h9l8z\" (UID: \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\") " pod="calico-system/whisker-d4c55f4b-h9l8z" Sep 13 00:07:10.854633 kubelet[2502]: I0913 00:07:10.854611 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxbx\" (UniqueName: \"kubernetes.io/projected/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-kube-api-access-wzxbx\") pod \"whisker-d4c55f4b-h9l8z\" (UID: \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\") " pod="calico-system/whisker-d4c55f4b-h9l8z" Sep 13 00:07:10.854920 kubelet[2502]: I0913 00:07:10.854871 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0ed41cd-297f-49fe-95fb-ff8bf9f215d9-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-rm7dn\" (UID: \"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9\") " pod="calico-system/goldmane-54d579b49d-rm7dn" Sep 13 00:07:10.855117 kubelet[2502]: I0913 00:07:10.855100 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7cad2b0-ab12-4053-971a-70254e4b0fc9-config-volume\") pod \"coredns-674b8bbfcf-qmk6c\" (UID: \"b7cad2b0-ab12-4053-971a-70254e4b0fc9\") " pod="kube-system/coredns-674b8bbfcf-qmk6c" Sep 13 00:07:10.855352 kubelet[2502]: I0913 00:07:10.855324 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0ed41cd-297f-49fe-95fb-ff8bf9f215d9-config\") pod \"goldmane-54d579b49d-rm7dn\" (UID: \"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9\") " pod="calico-system/goldmane-54d579b49d-rm7dn" Sep 13 00:07:10.855566 kubelet[2502]: I0913 00:07:10.855551 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b0ed41cd-297f-49fe-95fb-ff8bf9f215d9-goldmane-key-pair\") pod \"goldmane-54d579b49d-rm7dn\" (UID: \"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9\") " pod="calico-system/goldmane-54d579b49d-rm7dn" Sep 13 00:07:10.855733 kubelet[2502]: I0913 00:07:10.855622 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mg6x\" (UniqueName: \"kubernetes.io/projected/b0ed41cd-297f-49fe-95fb-ff8bf9f215d9-kube-api-access-8mg6x\") pod \"goldmane-54d579b49d-rm7dn\" (UID: \"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9\") " pod="calico-system/goldmane-54d579b49d-rm7dn" Sep 13 00:07:10.855733 kubelet[2502]: I0913 00:07:10.855651 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9mzw\" (UniqueName: \"kubernetes.io/projected/9b3b4717-bce4-4f0b-a8c4-81a0865218a2-kube-api-access-w9mzw\") pod \"coredns-674b8bbfcf-zf5bf\" (UID: 
\"9b3b4717-bce4-4f0b-a8c4-81a0865218a2\") " pod="kube-system/coredns-674b8bbfcf-zf5bf" Sep 13 00:07:10.856100 kubelet[2502]: I0913 00:07:10.856048 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkxs\" (UniqueName: \"kubernetes.io/projected/b7cad2b0-ab12-4053-971a-70254e4b0fc9-kube-api-access-phkxs\") pod \"coredns-674b8bbfcf-qmk6c\" (UID: \"b7cad2b0-ab12-4053-971a-70254e4b0fc9\") " pod="kube-system/coredns-674b8bbfcf-qmk6c" Sep 13 00:07:10.857744 kubelet[2502]: I0913 00:07:10.856250 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z9dk\" (UniqueName: \"kubernetes.io/projected/7ea70e26-63af-4814-b06f-8478ac02d9b8-kube-api-access-7z9dk\") pod \"calico-apiserver-ccb5685db-9nxhh\" (UID: \"7ea70e26-63af-4814-b06f-8478ac02d9b8\") " pod="calico-apiserver/calico-apiserver-ccb5685db-9nxhh" Sep 13 00:07:10.857744 kubelet[2502]: I0913 00:07:10.856279 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86461376-5af6-4498-96f8-f26687392906-tigera-ca-bundle\") pod \"calico-kube-controllers-767f8569c7-m5z44\" (UID: \"86461376-5af6-4498-96f8-f26687392906\") " pod="calico-system/calico-kube-controllers-767f8569c7-m5z44" Sep 13 00:07:10.857744 kubelet[2502]: I0913 00:07:10.856299 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/557dbfe6-fe21-439e-9b11-49c1ef45886a-calico-apiserver-certs\") pod \"calico-apiserver-ccb5685db-bjcrf\" (UID: \"557dbfe6-fe21-439e-9b11-49c1ef45886a\") " pod="calico-apiserver/calico-apiserver-ccb5685db-bjcrf" Sep 13 00:07:10.857744 kubelet[2502]: I0913 00:07:10.856323 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-whisker-backend-key-pair\") pod \"whisker-d4c55f4b-h9l8z\" (UID: \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\") " pod="calico-system/whisker-d4c55f4b-h9l8z" Sep 13 00:07:10.857744 kubelet[2502]: I0913 00:07:10.856362 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ea70e26-63af-4814-b06f-8478ac02d9b8-calico-apiserver-certs\") pod \"calico-apiserver-ccb5685db-9nxhh\" (UID: \"7ea70e26-63af-4814-b06f-8478ac02d9b8\") " pod="calico-apiserver/calico-apiserver-ccb5685db-9nxhh" Sep 13 00:07:10.857980 kubelet[2502]: I0913 00:07:10.856385 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nncdc\" (UniqueName: \"kubernetes.io/projected/86461376-5af6-4498-96f8-f26687392906-kube-api-access-nncdc\") pod \"calico-kube-controllers-767f8569c7-m5z44\" (UID: \"86461376-5af6-4498-96f8-f26687392906\") " pod="calico-system/calico-kube-controllers-767f8569c7-m5z44" Sep 13 00:07:10.857980 kubelet[2502]: I0913 00:07:10.856414 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs4h7\" (UniqueName: \"kubernetes.io/projected/557dbfe6-fe21-439e-9b11-49c1ef45886a-kube-api-access-rs4h7\") pod \"calico-apiserver-ccb5685db-bjcrf\" (UID: \"557dbfe6-fe21-439e-9b11-49c1ef45886a\") " pod="calico-apiserver/calico-apiserver-ccb5685db-bjcrf" Sep 13 00:07:11.041673 
kubelet[2502]: E0913 00:07:11.041019 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:11.042818 containerd[1472]: time="2025-09-13T00:07:11.041926486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qmk6c,Uid:b7cad2b0-ab12-4053-971a-70254e4b0fc9,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:11.086803 kubelet[2502]: E0913 00:07:11.086442 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:11.089597 containerd[1472]: time="2025-09-13T00:07:11.089129408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccb5685db-9nxhh,Uid:7ea70e26-63af-4814-b06f-8478ac02d9b8,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:07:11.095566 containerd[1472]: time="2025-09-13T00:07:11.089470417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zf5bf,Uid:9b3b4717-bce4-4f0b-a8c4-81a0865218a2,Namespace:kube-system,Attempt:0,}" Sep 13 00:07:11.104110 containerd[1472]: time="2025-09-13T00:07:11.103367641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-767f8569c7-m5z44,Uid:86461376-5af6-4498-96f8-f26687392906,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:11.117222 containerd[1472]: time="2025-09-13T00:07:11.116870041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccb5685db-bjcrf,Uid:557dbfe6-fe21-439e-9b11-49c1ef45886a,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:07:11.131170 containerd[1472]: time="2025-09-13T00:07:11.131120469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rm7dn,Uid:b0ed41cd-297f-49fe-95fb-ff8bf9f215d9,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:11.137089 containerd[1472]: time="2025-09-13T00:07:11.137028569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4c55f4b-h9l8z,Uid:95fc76f1-d3f0-4dcc-90ad-49b99c0129ac,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:11.161109 containerd[1472]: time="2025-09-13T00:07:11.161019624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:07:11.493804 containerd[1472]: time="2025-09-13T00:07:11.493555876Z" level=error msg="Failed to destroy network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.494727 containerd[1472]: time="2025-09-13T00:07:11.494115886Z" level=error msg="encountered an error cleaning up failed sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.494727 containerd[1472]: time="2025-09-13T00:07:11.494619275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccb5685db-bjcrf,Uid:557dbfe6-fe21-439e-9b11-49c1ef45886a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.504031 containerd[1472]: time="2025-09-13T00:07:11.503744573Z" level=error msg="Failed to destroy network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.504354 kubelet[2502]: E0913 00:07:11.504312 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.504439 kubelet[2502]: E0913 00:07:11.504388 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccb5685db-bjcrf" Sep 13 00:07:11.504439 kubelet[2502]: E0913 00:07:11.504416 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccb5685db-bjcrf" Sep 13 00:07:11.504548 kubelet[2502]: E0913 00:07:11.504494 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccb5685db-bjcrf_calico-apiserver(557dbfe6-fe21-439e-9b11-49c1ef45886a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccb5685db-bjcrf_calico-apiserver(557dbfe6-fe21-439e-9b11-49c1ef45886a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccb5685db-bjcrf" podUID="557dbfe6-fe21-439e-9b11-49c1ef45886a" Sep 13 00:07:11.509029 containerd[1472]: time="2025-09-13T00:07:11.507964980Z" level=error msg="encountered an error cleaning up failed sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.509029 containerd[1472]: time="2025-09-13T00:07:11.508041557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccb5685db-9nxhh,Uid:7ea70e26-63af-4814-b06f-8478ac02d9b8,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.509259 kubelet[2502]: E0913 00:07:11.508445 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.509259 kubelet[2502]: E0913 00:07:11.508531 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccb5685db-9nxhh" Sep 13 00:07:11.509259 kubelet[2502]: E0913 00:07:11.508565 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccb5685db-9nxhh" Sep 13 00:07:11.512361 kubelet[2502]: E0913 00:07:11.508817 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccb5685db-9nxhh_calico-apiserver(7ea70e26-63af-4814-b06f-8478ac02d9b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccb5685db-9nxhh_calico-apiserver(7ea70e26-63af-4814-b06f-8478ac02d9b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccb5685db-9nxhh" podUID="7ea70e26-63af-4814-b06f-8478ac02d9b8" Sep 13 00:07:11.520151 containerd[1472]: time="2025-09-13T00:07:11.520107827Z" level=error msg="Failed to destroy network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.521383 containerd[1472]: time="2025-09-13T00:07:11.521037140Z" level=error msg="Failed to destroy network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.522160 containerd[1472]: time="2025-09-13T00:07:11.522115269Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.523392 containerd[1472]: time="2025-09-13T00:07:11.522569869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qmk6c,Uid:b7cad2b0-ab12-4053-971a-70254e4b0fc9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.523769 containerd[1472]: time="2025-09-13T00:07:11.523671620Z" level=error msg="Failed to destroy network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.524037 containerd[1472]: time="2025-09-13T00:07:11.524012842Z" level=error msg="encountered an error cleaning up failed sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.524800 kubelet[2502]: E0913 00:07:11.524214 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.524800 kubelet[2502]: E0913 00:07:11.524280 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qmk6c" Sep 13 00:07:11.524800 kubelet[2502]: E0913 00:07:11.524302 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qmk6c" Sep 13 00:07:11.524994 kubelet[2502]: E0913 00:07:11.524395 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qmk6c_kube-system(b7cad2b0-ab12-4053-971a-70254e4b0fc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qmk6c_kube-system(b7cad2b0-ab12-4053-971a-70254e4b0fc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qmk6c" podUID="b7cad2b0-ab12-4053-971a-70254e4b0fc9" Sep 13 00:07:11.526392 containerd[1472]: time="2025-09-13T00:07:11.525940916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zf5bf,Uid:9b3b4717-bce4-4f0b-a8c4-81a0865218a2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.526635 containerd[1472]: time="2025-09-13T00:07:11.524047009Z" level=error msg="encountered an error cleaning up failed sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.526635 containerd[1472]: time="2025-09-13T00:07:11.526568976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rm7dn,Uid:b0ed41cd-297f-49fe-95fb-ff8bf9f215d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.527052 kubelet[2502]: E0913 00:07:11.526858 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.527052 kubelet[2502]: E0913 00:07:11.526919 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rm7dn" Sep 13 00:07:11.527052 kubelet[2502]: E0913 00:07:11.526942 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-rm7dn" Sep 13 00:07:11.527342 kubelet[2502]: E0913 00:07:11.527000 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-rm7dn_calico-system(b0ed41cd-297f-49fe-95fb-ff8bf9f215d9)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-rm7dn_calico-system(b0ed41cd-297f-49fe-95fb-ff8bf9f215d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-rm7dn" podUID="b0ed41cd-297f-49fe-95fb-ff8bf9f215d9" Sep 13 00:07:11.527342 kubelet[2502]: E0913 00:07:11.527044 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.527342 kubelet[2502]: E0913 00:07:11.527061 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zf5bf" Sep 13 00:07:11.527541 kubelet[2502]: E0913 00:07:11.527072 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zf5bf" Sep 13 00:07:11.527541 kubelet[2502]: E0913 00:07:11.527095 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zf5bf_kube-system(9b3b4717-bce4-4f0b-a8c4-81a0865218a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zf5bf_kube-system(9b3b4717-bce4-4f0b-a8c4-81a0865218a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zf5bf" podUID="9b3b4717-bce4-4f0b-a8c4-81a0865218a2" Sep 13 00:07:11.540711 containerd[1472]: time="2025-09-13T00:07:11.540623140Z" level=error msg="Failed to destroy network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.541123 containerd[1472]: time="2025-09-13T00:07:11.541086307Z" level=error msg="encountered an error cleaning up failed sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.541780 containerd[1472]: time="2025-09-13T00:07:11.541729763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-767f8569c7-m5z44,Uid:86461376-5af6-4498-96f8-f26687392906,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.542130 kubelet[2502]: E0913 00:07:11.542077 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.542198 kubelet[2502]: E0913 00:07:11.542167 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-767f8569c7-m5z44" Sep 13 00:07:11.542238 kubelet[2502]: E0913 00:07:11.542199 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-767f8569c7-m5z44" Sep 13 00:07:11.542328 kubelet[2502]: E0913 00:07:11.542280 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-767f8569c7-m5z44_calico-system(86461376-5af6-4498-96f8-f26687392906)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-767f8569c7-m5z44_calico-system(86461376-5af6-4498-96f8-f26687392906)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-767f8569c7-m5z44" podUID="86461376-5af6-4498-96f8-f26687392906" Sep 13 00:07:11.558252 containerd[1472]: time="2025-09-13T00:07:11.557732898Z" level=error msg="Failed to destroy network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.558252 containerd[1472]: time="2025-09-13T00:07:11.558087824Z" level=error msg="encountered an error cleaning up failed sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.558252 containerd[1472]: time="2025-09-13T00:07:11.558145548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d4c55f4b-h9l8z,Uid:95fc76f1-d3f0-4dcc-90ad-49b99c0129ac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.558485 kubelet[2502]: E0913 00:07:11.558425 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:11.558578 kubelet[2502]: E0913 00:07:11.558504 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d4c55f4b-h9l8z" Sep 13 00:07:11.558578 kubelet[2502]: E0913 00:07:11.558528 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d4c55f4b-h9l8z" Sep 13 00:07:11.560400 kubelet[2502]: E0913 00:07:11.558591 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d4c55f4b-h9l8z_calico-system(95fc76f1-d3f0-4dcc-90ad-49b99c0129ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d4c55f4b-h9l8z_calico-system(95fc76f1-d3f0-4dcc-90ad-49b99c0129ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d4c55f4b-h9l8z" podUID="95fc76f1-d3f0-4dcc-90ad-49b99c0129ac" Sep 13 00:07:12.022044 systemd[1]: Created slice kubepods-besteffort-pod44b8980a_9966_4148_82d4_7b2506ae2042.slice - libcontainer container kubepods-besteffort-pod44b8980a_9966_4148_82d4_7b2506ae2042.slice. 
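
Every sandbox failure above repeats the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted, and at this point that container has not started yet (its image pull only began at 00:07:11.161). Below is a minimal diagnostic sketch of that check, assuming only the path quoted in the error text; it is illustrative, not the plugin's actual code.

#!/usr/bin/env python3
"""Sketch of the readiness condition implied by the errors above.
The path comes from the error text; everything else is an assumption."""
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node at startup

def calico_node_ready() -> bool:
    try:
        with open(NODENAME_FILE) as f:
            print(f"calico nodename: {f.read().strip()}")
        return True
    except FileNotFoundError:
        # Same condition the CNI plugin reports: calico/node is not running
        # (or has not mounted /var/lib/calico/), so ADD/DEL calls fail.
        print(f"stat {NODENAME_FILE}: no such file or directory", file=sys.stderr)
        return False

if __name__ == "__main__":
    sys.exit(0 if calico_node_ready() else 1)

Until that file exists, the kubelet keeps retrying CreatePodSandbox for each pending pod and logging the CreatePodSandboxError entries seen above and below.
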
Sep 13 00:07:12.025474 containerd[1472]: time="2025-09-13T00:07:12.025426282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vq6x,Uid:44b8980a-9966-4148-82d4-7b2506ae2042,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:12.128458 containerd[1472]: time="2025-09-13T00:07:12.128191674Z" level=error msg="Failed to destroy network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.131745 containerd[1472]: time="2025-09-13T00:07:12.129160445Z" level=error msg="encountered an error cleaning up failed sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.131745 containerd[1472]: time="2025-09-13T00:07:12.129223054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vq6x,Uid:44b8980a-9966-4148-82d4-7b2506ae2042,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.131945 kubelet[2502]: E0913 00:07:12.129895 2502 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.131945 kubelet[2502]: E0913 00:07:12.129965 2502 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vq6x" Sep 13 00:07:12.131945 kubelet[2502]: E0913 00:07:12.129988 2502 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vq6x" Sep 13 00:07:12.132290 kubelet[2502]: E0913 00:07:12.130048 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8vq6x_calico-system(44b8980a-9966-4148-82d4-7b2506ae2042)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8vq6x_calico-system(44b8980a-9966-4148-82d4-7b2506ae2042)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vq6x" podUID="44b8980a-9966-4148-82d4-7b2506ae2042" Sep 13 00:07:12.134455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36-shm.mount: Deactivated successfully. Sep 13 00:07:12.149600 kubelet[2502]: I0913 00:07:12.149569 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:12.153116 kubelet[2502]: I0913 00:07:12.152224 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:12.157906 containerd[1472]: time="2025-09-13T00:07:12.156918972Z" level=info msg="StopPodSandbox for \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\"" Sep 13 00:07:12.159651 containerd[1472]: time="2025-09-13T00:07:12.158382449Z" level=info msg="StopPodSandbox for \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\"" Sep 13 00:07:12.162589 containerd[1472]: time="2025-09-13T00:07:12.161920032Z" level=info msg="Ensure that sandbox 8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa in task-service has been cleanup successfully" Sep 13 00:07:12.163348 containerd[1472]: time="2025-09-13T00:07:12.161919790Z" level=info msg="Ensure that sandbox c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def in task-service has been cleanup successfully" Sep 13 00:07:12.168087 kubelet[2502]: I0913 00:07:12.167885 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:12.170711 containerd[1472]: time="2025-09-13T00:07:12.170579143Z" level=info msg="StopPodSandbox for \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\"" Sep 13 00:07:12.171720 containerd[1472]: time="2025-09-13T00:07:12.171456811Z" level=info msg="Ensure that sandbox a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47 in task-service has been cleanup successfully" Sep 13 00:07:12.177241 kubelet[2502]: I0913 00:07:12.177055 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:12.179025 containerd[1472]: time="2025-09-13T00:07:12.178984634Z" level=info msg="StopPodSandbox for \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\"" Sep 13 00:07:12.179372 containerd[1472]: time="2025-09-13T00:07:12.179203840Z" level=info msg="Ensure that sandbox 6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9 in task-service has been cleanup successfully" Sep 13 00:07:12.186706 kubelet[2502]: I0913 00:07:12.186630 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:12.190271 containerd[1472]: time="2025-09-13T00:07:12.190179617Z" level=info msg="StopPodSandbox for \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\"" Sep 13 00:07:12.190617 containerd[1472]: time="2025-09-13T00:07:12.190391713Z" level=info msg="Ensure that sandbox 
f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d in task-service has been cleanup successfully" Sep 13 00:07:12.194001 kubelet[2502]: I0913 00:07:12.193965 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:12.194838 containerd[1472]: time="2025-09-13T00:07:12.194773006Z" level=info msg="StopPodSandbox for \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\"" Sep 13 00:07:12.196231 containerd[1472]: time="2025-09-13T00:07:12.196176876Z" level=info msg="Ensure that sandbox 36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64 in task-service has been cleanup successfully" Sep 13 00:07:12.202435 kubelet[2502]: I0913 00:07:12.202396 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:12.204335 containerd[1472]: time="2025-09-13T00:07:12.204186913Z" level=info msg="StopPodSandbox for \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\"" Sep 13 00:07:12.204675 containerd[1472]: time="2025-09-13T00:07:12.204627094Z" level=info msg="Ensure that sandbox 2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed in task-service has been cleanup successfully" Sep 13 00:07:12.209266 kubelet[2502]: I0913 00:07:12.209194 2502 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:12.215528 containerd[1472]: time="2025-09-13T00:07:12.215405331Z" level=info msg="StopPodSandbox for \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\"" Sep 13 00:07:12.215672 containerd[1472]: time="2025-09-13T00:07:12.215592909Z" level=info msg="Ensure that sandbox abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36 in task-service has been cleanup successfully" Sep 13 00:07:12.342787 containerd[1472]: time="2025-09-13T00:07:12.342692289Z" level=error msg="StopPodSandbox for \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\" failed" error="failed to destroy network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.343654 kubelet[2502]: E0913 00:07:12.343307 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:12.344232 kubelet[2502]: E0913 00:07:12.343762 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def"} Sep 13 00:07:12.344642 kubelet[2502]: E0913 00:07:12.344432 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"557dbfe6-fe21-439e-9b11-49c1ef45886a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:12.344642 kubelet[2502]: E0913 00:07:12.344486 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"557dbfe6-fe21-439e-9b11-49c1ef45886a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccb5685db-bjcrf" podUID="557dbfe6-fe21-439e-9b11-49c1ef45886a" Sep 13 00:07:12.345563 kubelet[2502]: E0913 00:07:12.345376 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:12.345563 kubelet[2502]: E0913 00:07:12.345423 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa"} Sep 13 00:07:12.345563 kubelet[2502]: E0913 00:07:12.345463 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b3b4717-bce4-4f0b-a8c4-81a0865218a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:12.345563 kubelet[2502]: E0913 00:07:12.345497 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b3b4717-bce4-4f0b-a8c4-81a0865218a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zf5bf" podUID="9b3b4717-bce4-4f0b-a8c4-81a0865218a2" Sep 13 00:07:12.345912 containerd[1472]: time="2025-09-13T00:07:12.344720040Z" level=error msg="StopPodSandbox for \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\" failed" error="failed to destroy network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.358933 containerd[1472]: time="2025-09-13T00:07:12.358040995Z" level=error msg="StopPodSandbox for \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\" failed" 
error="failed to destroy network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.359087 kubelet[2502]: E0913 00:07:12.358717 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:12.359087 kubelet[2502]: E0913 00:07:12.358794 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64"} Sep 13 00:07:12.359087 kubelet[2502]: E0913 00:07:12.358836 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ea70e26-63af-4814-b06f-8478ac02d9b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:12.359087 kubelet[2502]: E0913 00:07:12.358884 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ea70e26-63af-4814-b06f-8478ac02d9b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccb5685db-9nxhh" podUID="7ea70e26-63af-4814-b06f-8478ac02d9b8" Sep 13 00:07:12.370031 containerd[1472]: time="2025-09-13T00:07:12.369420960Z" level=error msg="StopPodSandbox for \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\" failed" error="failed to destroy network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.370896 kubelet[2502]: E0913 00:07:12.370707 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:12.370896 kubelet[2502]: E0913 00:07:12.370773 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47"} Sep 13 00:07:12.370896 kubelet[2502]: E0913 
00:07:12.370812 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7cad2b0-ab12-4053-971a-70254e4b0fc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:12.370896 kubelet[2502]: E0913 00:07:12.370840 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7cad2b0-ab12-4053-971a-70254e4b0fc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qmk6c" podUID="b7cad2b0-ab12-4053-971a-70254e4b0fc9" Sep 13 00:07:12.373600 containerd[1472]: time="2025-09-13T00:07:12.372313164Z" level=error msg="StopPodSandbox for \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\" failed" error="failed to destroy network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.373880 kubelet[2502]: E0913 00:07:12.372758 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:12.373880 kubelet[2502]: E0913 00:07:12.373279 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9"} Sep 13 00:07:12.373880 kubelet[2502]: E0913 00:07:12.373321 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:12.373880 kubelet[2502]: E0913 00:07:12.373347 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-54d579b49d-rm7dn" podUID="b0ed41cd-297f-49fe-95fb-ff8bf9f215d9" Sep 13 00:07:12.388905 containerd[1472]: time="2025-09-13T00:07:12.388325970Z" level=error msg="StopPodSandbox for \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\" failed" error="failed to destroy network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.389507 kubelet[2502]: E0913 00:07:12.389262 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:12.389507 kubelet[2502]: E0913 00:07:12.389342 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed"} Sep 13 00:07:12.389507 kubelet[2502]: E0913 00:07:12.389399 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:12.389507 kubelet[2502]: E0913 00:07:12.389443 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d4c55f4b-h9l8z" podUID="95fc76f1-d3f0-4dcc-90ad-49b99c0129ac" Sep 13 00:07:12.392971 containerd[1472]: time="2025-09-13T00:07:12.392580893Z" level=error msg="StopPodSandbox for \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\" failed" error="failed to destroy network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.393833 kubelet[2502]: E0913 00:07:12.393447 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" 
Sep 13 00:07:12.393833 kubelet[2502]: E0913 00:07:12.393522 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36"} Sep 13 00:07:12.393833 kubelet[2502]: E0913 00:07:12.393582 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44b8980a-9966-4148-82d4-7b2506ae2042\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:12.393833 kubelet[2502]: E0913 00:07:12.393620 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"44b8980a-9966-4148-82d4-7b2506ae2042\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vq6x" podUID="44b8980a-9966-4148-82d4-7b2506ae2042" Sep 13 00:07:12.397577 containerd[1472]: time="2025-09-13T00:07:12.397214095Z" level=error msg="StopPodSandbox for \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\" failed" error="failed to destroy network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:07:12.398501 kubelet[2502]: E0913 00:07:12.398095 2502 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:12.398501 kubelet[2502]: E0913 00:07:12.398172 2502 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d"} Sep 13 00:07:12.398501 kubelet[2502]: E0913 00:07:12.398223 2502 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86461376-5af6-4498-96f8-f26687392906\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:07:12.398501 kubelet[2502]: E0913 00:07:12.398258 2502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86461376-5af6-4498-96f8-f26687392906\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-767f8569c7-m5z44" podUID="86461376-5af6-4498-96f8-f26687392906" Sep 13 00:07:16.537041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747404664.mount: Deactivated successfully. Sep 13 00:07:16.674325 containerd[1472]: time="2025-09-13T00:07:16.674024158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.674325 containerd[1472]: time="2025-09-13T00:07:16.674200300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:07:16.682189 containerd[1472]: time="2025-09-13T00:07:16.681962916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.5155405s" Sep 13 00:07:16.682189 containerd[1472]: time="2025-09-13T00:07:16.682027323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:07:16.696703 containerd[1472]: time="2025-09-13T00:07:16.696221599Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.697075 containerd[1472]: time="2025-09-13T00:07:16.697050960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:16.754690 containerd[1472]: time="2025-09-13T00:07:16.754599675Z" level=info msg="CreateContainer within sandbox \"238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:07:16.850798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1709428733.mount: Deactivated successfully. 
Sep 13 00:07:16.862932 containerd[1472]: time="2025-09-13T00:07:16.862881646Z" level=info msg="CreateContainer within sandbox \"238b0efce71911426ab727dac8cd5aae5491681e05ee5832b3fdebacc9082015\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b301322dd4a7bf27a2670e7709d7da149ffbd527e52ccfe33ef9e2086f95d02a\"" Sep 13 00:07:16.873307 containerd[1472]: time="2025-09-13T00:07:16.871538461Z" level=info msg="StartContainer for \"b301322dd4a7bf27a2670e7709d7da149ffbd527e52ccfe33ef9e2086f95d02a\"" Sep 13 00:07:16.968200 kubelet[2502]: I0913 00:07:16.967844 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:16.971607 kubelet[2502]: E0913 00:07:16.970582 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:17.195986 systemd[1]: Started cri-containerd-b301322dd4a7bf27a2670e7709d7da149ffbd527e52ccfe33ef9e2086f95d02a.scope - libcontainer container b301322dd4a7bf27a2670e7709d7da149ffbd527e52ccfe33ef9e2086f95d02a. Sep 13 00:07:17.256040 containerd[1472]: time="2025-09-13T00:07:17.255769137Z" level=info msg="StartContainer for \"b301322dd4a7bf27a2670e7709d7da149ffbd527e52ccfe33ef9e2086f95d02a\" returns successfully" Sep 13 00:07:17.316359 kubelet[2502]: E0913 00:07:17.314073 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:17.580872 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:07:17.581080 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:07:17.866790 containerd[1472]: time="2025-09-13T00:07:17.866637152Z" level=info msg="StopPodSandbox for \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\"" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.033 [INFO][3711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.034 [INFO][3711] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" iface="eth0" netns="/var/run/netns/cni-40a7885d-6ec8-6ee5-af36-9b5395180adc" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.035 [INFO][3711] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" iface="eth0" netns="/var/run/netns/cni-40a7885d-6ec8-6ee5-af36-9b5395180adc" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.036 [INFO][3711] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" iface="eth0" netns="/var/run/netns/cni-40a7885d-6ec8-6ee5-af36-9b5395180adc" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.036 [INFO][3711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.036 [INFO][3711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.193 [INFO][3718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.196 [INFO][3718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.197 [INFO][3718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.210 [WARNING][3718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.210 [INFO][3718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.212 [INFO][3718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:18.223736 containerd[1472]: 2025-09-13 00:07:18.215 [INFO][3711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:18.223736 containerd[1472]: time="2025-09-13T00:07:18.220108121Z" level=info msg="TearDown network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\" successfully" Sep 13 00:07:18.223736 containerd[1472]: time="2025-09-13T00:07:18.220156016Z" level=info msg="StopPodSandbox for \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\" returns successfully" Sep 13 00:07:18.227342 systemd[1]: run-netns-cni\x2d40a7885d\x2d6ec8\x2d6ee5\x2daf36\x2d9b5395180adc.mount: Deactivated successfully. 
Sep 13 00:07:18.355612 kubelet[2502]: I0913 00:07:18.355103 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-whisker-ca-bundle\") pod \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\" (UID: \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\") " Sep 13 00:07:18.355612 kubelet[2502]: I0913 00:07:18.355222 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzxbx\" (UniqueName: \"kubernetes.io/projected/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-kube-api-access-wzxbx\") pod \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\" (UID: \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\") " Sep 13 00:07:18.355612 kubelet[2502]: I0913 00:07:18.355251 2502 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-whisker-backend-key-pair\") pod \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\" (UID: \"95fc76f1-d3f0-4dcc-90ad-49b99c0129ac\") " Sep 13 00:07:18.366280 kubelet[2502]: I0913 00:07:18.362659 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "95fc76f1-d3f0-4dcc-90ad-49b99c0129ac" (UID: "95fc76f1-d3f0-4dcc-90ad-49b99c0129ac"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:07:18.373242 kubelet[2502]: I0913 00:07:18.373107 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-kube-api-access-wzxbx" (OuterVolumeSpecName: "kube-api-access-wzxbx") pod "95fc76f1-d3f0-4dcc-90ad-49b99c0129ac" (UID: "95fc76f1-d3f0-4dcc-90ad-49b99c0129ac"). InnerVolumeSpecName "kube-api-access-wzxbx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:07:18.377505 systemd[1]: var-lib-kubelet-pods-95fc76f1\x2dd3f0\x2d4dcc\x2d90ad\x2d49b99c0129ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzxbx.mount: Deactivated successfully. Sep 13 00:07:18.378451 kubelet[2502]: I0913 00:07:18.377970 2502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "95fc76f1-d3f0-4dcc-90ad-49b99c0129ac" (UID: "95fc76f1-d3f0-4dcc-90ad-49b99c0129ac"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:07:18.385568 systemd[1]: var-lib-kubelet-pods-95fc76f1\x2dd3f0\x2d4dcc\x2d90ad\x2d49b99c0129ac-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:07:18.457054 kubelet[2502]: I0913 00:07:18.455848 2502 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-whisker-ca-bundle\") on node \"ci-4081.3.5-n-3ba90871da\" DevicePath \"\"" Sep 13 00:07:18.457054 kubelet[2502]: I0913 00:07:18.455889 2502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzxbx\" (UniqueName: \"kubernetes.io/projected/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-kube-api-access-wzxbx\") on node \"ci-4081.3.5-n-3ba90871da\" DevicePath \"\"" Sep 13 00:07:18.457054 kubelet[2502]: I0913 00:07:18.455899 2502 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-3ba90871da\" DevicePath \"\"" Sep 13 00:07:18.646236 systemd[1]: Removed slice kubepods-besteffort-pod95fc76f1_d3f0_4dcc_90ad_49b99c0129ac.slice - libcontainer container kubepods-besteffort-pod95fc76f1_d3f0_4dcc_90ad_49b99c0129ac.slice. Sep 13 00:07:18.733845 kubelet[2502]: I0913 00:07:18.718989 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tszgg" podStartSLOduration=3.113957578 podStartE2EDuration="17.704171607s" podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="2025-09-13 00:07:02.106252662 +0000 UTC m=+21.246250387" lastFinishedPulling="2025-09-13 00:07:16.696466693 +0000 UTC m=+35.836464416" observedRunningTime="2025-09-13 00:07:18.34854729 +0000 UTC m=+37.488545040" watchObservedRunningTime="2025-09-13 00:07:18.704171607 +0000 UTC m=+37.844169352" Sep 13 00:07:18.834562 systemd[1]: Created slice kubepods-besteffort-pod20e3e5fc_7d59_4c1b_9ca6_1e8a9a60800a.slice - libcontainer container kubepods-besteffort-pod20e3e5fc_7d59_4c1b_9ca6_1e8a9a60800a.slice. 
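Once every volume of pod 95fc76f1-d3f0-4dcc-90ad-49b99c0129ac reports "Volume detached", the kubelet is free to remove the per-pod volumes directory (the "Cleaned up orphaned pod volumes dir" entry a little further down). A small sketch of that directory layout, assuming the default /var/lib/kubelet root, with a check that every plugin subdirectory (kubernetes.io~secret, kubernetes.io~projected, …) is empty first:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// podVolumesDir builds the kubelet's per-pod volumes directory, e.g.
// /var/lib/kubelet/pods/<podUID>/volumes as seen in the log.
func podVolumesDir(kubeletRoot, podUID string) string {
	return filepath.Join(kubeletRoot, "pods", podUID, "volumes")
}

// emptyVolumesDir reports whether every plugin subdirectory has been torn down.
func emptyVolumesDir(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		sub, err := os.ReadDir(filepath.Join(dir, e.Name()))
		if err != nil {
			return false, err
		}
		if len(sub) > 0 {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	dir := podVolumesDir("/var/lib/kubelet", "95fc76f1-d3f0-4dcc-90ad-49b99c0129ac")
	ok, err := emptyVolumesDir(dir)
	fmt.Println(dir, ok, err)
}
```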
Sep 13 00:07:18.963964 kubelet[2502]: I0913 00:07:18.963788 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7dst\" (UniqueName: \"kubernetes.io/projected/20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a-kube-api-access-q7dst\") pod \"whisker-7cc85898cc-cllrw\" (UID: \"20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a\") " pod="calico-system/whisker-7cc85898cc-cllrw" Sep 13 00:07:18.963964 kubelet[2502]: I0913 00:07:18.963855 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a-whisker-backend-key-pair\") pod \"whisker-7cc85898cc-cllrw\" (UID: \"20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a\") " pod="calico-system/whisker-7cc85898cc-cllrw" Sep 13 00:07:18.963964 kubelet[2502]: I0913 00:07:18.963951 2502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a-whisker-ca-bundle\") pod \"whisker-7cc85898cc-cllrw\" (UID: \"20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a\") " pod="calico-system/whisker-7cc85898cc-cllrw" Sep 13 00:07:19.016553 kubelet[2502]: I0913 00:07:19.016488 2502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fc76f1-d3f0-4dcc-90ad-49b99c0129ac" path="/var/lib/kubelet/pods/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac/volumes" Sep 13 00:07:19.140782 containerd[1472]: time="2025-09-13T00:07:19.140616495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cc85898cc-cllrw,Uid:20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a,Namespace:calico-system,Attempt:0,}" Sep 13 00:07:19.312232 systemd-networkd[1366]: cali5992b7f9587: Link UP Sep 13 00:07:19.312553 systemd-networkd[1366]: cali5992b7f9587: Gained carrier Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.195 [INFO][3786] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.211 [INFO][3786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0 whisker-7cc85898cc- calico-system 20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a 967 0 2025-09-13 00:07:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cc85898cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-3ba90871da whisker-7cc85898cc-cllrw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5992b7f9587 [] [] }} ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Namespace="calico-system" Pod="whisker-7cc85898cc-cllrw" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.212 [INFO][3786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Namespace="calico-system" Pod="whisker-7cc85898cc-cllrw" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.246 [INFO][3797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" 
HandleID="k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.246 [INFO][3797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" HandleID="k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-3ba90871da", "pod":"whisker-7cc85898cc-cllrw", "timestamp":"2025-09-13 00:07:19.246542358 +0000 UTC"}, Hostname:"ci-4081.3.5-n-3ba90871da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.246 [INFO][3797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.246 [INFO][3797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.247 [INFO][3797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-3ba90871da' Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.256 [INFO][3797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.266 [INFO][3797] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.271 [INFO][3797] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.274 [INFO][3797] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.276 [INFO][3797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.277 [INFO][3797] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.279 [INFO][3797] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17 Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.283 [INFO][3797] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.292 [INFO][3797] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.193/26] block=192.168.125.192/26 handle="k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 
00:07:19.292 [INFO][3797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.193/26] handle="k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.292 [INFO][3797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:19.354949 containerd[1472]: 2025-09-13 00:07:19.292 [INFO][3797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.193/26] IPv6=[] ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" HandleID="k8s-pod-network.69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" Sep 13 00:07:19.355615 containerd[1472]: 2025-09-13 00:07:19.296 [INFO][3786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Namespace="calico-system" Pod="whisker-7cc85898cc-cllrw" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0", GenerateName:"whisker-7cc85898cc-", Namespace:"calico-system", SelfLink:"", UID:"20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cc85898cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"", Pod:"whisker-7cc85898cc-cllrw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5992b7f9587", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:19.355615 containerd[1472]: 2025-09-13 00:07:19.296 [INFO][3786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.193/32] ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Namespace="calico-system" Pod="whisker-7cc85898cc-cllrw" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" Sep 13 00:07:19.355615 containerd[1472]: 2025-09-13 00:07:19.296 [INFO][3786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5992b7f9587 ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Namespace="calico-system" Pod="whisker-7cc85898cc-cllrw" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" Sep 13 00:07:19.355615 containerd[1472]: 2025-09-13 00:07:19.313 [INFO][3786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Namespace="calico-system" Pod="whisker-7cc85898cc-cllrw" 
WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" Sep 13 00:07:19.355615 containerd[1472]: 2025-09-13 00:07:19.315 [INFO][3786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Namespace="calico-system" Pod="whisker-7cc85898cc-cllrw" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0", GenerateName:"whisker-7cc85898cc-", Namespace:"calico-system", SelfLink:"", UID:"20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cc85898cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17", Pod:"whisker-7cc85898cc-cllrw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5992b7f9587", MAC:"32:62:cb:d3:0d:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:19.355615 containerd[1472]: 2025-09-13 00:07:19.338 [INFO][3786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17" Namespace="calico-system" Pod="whisker-7cc85898cc-cllrw" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--7cc85898cc--cllrw-eth0" Sep 13 00:07:19.391034 containerd[1472]: time="2025-09-13T00:07:19.390900390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:19.391034 containerd[1472]: time="2025-09-13T00:07:19.390965244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:19.391034 containerd[1472]: time="2025-09-13T00:07:19.390981192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:19.391300 containerd[1472]: time="2025-09-13T00:07:19.391073813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:19.430492 systemd[1]: Started cri-containerd-69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17.scope - libcontainer container 69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17. 
Sep 13 00:07:19.538726 containerd[1472]: time="2025-09-13T00:07:19.538494287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cc85898cc-cllrw,Uid:20e3e5fc-7d59-4c1b-9ca6-1e8a9a60800a,Namespace:calico-system,Attempt:0,} returns sandbox id \"69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17\"" Sep 13 00:07:19.541712 containerd[1472]: time="2025-09-13T00:07:19.541526076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:07:20.050781 kernel: bpftool[3995]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:07:20.391222 systemd-networkd[1366]: vxlan.calico: Link UP Sep 13 00:07:20.391229 systemd-networkd[1366]: vxlan.calico: Gained carrier Sep 13 00:07:20.672842 systemd-networkd[1366]: cali5992b7f9587: Gained IPv6LL Sep 13 00:07:21.149924 containerd[1472]: time="2025-09-13T00:07:21.149866692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:21.151084 containerd[1472]: time="2025-09-13T00:07:21.151032040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:07:21.151640 containerd[1472]: time="2025-09-13T00:07:21.151165368Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:21.154482 containerd[1472]: time="2025-09-13T00:07:21.154447229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:21.155935 containerd[1472]: time="2025-09-13T00:07:21.155907901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.613991486s" Sep 13 00:07:21.156245 containerd[1472]: time="2025-09-13T00:07:21.156035117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:07:21.161274 containerd[1472]: time="2025-09-13T00:07:21.161123856Z" level=info msg="CreateContainer within sandbox \"69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:07:21.180707 containerd[1472]: time="2025-09-13T00:07:21.178914507Z" level=info msg="CreateContainer within sandbox \"69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"216ec45fde859c5dc8e1b115b75acbf9a63650391e7f3d0369840f81826870a8\"" Sep 13 00:07:21.183028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549682523.mount: Deactivated successfully. 
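With the dataplane up, the node carries the vxlan.calico overlay device plus one cali* veth per workload endpoint (cali5992b7f9587 for the whisker pod above). A quick standard-library check that those links exist and are up:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// vxlan.calico carries the pod overlay; cali* are the per-pod veths.
		if ifc.Name == "vxlan.calico" || strings.HasPrefix(ifc.Name, "cali") {
			fmt.Printf("%-16s mtu=%d up=%v\n", ifc.Name, ifc.MTU, ifc.Flags&net.FlagUp != 0)
		}
	}
}
```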
Sep 13 00:07:21.186252 containerd[1472]: time="2025-09-13T00:07:21.185858729Z" level=info msg="StartContainer for \"216ec45fde859c5dc8e1b115b75acbf9a63650391e7f3d0369840f81826870a8\"" Sep 13 00:07:21.238001 systemd[1]: Started cri-containerd-216ec45fde859c5dc8e1b115b75acbf9a63650391e7f3d0369840f81826870a8.scope - libcontainer container 216ec45fde859c5dc8e1b115b75acbf9a63650391e7f3d0369840f81826870a8. Sep 13 00:07:21.300372 containerd[1472]: time="2025-09-13T00:07:21.300075941Z" level=info msg="StartContainer for \"216ec45fde859c5dc8e1b115b75acbf9a63650391e7f3d0369840f81826870a8\" returns successfully" Sep 13 00:07:21.302531 containerd[1472]: time="2025-09-13T00:07:21.302422774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:07:22.208459 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Sep 13 00:07:23.014148 containerd[1472]: time="2025-09-13T00:07:23.014090815Z" level=info msg="StopPodSandbox for \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\"" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.114 [INFO][4145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.116 [INFO][4145] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" iface="eth0" netns="/var/run/netns/cni-89ed5db4-2985-3e51-6681-0a442e28a57b" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.117 [INFO][4145] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" iface="eth0" netns="/var/run/netns/cni-89ed5db4-2985-3e51-6681-0a442e28a57b" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.119 [INFO][4145] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" iface="eth0" netns="/var/run/netns/cni-89ed5db4-2985-3e51-6681-0a442e28a57b" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.119 [INFO][4145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.119 [INFO][4145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.188 [INFO][4152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.188 [INFO][4152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.189 [INFO][4152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.204 [WARNING][4152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.204 [INFO][4152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.208 [INFO][4152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:23.224754 containerd[1472]: 2025-09-13 00:07:23.220 [INFO][4145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:23.228764 containerd[1472]: time="2025-09-13T00:07:23.225896357Z" level=info msg="TearDown network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\" successfully" Sep 13 00:07:23.228764 containerd[1472]: time="2025-09-13T00:07:23.225951774Z" level=info msg="StopPodSandbox for \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\" returns successfully" Sep 13 00:07:23.229012 kubelet[2502]: E0913 00:07:23.228240 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:23.231948 containerd[1472]: time="2025-09-13T00:07:23.230314433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zf5bf,Uid:9b3b4717-bce4-4f0b-a8c4-81a0865218a2,Namespace:kube-system,Attempt:1,}" Sep 13 00:07:23.238669 systemd[1]: run-netns-cni\x2d89ed5db4\x2d2985\x2d3e51\x2d6681\x2d0a442e28a57b.mount: Deactivated successfully. Sep 13 00:07:23.486172 systemd-networkd[1366]: calia3ab12651a1: Link UP Sep 13 00:07:23.486438 systemd-networkd[1366]: calia3ab12651a1: Gained carrier Sep 13 00:07:23.492889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291765004.mount: Deactivated successfully. 
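The systemd mount units in these entries use escaped names: "-" separates path components and a literal byte is written as "\xNN" (\x2d is "-", \x7e is "~"), so run-netns-cni\x2d89ed5db4-… is the bind mount of /run/netns/cni-89ed5db4-…. A small sketch of the reverse mapping, covering only the escapes that appear in this log:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitPath converts a systemd mount unit name back into the path it
// mounts: "-" separates path components and "\xNN" encodes a literal byte.
func unescapeUnitPath(unit string) string {
	unit = strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(unit); i++ {
		switch {
		case unit[i] == '\\' && i+3 < len(unit) && unit[i+1] == 'x':
			if v, err := strconv.ParseUint(unit[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(unit[i])
		case unit[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(unit[i])
		}
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnitPath(`run-netns-cni\x2d89ed5db4\x2d2985\x2d3e51\x2d6681\x2d0a442e28a57b.mount`))
	// /run/netns/cni-89ed5db4-2985-3e51-6681-0a442e28a57b
	fmt.Println(unescapeUnitPath(`var-lib-kubelet-pods-95fc76f1\x2dd3f0\x2d4dcc\x2d90ad\x2d49b99c0129ac-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount`))
	// /var/lib/kubelet/pods/95fc76f1-d3f0-4dcc-90ad-49b99c0129ac/volumes/kubernetes.io~secret/whisker-backend-key-pair
}
```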
Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.321 [INFO][4159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0 coredns-674b8bbfcf- kube-system 9b3b4717-bce4-4f0b-a8c4-81a0865218a2 991 0 2025-09-13 00:06:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-3ba90871da coredns-674b8bbfcf-zf5bf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3ab12651a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zf5bf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.321 [INFO][4159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zf5bf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.396 [INFO][4171] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" HandleID="k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.396 [INFO][4171] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" HandleID="k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-3ba90871da", "pod":"coredns-674b8bbfcf-zf5bf", "timestamp":"2025-09-13 00:07:23.396379443 +0000 UTC"}, Hostname:"ci-4081.3.5-n-3ba90871da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.396 [INFO][4171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.396 [INFO][4171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.396 [INFO][4171] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-3ba90871da' Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.415 [INFO][4171] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.425 [INFO][4171] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.434 [INFO][4171] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.437 [INFO][4171] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.441 [INFO][4171] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.441 [INFO][4171] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.445 [INFO][4171] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.460 [INFO][4171] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.472 [INFO][4171] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.194/26] block=192.168.125.192/26 handle="k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.473 [INFO][4171] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.194/26] handle="k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.473 [INFO][4171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:23.531058 containerd[1472]: 2025-09-13 00:07:23.473 [INFO][4171] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.194/26] IPv6=[] ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" HandleID="k8s-pod-network.a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.531830 containerd[1472]: 2025-09-13 00:07:23.476 [INFO][4159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zf5bf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b3b4717-bce4-4f0b-a8c4-81a0865218a2", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"", Pod:"coredns-674b8bbfcf-zf5bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3ab12651a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:23.531830 containerd[1472]: 2025-09-13 00:07:23.476 [INFO][4159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.194/32] ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zf5bf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.531830 containerd[1472]: 2025-09-13 00:07:23.476 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3ab12651a1 ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zf5bf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.531830 containerd[1472]: 2025-09-13 00:07:23.485 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-zf5bf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.532835 containerd[1472]: 2025-09-13 00:07:23.491 [INFO][4159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zf5bf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b3b4717-bce4-4f0b-a8c4-81a0865218a2", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b", Pod:"coredns-674b8bbfcf-zf5bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3ab12651a1", MAC:"46:65:c8:e1:9f:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:23.532835 containerd[1472]: 2025-09-13 00:07:23.525 [INFO][4159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b" Namespace="kube-system" Pod="coredns-674b8bbfcf-zf5bf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:23.559290 containerd[1472]: time="2025-09-13T00:07:23.559206332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:23.563867 containerd[1472]: time="2025-09-13T00:07:23.563743927Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:23.564231 containerd[1472]: time="2025-09-13T00:07:23.564175049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:07:23.567429 containerd[1472]: time="2025-09-13T00:07:23.567393496Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:23.569264 containerd[1472]: time="2025-09-13T00:07:23.569230364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.266761337s" Sep 13 00:07:23.569371 containerd[1472]: time="2025-09-13T00:07:23.569266654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:07:23.575570 containerd[1472]: time="2025-09-13T00:07:23.575373787Z" level=info msg="CreateContainer within sandbox \"69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:07:23.593568 containerd[1472]: time="2025-09-13T00:07:23.592909746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:23.593568 containerd[1472]: time="2025-09-13T00:07:23.593013640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:23.593568 containerd[1472]: time="2025-09-13T00:07:23.593030773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:23.593568 containerd[1472]: time="2025-09-13T00:07:23.593265628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:23.598658 containerd[1472]: time="2025-09-13T00:07:23.598609150Z" level=info msg="CreateContainer within sandbox \"69211007c726f6d0f57ef3897a49a85bfa143bc9ffb5ec7785596cab7e0e0f17\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"659947b01729a23d953e230a441763ed4e5b3b5b55affffbd410943a1e1f3dcb\"" Sep 13 00:07:23.599747 containerd[1472]: time="2025-09-13T00:07:23.599668134Z" level=info msg="StartContainer for \"659947b01729a23d953e230a441763ed4e5b3b5b55affffbd410943a1e1f3dcb\"" Sep 13 00:07:23.620880 systemd[1]: Started cri-containerd-a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b.scope - libcontainer container a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b. Sep 13 00:07:23.647049 systemd[1]: Started cri-containerd-659947b01729a23d953e230a441763ed4e5b3b5b55affffbd410943a1e1f3dcb.scope - libcontainer container 659947b01729a23d953e230a441763ed4e5b3b5b55affffbd410943a1e1f3dcb. 
Sep 13 00:07:23.703704 containerd[1472]: time="2025-09-13T00:07:23.701778406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zf5bf,Uid:9b3b4717-bce4-4f0b-a8c4-81a0865218a2,Namespace:kube-system,Attempt:1,} returns sandbox id \"a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b\"" Sep 13 00:07:23.704856 kubelet[2502]: E0913 00:07:23.704813 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:23.717804 containerd[1472]: time="2025-09-13T00:07:23.717619946Z" level=info msg="CreateContainer within sandbox \"a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:07:23.742535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925111688.mount: Deactivated successfully. Sep 13 00:07:23.746827 containerd[1472]: time="2025-09-13T00:07:23.745753624Z" level=info msg="CreateContainer within sandbox \"a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6aaa3743e1a91d3335bf1e4bce6b35a75f9d86430566f6fc964119945d09cdd1\"" Sep 13 00:07:23.749161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724697964.mount: Deactivated successfully. Sep 13 00:07:23.751285 containerd[1472]: time="2025-09-13T00:07:23.751160644Z" level=info msg="StartContainer for \"6aaa3743e1a91d3335bf1e4bce6b35a75f9d86430566f6fc964119945d09cdd1\"" Sep 13 00:07:23.761187 containerd[1472]: time="2025-09-13T00:07:23.760751960Z" level=info msg="StartContainer for \"659947b01729a23d953e230a441763ed4e5b3b5b55affffbd410943a1e1f3dcb\" returns successfully" Sep 13 00:07:23.817875 systemd[1]: Started cri-containerd-6aaa3743e1a91d3335bf1e4bce6b35a75f9d86430566f6fc964119945d09cdd1.scope - libcontainer container 6aaa3743e1a91d3335bf1e4bce6b35a75f9d86430566f6fc964119945d09cdd1. Sep 13 00:07:23.855973 containerd[1472]: time="2025-09-13T00:07:23.855937553Z" level=info msg="StartContainer for \"6aaa3743e1a91d3335bf1e4bce6b35a75f9d86430566f6fc964119945d09cdd1\" returns successfully" Sep 13 00:07:24.014997 containerd[1472]: time="2025-09-13T00:07:24.014864556Z" level=info msg="StopPodSandbox for \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\"" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.091 [INFO][4311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.092 [INFO][4311] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" iface="eth0" netns="/var/run/netns/cni-5a778818-6085-6dc5-f931-4144e9592ac0" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.093 [INFO][4311] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" iface="eth0" netns="/var/run/netns/cni-5a778818-6085-6dc5-f931-4144e9592ac0" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.094 [INFO][4311] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" iface="eth0" netns="/var/run/netns/cni-5a778818-6085-6dc5-f931-4144e9592ac0" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.094 [INFO][4311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.095 [INFO][4311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.128 [INFO][4321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.128 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.129 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.140 [WARNING][4321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.140 [INFO][4321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.143 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:24.149027 containerd[1472]: 2025-09-13 00:07:24.146 [INFO][4311] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:24.149027 containerd[1472]: time="2025-09-13T00:07:24.148841281Z" level=info msg="TearDown network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\" successfully" Sep 13 00:07:24.149027 containerd[1472]: time="2025-09-13T00:07:24.148904982Z" level=info msg="StopPodSandbox for \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\" returns successfully" Sep 13 00:07:24.150429 containerd[1472]: time="2025-09-13T00:07:24.150355247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccb5685db-9nxhh,Uid:7ea70e26-63af-4814-b06f-8478ac02d9b8,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:07:24.301343 systemd-networkd[1366]: cali4b116d854bb: Link UP Sep 13 00:07:24.301624 systemd-networkd[1366]: cali4b116d854bb: Gained carrier Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.212 [INFO][4327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0 calico-apiserver-ccb5685db- calico-apiserver 7ea70e26-63af-4814-b06f-8478ac02d9b8 1004 0 2025-09-13 00:06:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccb5685db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-3ba90871da calico-apiserver-ccb5685db-9nxhh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b116d854bb [] [] }} ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-9nxhh" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.212 [INFO][4327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-9nxhh" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.247 [INFO][4340] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" HandleID="k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.248 [INFO][4340] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" HandleID="k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-3ba90871da", "pod":"calico-apiserver-ccb5685db-9nxhh", "timestamp":"2025-09-13 00:07:24.247633154 +0000 UTC"}, Hostname:"ci-4081.3.5-n-3ba90871da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.248 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.248 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.248 [INFO][4340] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-3ba90871da' Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.257 [INFO][4340] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.264 [INFO][4340] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.272 [INFO][4340] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.275 [INFO][4340] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.278 [INFO][4340] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.278 [INFO][4340] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.281 [INFO][4340] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669 Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.285 [INFO][4340] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.294 [INFO][4340] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.195/26] block=192.168.125.192/26 handle="k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.294 [INFO][4340] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.195/26] handle="k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:24.321595 containerd[1472]: 2025-09-13 00:07:24.294 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:24.325314 containerd[1472]: 2025-09-13 00:07:24.294 [INFO][4340] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.195/26] IPv6=[] ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" HandleID="k8s-pod-network.a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.325314 containerd[1472]: 2025-09-13 00:07:24.297 [INFO][4327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-9nxhh" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0", GenerateName:"calico-apiserver-ccb5685db-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ea70e26-63af-4814-b06f-8478ac02d9b8", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccb5685db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"", Pod:"calico-apiserver-ccb5685db-9nxhh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b116d854bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:24.325314 containerd[1472]: 2025-09-13 00:07:24.297 [INFO][4327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.195/32] ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-9nxhh" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.325314 containerd[1472]: 2025-09-13 00:07:24.297 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b116d854bb ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-9nxhh" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.325314 containerd[1472]: 2025-09-13 00:07:24.300 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-9nxhh" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.325567 containerd[1472]: 2025-09-13 00:07:24.302 [INFO][4327] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-9nxhh" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0", GenerateName:"calico-apiserver-ccb5685db-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ea70e26-63af-4814-b06f-8478ac02d9b8", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccb5685db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669", Pod:"calico-apiserver-ccb5685db-9nxhh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b116d854bb", MAC:"5a:9e:88:8e:d0:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:24.325567 containerd[1472]: 2025-09-13 00:07:24.316 [INFO][4327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-9nxhh" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:24.359578 kubelet[2502]: E0913 00:07:24.358344 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:24.372459 containerd[1472]: time="2025-09-13T00:07:24.372126644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:24.372459 containerd[1472]: time="2025-09-13T00:07:24.372208425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:24.372459 containerd[1472]: time="2025-09-13T00:07:24.372228804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:24.372459 containerd[1472]: time="2025-09-13T00:07:24.372327701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:24.388861 kubelet[2502]: I0913 00:07:24.386853 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zf5bf" podStartSLOduration=37.386830182 podStartE2EDuration="37.386830182s" podCreationTimestamp="2025-09-13 00:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:24.384184334 +0000 UTC m=+43.524182093" watchObservedRunningTime="2025-09-13 00:07:24.386830182 +0000 UTC m=+43.526827921" Sep 13 00:07:24.417154 systemd[1]: Started cri-containerd-a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669.scope - libcontainer container a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669. Sep 13 00:07:24.460674 kubelet[2502]: I0913 00:07:24.460318 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7cc85898cc-cllrw" podStartSLOduration=2.428937776 podStartE2EDuration="6.460296278s" podCreationTimestamp="2025-09-13 00:07:18 +0000 UTC" firstStartedPulling="2025-09-13 00:07:19.54099616 +0000 UTC m=+38.680993883" lastFinishedPulling="2025-09-13 00:07:23.572354662 +0000 UTC m=+42.712352385" observedRunningTime="2025-09-13 00:07:24.4238607 +0000 UTC m=+43.563858444" watchObservedRunningTime="2025-09-13 00:07:24.460296278 +0000 UTC m=+43.600294022" Sep 13 00:07:24.540823 containerd[1472]: time="2025-09-13T00:07:24.540782118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccb5685db-9nxhh,Uid:7ea70e26-63af-4814-b06f-8478ac02d9b8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669\"" Sep 13 00:07:24.544124 containerd[1472]: time="2025-09-13T00:07:24.543836944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:07:24.640383 systemd-networkd[1366]: calia3ab12651a1: Gained IPv6LL Sep 13 00:07:24.655141 systemd[1]: run-netns-cni\x2d5a778818\x2d6085\x2d6dc5\x2df931\x2d4144e9592ac0.mount: Deactivated successfully. Sep 13 00:07:25.026224 containerd[1472]: time="2025-09-13T00:07:25.024836345Z" level=info msg="StopPodSandbox for \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\"" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.086 [INFO][4408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.087 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" iface="eth0" netns="/var/run/netns/cni-84413b7c-086a-92a2-d426-de1e13a34ffa" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.088 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" iface="eth0" netns="/var/run/netns/cni-84413b7c-086a-92a2-d426-de1e13a34ffa" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.088 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" iface="eth0" netns="/var/run/netns/cni-84413b7c-086a-92a2-d426-de1e13a34ffa" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.088 [INFO][4408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.088 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.122 [INFO][4415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.122 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.122 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.130 [WARNING][4415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.131 [INFO][4415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.133 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:25.137518 containerd[1472]: 2025-09-13 00:07:25.135 [INFO][4408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:25.138176 containerd[1472]: time="2025-09-13T00:07:25.138098334Z" level=info msg="TearDown network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\" successfully" Sep 13 00:07:25.138176 containerd[1472]: time="2025-09-13T00:07:25.138131192Z" level=info msg="StopPodSandbox for \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\" returns successfully" Sep 13 00:07:25.138748 kubelet[2502]: E0913 00:07:25.138642 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:25.141338 containerd[1472]: time="2025-09-13T00:07:25.140903591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qmk6c,Uid:b7cad2b0-ab12-4053-971a-70254e4b0fc9,Namespace:kube-system,Attempt:1,}" Sep 13 00:07:25.142729 systemd[1]: run-netns-cni\x2d84413b7c\x2d086a\x2d92a2\x2dd426\x2dde1e13a34ffa.mount: Deactivated successfully. 
Sep 13 00:07:25.290472 systemd-networkd[1366]: calid81de1cbc89: Link UP Sep 13 00:07:25.293325 systemd-networkd[1366]: calid81de1cbc89: Gained carrier Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.197 [INFO][4422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0 coredns-674b8bbfcf- kube-system b7cad2b0-ab12-4053-971a-70254e4b0fc9 1027 0 2025-09-13 00:06:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-3ba90871da coredns-674b8bbfcf-qmk6c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid81de1cbc89 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qmk6c" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.198 [INFO][4422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qmk6c" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.229 [INFO][4433] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" HandleID="k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.229 [INFO][4433] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" HandleID="k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f320), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-3ba90871da", "pod":"coredns-674b8bbfcf-qmk6c", "timestamp":"2025-09-13 00:07:25.229668014 +0000 UTC"}, Hostname:"ci-4081.3.5-n-3ba90871da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.229 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.229 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.229 [INFO][4433] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-3ba90871da' Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.238 [INFO][4433] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.244 [INFO][4433] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.250 [INFO][4433] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.252 [INFO][4433] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.256 [INFO][4433] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.256 [INFO][4433] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.259 [INFO][4433] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4 Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.264 [INFO][4433] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.277 [INFO][4433] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.196/26] block=192.168.125.192/26 handle="k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.277 [INFO][4433] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.196/26] handle="k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.277 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:25.316994 containerd[1472]: 2025-09-13 00:07:25.277 [INFO][4433] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.196/26] IPv6=[] ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" HandleID="k8s-pod-network.eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.318180 containerd[1472]: 2025-09-13 00:07:25.281 [INFO][4422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qmk6c" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b7cad2b0-ab12-4053-971a-70254e4b0fc9", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"", Pod:"coredns-674b8bbfcf-qmk6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid81de1cbc89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:25.318180 containerd[1472]: 2025-09-13 00:07:25.281 [INFO][4422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.196/32] ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qmk6c" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.318180 containerd[1472]: 2025-09-13 00:07:25.281 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid81de1cbc89 ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qmk6c" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.318180 containerd[1472]: 2025-09-13 00:07:25.290 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-qmk6c" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.318429 containerd[1472]: 2025-09-13 00:07:25.290 [INFO][4422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qmk6c" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b7cad2b0-ab12-4053-971a-70254e4b0fc9", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4", Pod:"coredns-674b8bbfcf-qmk6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid81de1cbc89", MAC:"d2:3a:67:63:dc:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:25.318429 containerd[1472]: 2025-09-13 00:07:25.309 [INFO][4422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4" Namespace="kube-system" Pod="coredns-674b8bbfcf-qmk6c" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:25.351880 containerd[1472]: time="2025-09-13T00:07:25.351356135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:25.351880 containerd[1472]: time="2025-09-13T00:07:25.351432778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:25.351880 containerd[1472]: time="2025-09-13T00:07:25.351483086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:25.352517 containerd[1472]: time="2025-09-13T00:07:25.351763533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:25.379948 kubelet[2502]: E0913 00:07:25.379156 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:25.383291 systemd[1]: Started cri-containerd-eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4.scope - libcontainer container eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4. Sep 13 00:07:25.453810 containerd[1472]: time="2025-09-13T00:07:25.453770909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qmk6c,Uid:b7cad2b0-ab12-4053-971a-70254e4b0fc9,Namespace:kube-system,Attempt:1,} returns sandbox id \"eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4\"" Sep 13 00:07:25.455195 kubelet[2502]: E0913 00:07:25.454999 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:25.460709 containerd[1472]: time="2025-09-13T00:07:25.460637055Z" level=info msg="CreateContainer within sandbox \"eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:07:25.475164 containerd[1472]: time="2025-09-13T00:07:25.475002764Z" level=info msg="CreateContainer within sandbox \"eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04f0c488a6ee3ea889c9f01603a5fb7a9d006b1e9c85f7219ad4cbe362061e1e\"" Sep 13 00:07:25.480589 containerd[1472]: time="2025-09-13T00:07:25.480290936Z" level=info msg="StartContainer for \"04f0c488a6ee3ea889c9f01603a5fb7a9d006b1e9c85f7219ad4cbe362061e1e\"" Sep 13 00:07:25.517006 systemd[1]: Started cri-containerd-04f0c488a6ee3ea889c9f01603a5fb7a9d006b1e9c85f7219ad4cbe362061e1e.scope - libcontainer container 04f0c488a6ee3ea889c9f01603a5fb7a9d006b1e9c85f7219ad4cbe362061e1e. Sep 13 00:07:25.558800 containerd[1472]: time="2025-09-13T00:07:25.558744973Z" level=info msg="StartContainer for \"04f0c488a6ee3ea889c9f01603a5fb7a9d006b1e9c85f7219ad4cbe362061e1e\" returns successfully" Sep 13 00:07:25.600039 systemd-networkd[1366]: cali4b116d854bb: Gained IPv6LL Sep 13 00:07:26.015888 containerd[1472]: time="2025-09-13T00:07:26.015348898Z" level=info msg="StopPodSandbox for \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\"" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.152 [INFO][4544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.152 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" iface="eth0" netns="/var/run/netns/cni-9dfedc3c-7f71-fafe-67d6-8b858c68e5f8" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.152 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" iface="eth0" netns="/var/run/netns/cni-9dfedc3c-7f71-fafe-67d6-8b858c68e5f8" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.153 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" iface="eth0" netns="/var/run/netns/cni-9dfedc3c-7f71-fafe-67d6-8b858c68e5f8" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.153 [INFO][4544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.153 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.204 [INFO][4551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.205 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.205 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.217 [WARNING][4551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.217 [INFO][4551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.220 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:26.229721 containerd[1472]: 2025-09-13 00:07:26.226 [INFO][4544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:26.229721 containerd[1472]: time="2025-09-13T00:07:26.229489802Z" level=info msg="TearDown network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\" successfully" Sep 13 00:07:26.229721 containerd[1472]: time="2025-09-13T00:07:26.229538053Z" level=info msg="StopPodSandbox for \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\" returns successfully" Sep 13 00:07:26.232873 containerd[1472]: time="2025-09-13T00:07:26.231685689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccb5685db-bjcrf,Uid:557dbfe6-fe21-439e-9b11-49c1ef45886a,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:07:26.242972 systemd[1]: run-netns-cni\x2d9dfedc3c\x2d7f71\x2dfafe\x2d67d6\x2d8b858c68e5f8.mount: Deactivated successfully. 
Sep 13 00:07:26.393764 kubelet[2502]: E0913 00:07:26.392384 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:26.393764 kubelet[2502]: E0913 00:07:26.393251 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:26.441417 kubelet[2502]: I0913 00:07:26.438647 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qmk6c" podStartSLOduration=39.438625405 podStartE2EDuration="39.438625405s" podCreationTimestamp="2025-09-13 00:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:07:26.438077356 +0000 UTC m=+45.578075101" watchObservedRunningTime="2025-09-13 00:07:26.438625405 +0000 UTC m=+45.578623149" Sep 13 00:07:26.540730 systemd-networkd[1366]: cali02e5979b869: Link UP Sep 13 00:07:26.543824 systemd-networkd[1366]: cali02e5979b869: Gained carrier Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.315 [INFO][4557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0 calico-apiserver-ccb5685db- calico-apiserver 557dbfe6-fe21-439e-9b11-49c1ef45886a 1041 0 2025-09-13 00:06:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccb5685db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-3ba90871da calico-apiserver-ccb5685db-bjcrf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali02e5979b869 [] [] }} ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-bjcrf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.316 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-bjcrf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.369 [INFO][4569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" HandleID="k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.369 [INFO][4569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" HandleID="k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f340), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4081.3.5-n-3ba90871da", "pod":"calico-apiserver-ccb5685db-bjcrf", "timestamp":"2025-09-13 00:07:26.369444323 +0000 UTC"}, Hostname:"ci-4081.3.5-n-3ba90871da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.369 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.369 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.369 [INFO][4569] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-3ba90871da' Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.382 [INFO][4569] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.409 [INFO][4569] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.442 [INFO][4569] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.457 [INFO][4569] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.468 [INFO][4569] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.468 [INFO][4569] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.473 [INFO][4569] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.501 [INFO][4569] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.522 [INFO][4569] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.197/26] block=192.168.125.192/26 handle="k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.522 [INFO][4569] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.197/26] handle="k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:26.588868 containerd[1472]: 2025-09-13 00:07:26.522 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:26.589580 containerd[1472]: 2025-09-13 00:07:26.522 [INFO][4569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.197/26] IPv6=[] ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" HandleID="k8s-pod-network.d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.589580 containerd[1472]: 2025-09-13 00:07:26.528 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-bjcrf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0", GenerateName:"calico-apiserver-ccb5685db-", Namespace:"calico-apiserver", SelfLink:"", UID:"557dbfe6-fe21-439e-9b11-49c1ef45886a", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccb5685db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"", Pod:"calico-apiserver-ccb5685db-bjcrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02e5979b869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:26.589580 containerd[1472]: 2025-09-13 00:07:26.528 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.197/32] ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-bjcrf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.589580 containerd[1472]: 2025-09-13 00:07:26.529 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02e5979b869 ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-bjcrf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.589580 containerd[1472]: 2025-09-13 00:07:26.544 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-bjcrf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.590142 containerd[1472]: 2025-09-13 00:07:26.549 [INFO][4557] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-bjcrf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0", GenerateName:"calico-apiserver-ccb5685db-", Namespace:"calico-apiserver", SelfLink:"", UID:"557dbfe6-fe21-439e-9b11-49c1ef45886a", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccb5685db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f", Pod:"calico-apiserver-ccb5685db-bjcrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02e5979b869", MAC:"86:65:a7:45:6f:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:26.590142 containerd[1472]: 2025-09-13 00:07:26.576 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f" Namespace="calico-apiserver" Pod="calico-apiserver-ccb5685db-bjcrf" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:26.692510 containerd[1472]: time="2025-09-13T00:07:26.690492574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:26.693318 containerd[1472]: time="2025-09-13T00:07:26.692512146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:26.693318 containerd[1472]: time="2025-09-13T00:07:26.692545891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:26.693318 containerd[1472]: time="2025-09-13T00:07:26.693091950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:26.758091 systemd[1]: Started cri-containerd-d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f.scope - libcontainer container d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f. 
Sep 13 00:07:26.857807 containerd[1472]: time="2025-09-13T00:07:26.857671666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccb5685db-bjcrf,Uid:557dbfe6-fe21-439e-9b11-49c1ef45886a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f\"" Sep 13 00:07:26.944599 systemd-networkd[1366]: calid81de1cbc89: Gained IPv6LL Sep 13 00:07:27.015121 containerd[1472]: time="2025-09-13T00:07:27.014097325Z" level=info msg="StopPodSandbox for \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\"" Sep 13 00:07:27.017600 containerd[1472]: time="2025-09-13T00:07:27.017234087Z" level=info msg="StopPodSandbox for \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\"" Sep 13 00:07:27.021537 containerd[1472]: time="2025-09-13T00:07:27.021375657Z" level=info msg="StopPodSandbox for \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\"" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.172 [INFO][4661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.173 [INFO][4661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" iface="eth0" netns="/var/run/netns/cni-67d82709-dba3-a5bd-e829-bd1c28eff47d" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.176 [INFO][4661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" iface="eth0" netns="/var/run/netns/cni-67d82709-dba3-a5bd-e829-bd1c28eff47d" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.176 [INFO][4661] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" iface="eth0" netns="/var/run/netns/cni-67d82709-dba3-a5bd-e829-bd1c28eff47d" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.176 [INFO][4661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.176 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.241 [INFO][4683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.241 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.242 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.255 [WARNING][4683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.255 [INFO][4683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.259 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:27.284568 containerd[1472]: 2025-09-13 00:07:27.268 [INFO][4661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:27.295376 containerd[1472]: time="2025-09-13T00:07:27.294123025Z" level=info msg="TearDown network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\" successfully" Sep 13 00:07:27.295376 containerd[1472]: time="2025-09-13T00:07:27.294180302Z" level=info msg="StopPodSandbox for \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\" returns successfully" Sep 13 00:07:27.296441 systemd[1]: run-netns-cni\x2d67d82709\x2ddba3\x2da5bd\x2de829\x2dbd1c28eff47d.mount: Deactivated successfully. Sep 13 00:07:27.299600 containerd[1472]: time="2025-09-13T00:07:27.299551427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vq6x,Uid:44b8980a-9966-4148-82d4-7b2506ae2042,Namespace:calico-system,Attempt:1,}" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.166 [INFO][4663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.168 [INFO][4663] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" iface="eth0" netns="/var/run/netns/cni-b09ceb98-3b61-8f72-5966-5198800cfdf6" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.169 [INFO][4663] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" iface="eth0" netns="/var/run/netns/cni-b09ceb98-3b61-8f72-5966-5198800cfdf6" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.171 [INFO][4663] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" iface="eth0" netns="/var/run/netns/cni-b09ceb98-3b61-8f72-5966-5198800cfdf6" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.172 [INFO][4663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.172 [INFO][4663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.323 [INFO][4681] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.324 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.324 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.354 [WARNING][4681] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.354 [INFO][4681] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.358 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:27.365992 containerd[1472]: 2025-09-13 00:07:27.361 [INFO][4663] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:27.367959 containerd[1472]: time="2025-09-13T00:07:27.367866772Z" level=info msg="TearDown network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\" successfully" Sep 13 00:07:27.368222 containerd[1472]: time="2025-09-13T00:07:27.368107316Z" level=info msg="StopPodSandbox for \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\" returns successfully" Sep 13 00:07:27.370601 containerd[1472]: time="2025-09-13T00:07:27.370541547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-767f8569c7-m5z44,Uid:86461376-5af6-4498-96f8-f26687392906,Namespace:calico-system,Attempt:1,}" Sep 13 00:07:27.403094 kubelet[2502]: E0913 00:07:27.402910 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.233 [INFO][4662] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.235 [INFO][4662] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" iface="eth0" netns="/var/run/netns/cni-c76cdc03-3f6f-15f7-ffe7-4c66a8401601" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.236 [INFO][4662] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" iface="eth0" netns="/var/run/netns/cni-c76cdc03-3f6f-15f7-ffe7-4c66a8401601" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.236 [INFO][4662] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" iface="eth0" netns="/var/run/netns/cni-c76cdc03-3f6f-15f7-ffe7-4c66a8401601" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.237 [INFO][4662] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.237 [INFO][4662] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.332 [INFO][4694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.332 [INFO][4694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.360 [INFO][4694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.374 [WARNING][4694] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.374 [INFO][4694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.385 [INFO][4694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:27.404508 containerd[1472]: 2025-09-13 00:07:27.394 [INFO][4662] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:27.404508 containerd[1472]: time="2025-09-13T00:07:27.404360589Z" level=info msg="TearDown network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\" successfully" Sep 13 00:07:27.404508 containerd[1472]: time="2025-09-13T00:07:27.404397439Z" level=info msg="StopPodSandbox for \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\" returns successfully" Sep 13 00:07:27.407177 containerd[1472]: time="2025-09-13T00:07:27.407039008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rm7dn,Uid:b0ed41cd-297f-49fe-95fb-ff8bf9f215d9,Namespace:calico-system,Attempt:1,}" Sep 13 00:07:27.707514 systemd[1]: run-netns-cni\x2dc76cdc03\x2d3f6f\x2d15f7\x2dffe7\x2d4c66a8401601.mount: Deactivated successfully. Sep 13 00:07:27.707650 systemd[1]: run-netns-cni\x2db09ceb98\x2d3b61\x2d8f72\x2d5966\x2d5198800cfdf6.mount: Deactivated successfully. 
Sep 13 00:07:27.732609 systemd-networkd[1366]: cali7a4eded6253: Link UP Sep 13 00:07:27.734190 systemd-networkd[1366]: cali7a4eded6253: Gained carrier Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.478 [INFO][4703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0 csi-node-driver- calico-system 44b8980a-9966-4148-82d4-7b2506ae2042 1063 0 2025-09-13 00:07:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-3ba90871da csi-node-driver-8vq6x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7a4eded6253 [] [] }} ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Namespace="calico-system" Pod="csi-node-driver-8vq6x" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.480 [INFO][4703] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Namespace="calico-system" Pod="csi-node-driver-8vq6x" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.614 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" HandleID="k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.618 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" HandleID="k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000355540), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-3ba90871da", "pod":"csi-node-driver-8vq6x", "timestamp":"2025-09-13 00:07:27.614263141 +0000 UTC"}, Hostname:"ci-4081.3.5-n-3ba90871da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.618 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.618 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.618 [INFO][4740] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-3ba90871da' Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.638 [INFO][4740] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.652 [INFO][4740] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.668 [INFO][4740] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.673 [INFO][4740] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.678 [INFO][4740] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.678 [INFO][4740] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.683 [INFO][4740] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767 Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.692 [INFO][4740] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.709 [INFO][4740] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.198/26] block=192.168.125.192/26 handle="k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.709 [INFO][4740] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.198/26] handle="k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.709 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:27.769818 containerd[1472]: 2025-09-13 00:07:27.709 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.198/26] IPv6=[] ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" HandleID="k8s-pod-network.9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.770925 containerd[1472]: 2025-09-13 00:07:27.723 [INFO][4703] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Namespace="calico-system" Pod="csi-node-driver-8vq6x" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44b8980a-9966-4148-82d4-7b2506ae2042", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"", Pod:"csi-node-driver-8vq6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a4eded6253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:27.770925 containerd[1472]: 2025-09-13 00:07:27.724 [INFO][4703] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.198/32] ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Namespace="calico-system" Pod="csi-node-driver-8vq6x" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.770925 containerd[1472]: 2025-09-13 00:07:27.724 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a4eded6253 ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Namespace="calico-system" Pod="csi-node-driver-8vq6x" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.770925 containerd[1472]: 2025-09-13 00:07:27.734 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Namespace="calico-system" Pod="csi-node-driver-8vq6x" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.770925 containerd[1472]: 2025-09-13 00:07:27.736 [INFO][4703] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Namespace="calico-system" Pod="csi-node-driver-8vq6x" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44b8980a-9966-4148-82d4-7b2506ae2042", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767", Pod:"csi-node-driver-8vq6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a4eded6253", MAC:"be:37:99:b5:27:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:27.770925 containerd[1472]: 2025-09-13 00:07:27.755 [INFO][4703] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767" Namespace="calico-system" Pod="csi-node-driver-8vq6x" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:27.872174 containerd[1472]: time="2025-09-13T00:07:27.869429923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:27.872174 containerd[1472]: time="2025-09-13T00:07:27.869538454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:27.872174 containerd[1472]: time="2025-09-13T00:07:27.869561941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:27.872765 systemd-networkd[1366]: calicf2b7ad19d6: Link UP Sep 13 00:07:27.874359 systemd-networkd[1366]: calicf2b7ad19d6: Gained carrier Sep 13 00:07:27.894158 containerd[1472]: time="2025-09-13T00:07:27.890280989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.560 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0 goldmane-54d579b49d- calico-system b0ed41cd-297f-49fe-95fb-ff8bf9f215d9 1064 0 2025-09-13 00:07:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-3ba90871da goldmane-54d579b49d-rm7dn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicf2b7ad19d6 [] [] }} ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Namespace="calico-system" Pod="goldmane-54d579b49d-rm7dn" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.561 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Namespace="calico-system" Pod="goldmane-54d579b49d-rm7dn" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.651 [INFO][4754] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" HandleID="k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.651 [INFO][4754] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" HandleID="k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-3ba90871da", "pod":"goldmane-54d579b49d-rm7dn", "timestamp":"2025-09-13 00:07:27.651202915 +0000 UTC"}, Hostname:"ci-4081.3.5-n-3ba90871da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.651 [INFO][4754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.711 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.711 [INFO][4754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-3ba90871da' Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.746 [INFO][4754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.775 [INFO][4754] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.792 [INFO][4754] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.797 [INFO][4754] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.803 [INFO][4754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.803 [INFO][4754] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.807 [INFO][4754] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03 Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.817 [INFO][4754] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.840 [INFO][4754] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.199/26] block=192.168.125.192/26 handle="k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.840 [INFO][4754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.199/26] handle="k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.840 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:27.955182 containerd[1472]: 2025-09-13 00:07:27.840 [INFO][4754] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.199/26] IPv6=[] ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" HandleID="k8s-pod-network.8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.957217 containerd[1472]: 2025-09-13 00:07:27.850 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Namespace="calico-system" Pod="goldmane-54d579b49d-rm7dn" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"", Pod:"goldmane-54d579b49d-rm7dn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf2b7ad19d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:27.957217 containerd[1472]: 2025-09-13 00:07:27.850 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.199/32] ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Namespace="calico-system" Pod="goldmane-54d579b49d-rm7dn" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.957217 containerd[1472]: 2025-09-13 00:07:27.850 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf2b7ad19d6 ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Namespace="calico-system" Pod="goldmane-54d579b49d-rm7dn" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.957217 containerd[1472]: 2025-09-13 00:07:27.874 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Namespace="calico-system" Pod="goldmane-54d579b49d-rm7dn" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.957217 containerd[1472]: 2025-09-13 00:07:27.878 [INFO][4726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" 
Namespace="calico-system" Pod="goldmane-54d579b49d-rm7dn" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03", Pod:"goldmane-54d579b49d-rm7dn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf2b7ad19d6", MAC:"66:ed:48:89:24:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:27.957217 containerd[1472]: 2025-09-13 00:07:27.926 [INFO][4726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03" Namespace="calico-system" Pod="goldmane-54d579b49d-rm7dn" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:27.968928 systemd[1]: Started cri-containerd-9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767.scope - libcontainer container 9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767. Sep 13 00:07:28.036554 systemd-networkd[1366]: cali02e5979b869: Gained IPv6LL Sep 13 00:07:28.053798 systemd-networkd[1366]: cali4402d01bb21: Link UP Sep 13 00:07:28.060287 systemd-networkd[1366]: cali4402d01bb21: Gained carrier Sep 13 00:07:28.089764 containerd[1472]: time="2025-09-13T00:07:28.087670858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:28.089764 containerd[1472]: time="2025-09-13T00:07:28.087765723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:28.089764 containerd[1472]: time="2025-09-13T00:07:28.087780747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:28.089764 containerd[1472]: time="2025-09-13T00:07:28.087899276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.521 [INFO][4714] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0 calico-kube-controllers-767f8569c7- calico-system 86461376-5af6-4498-96f8-f26687392906 1062 0 2025-09-13 00:07:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:767f8569c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-3ba90871da calico-kube-controllers-767f8569c7-m5z44 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4402d01bb21 [] [] }} ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Namespace="calico-system" Pod="calico-kube-controllers-767f8569c7-m5z44" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.521 [INFO][4714] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Namespace="calico-system" Pod="calico-kube-controllers-767f8569c7-m5z44" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.656 [INFO][4749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" HandleID="k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.657 [INFO][4749] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" HandleID="k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-3ba90871da", "pod":"calico-kube-controllers-767f8569c7-m5z44", "timestamp":"2025-09-13 00:07:27.656530625 +0000 UTC"}, Hostname:"ci-4081.3.5-n-3ba90871da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.657 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.840 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.841 [INFO][4749] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-3ba90871da' Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.916 [INFO][4749] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.935 [INFO][4749] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.962 [INFO][4749] ipam/ipam.go 511: Trying affinity for 192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.970 [INFO][4749] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.978 [INFO][4749] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.192/26 host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.978 [INFO][4749] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.125.192/26 handle="k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.982 [INFO][4749] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99 Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:27.996 [INFO][4749] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.125.192/26 handle="k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:28.015 [INFO][4749] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.125.200/26] block=192.168.125.192/26 handle="k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:28.015 [INFO][4749] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.200/26] handle="k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" host="ci-4081.3.5-n-3ba90871da" Sep 13 00:07:28.106771 containerd[1472]: 2025-09-13 00:07:28.016 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:07:28.109842 containerd[1472]: 2025-09-13 00:07:28.016 [INFO][4749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.200/26] IPv6=[] ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" HandleID="k8s-pod-network.383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:28.109842 containerd[1472]: 2025-09-13 00:07:28.023 [INFO][4714] cni-plugin/k8s.go 418: Populated endpoint ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Namespace="calico-system" Pod="calico-kube-controllers-767f8569c7-m5z44" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0", GenerateName:"calico-kube-controllers-767f8569c7-", Namespace:"calico-system", SelfLink:"", UID:"86461376-5af6-4498-96f8-f26687392906", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"767f8569c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"", Pod:"calico-kube-controllers-767f8569c7-m5z44", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4402d01bb21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:28.109842 containerd[1472]: 2025-09-13 00:07:28.023 [INFO][4714] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.200/32] ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Namespace="calico-system" Pod="calico-kube-controllers-767f8569c7-m5z44" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:28.109842 containerd[1472]: 2025-09-13 00:07:28.023 [INFO][4714] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4402d01bb21 ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Namespace="calico-system" Pod="calico-kube-controllers-767f8569c7-m5z44" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:28.109842 containerd[1472]: 2025-09-13 00:07:28.073 [INFO][4714] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Namespace="calico-system" Pod="calico-kube-controllers-767f8569c7-m5z44" 
WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:28.110083 containerd[1472]: 2025-09-13 00:07:28.075 [INFO][4714] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Namespace="calico-system" Pod="calico-kube-controllers-767f8569c7-m5z44" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0", GenerateName:"calico-kube-controllers-767f8569c7-", Namespace:"calico-system", SelfLink:"", UID:"86461376-5af6-4498-96f8-f26687392906", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"767f8569c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99", Pod:"calico-kube-controllers-767f8569c7-m5z44", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4402d01bb21", MAC:"2e:fb:fe:4a:59:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:28.110083 containerd[1472]: 2025-09-13 00:07:28.094 [INFO][4714] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99" Namespace="calico-system" Pod="calico-kube-controllers-767f8569c7-m5z44" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:28.170790 containerd[1472]: time="2025-09-13T00:07:28.170748927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vq6x,Uid:44b8980a-9966-4148-82d4-7b2506ae2042,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767\"" Sep 13 00:07:28.176004 systemd[1]: Started cri-containerd-8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03.scope - libcontainer container 8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03. Sep 13 00:07:28.209813 containerd[1472]: time="2025-09-13T00:07:28.208221595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:07:28.209813 containerd[1472]: time="2025-09-13T00:07:28.208311632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:07:28.209813 containerd[1472]: time="2025-09-13T00:07:28.208326634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:28.209813 containerd[1472]: time="2025-09-13T00:07:28.208867067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:07:28.270915 systemd[1]: Started cri-containerd-383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99.scope - libcontainer container 383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99. Sep 13 00:07:28.304249 containerd[1472]: time="2025-09-13T00:07:28.303694770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-rm7dn,Uid:b0ed41cd-297f-49fe-95fb-ff8bf9f215d9,Namespace:calico-system,Attempt:1,} returns sandbox id \"8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03\"" Sep 13 00:07:28.367664 containerd[1472]: time="2025-09-13T00:07:28.367498895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-767f8569c7-m5z44,Uid:86461376-5af6-4498-96f8-f26687392906,Namespace:calico-system,Attempt:1,} returns sandbox id \"383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99\"" Sep 13 00:07:28.413415 kubelet[2502]: E0913 00:07:28.413144 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:28.905438 containerd[1472]: time="2025-09-13T00:07:28.903997001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:28.906326 containerd[1472]: time="2025-09-13T00:07:28.906279433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:07:28.907251 containerd[1472]: time="2025-09-13T00:07:28.907197323Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:28.909644 containerd[1472]: time="2025-09-13T00:07:28.909610940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:28.910887 containerd[1472]: time="2025-09-13T00:07:28.910843828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.36695628s" Sep 13 00:07:28.910887 containerd[1472]: time="2025-09-13T00:07:28.910888491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:07:28.914956 containerd[1472]: time="2025-09-13T00:07:28.914787501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:07:28.918079 containerd[1472]: time="2025-09-13T00:07:28.918034240Z" level=info msg="CreateContainer within 
sandbox \"a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:07:28.943801 containerd[1472]: time="2025-09-13T00:07:28.943675562Z" level=info msg="CreateContainer within sandbox \"a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff31355dbb0e0a9125aefa7c2efb7208fb904b899eb10dfc7c9772ccc52f9227\"" Sep 13 00:07:28.945446 containerd[1472]: time="2025-09-13T00:07:28.945046181Z" level=info msg="StartContainer for \"ff31355dbb0e0a9125aefa7c2efb7208fb904b899eb10dfc7c9772ccc52f9227\"" Sep 13 00:07:29.006415 systemd[1]: Started cri-containerd-ff31355dbb0e0a9125aefa7c2efb7208fb904b899eb10dfc7c9772ccc52f9227.scope - libcontainer container ff31355dbb0e0a9125aefa7c2efb7208fb904b899eb10dfc7c9772ccc52f9227. Sep 13 00:07:29.117283 containerd[1472]: time="2025-09-13T00:07:29.115492929Z" level=info msg="StartContainer for \"ff31355dbb0e0a9125aefa7c2efb7208fb904b899eb10dfc7c9772ccc52f9227\" returns successfully" Sep 13 00:07:29.311888 systemd-networkd[1366]: calicf2b7ad19d6: Gained IPv6LL Sep 13 00:07:29.346902 containerd[1472]: time="2025-09-13T00:07:29.345420032Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:29.346902 containerd[1472]: time="2025-09-13T00:07:29.345761732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:07:29.352732 containerd[1472]: time="2025-09-13T00:07:29.352586114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 437.758097ms" Sep 13 00:07:29.352732 containerd[1472]: time="2025-09-13T00:07:29.352648554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:07:29.355075 containerd[1472]: time="2025-09-13T00:07:29.354976466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:07:29.360155 containerd[1472]: time="2025-09-13T00:07:29.360106367Z" level=info msg="CreateContainer within sandbox \"d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:07:29.376121 systemd-networkd[1366]: cali7a4eded6253: Gained IPv6LL Sep 13 00:07:29.381117 containerd[1472]: time="2025-09-13T00:07:29.381072225Z" level=info msg="CreateContainer within sandbox \"d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"042b89b9cb565531b1ccd61ee5d7dbcce3f6480cb71ef2c752a60e7a74bc32f8\"" Sep 13 00:07:29.382299 containerd[1472]: time="2025-09-13T00:07:29.382114258Z" level=info msg="StartContainer for \"042b89b9cb565531b1ccd61ee5d7dbcce3f6480cb71ef2c752a60e7a74bc32f8\"" Sep 13 00:07:29.440933 systemd[1]: Started cri-containerd-042b89b9cb565531b1ccd61ee5d7dbcce3f6480cb71ef2c752a60e7a74bc32f8.scope - libcontainer container 042b89b9cb565531b1ccd61ee5d7dbcce3f6480cb71ef2c752a60e7a74bc32f8. 
Sep 13 00:07:29.623871 containerd[1472]: time="2025-09-13T00:07:29.623671294Z" level=info msg="StartContainer for \"042b89b9cb565531b1ccd61ee5d7dbcce3f6480cb71ef2c752a60e7a74bc32f8\" returns successfully" Sep 13 00:07:29.696412 systemd-networkd[1366]: cali4402d01bb21: Gained IPv6LL Sep 13 00:07:29.709128 systemd[1]: run-containerd-runc-k8s.io-ff31355dbb0e0a9125aefa7c2efb7208fb904b899eb10dfc7c9772ccc52f9227-runc.Xgr4Mp.mount: Deactivated successfully. Sep 13 00:07:30.457534 kubelet[2502]: I0913 00:07:30.457499 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:30.478769 kubelet[2502]: I0913 00:07:30.478697 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ccb5685db-9nxhh" podStartSLOduration=29.108355807 podStartE2EDuration="33.478655057s" podCreationTimestamp="2025-09-13 00:06:57 +0000 UTC" firstStartedPulling="2025-09-13 00:07:24.543296336 +0000 UTC m=+43.683294060" lastFinishedPulling="2025-09-13 00:07:28.913595587 +0000 UTC m=+48.053593310" observedRunningTime="2025-09-13 00:07:29.468727006 +0000 UTC m=+48.608724754" watchObservedRunningTime="2025-09-13 00:07:30.478655057 +0000 UTC m=+49.618652801" Sep 13 00:07:31.125043 containerd[1472]: time="2025-09-13T00:07:31.124659150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:31.127018 containerd[1472]: time="2025-09-13T00:07:31.126385729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:07:31.128521 containerd[1472]: time="2025-09-13T00:07:31.127352827Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:31.133439 containerd[1472]: time="2025-09-13T00:07:31.133233442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:31.135416 containerd[1472]: time="2025-09-13T00:07:31.134428256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.779403359s" Sep 13 00:07:31.136235 containerd[1472]: time="2025-09-13T00:07:31.135866530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:07:31.137556 containerd[1472]: time="2025-09-13T00:07:31.137530530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:07:31.141097 containerd[1472]: time="2025-09-13T00:07:31.141022224Z" level=info msg="CreateContainer within sandbox \"9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:07:31.254130 containerd[1472]: time="2025-09-13T00:07:31.254084826Z" level=info msg="CreateContainer within sandbox \"9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"50b2ed3d0c2dcce684520e2e1c9d2857d2879f94bbc66b84f2c6644aebe73193\"" Sep 13 00:07:31.256384 containerd[1472]: time="2025-09-13T00:07:31.254909112Z" level=info msg="StartContainer for \"50b2ed3d0c2dcce684520e2e1c9d2857d2879f94bbc66b84f2c6644aebe73193\"" Sep 13 00:07:31.449586 systemd[1]: run-containerd-runc-k8s.io-50b2ed3d0c2dcce684520e2e1c9d2857d2879f94bbc66b84f2c6644aebe73193-runc.5dEHrE.mount: Deactivated successfully. Sep 13 00:07:31.460077 systemd[1]: Started cri-containerd-50b2ed3d0c2dcce684520e2e1c9d2857d2879f94bbc66b84f2c6644aebe73193.scope - libcontainer container 50b2ed3d0c2dcce684520e2e1c9d2857d2879f94bbc66b84f2c6644aebe73193. Sep 13 00:07:31.479117 kubelet[2502]: I0913 00:07:31.478904 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:31.543028 containerd[1472]: time="2025-09-13T00:07:31.542976488Z" level=info msg="StartContainer for \"50b2ed3d0c2dcce684520e2e1c9d2857d2879f94bbc66b84f2c6644aebe73193\" returns successfully" Sep 13 00:07:33.832157 systemd[1]: Started sshd@7-143.198.49.51:22-139.178.68.195:53458.service - OpenSSH per-connection server daemon (139.178.68.195:53458). Sep 13 00:07:33.982219 sshd[5061]: Accepted publickey for core from 139.178.68.195 port 53458 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:33.987041 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:33.998023 systemd-logind[1445]: New session 8 of user core. Sep 13 00:07:34.005002 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:07:34.783529 sshd[5061]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:34.790097 systemd[1]: sshd@7-143.198.49.51:22-139.178.68.195:53458.service: Deactivated successfully. Sep 13 00:07:34.794070 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:07:34.796716 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:07:34.798257 systemd-logind[1445]: Removed session 8. Sep 13 00:07:35.142063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414346148.mount: Deactivated successfully. 
Sep 13 00:07:35.919444 containerd[1472]: time="2025-09-13T00:07:35.918915856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:35.923780 containerd[1472]: time="2025-09-13T00:07:35.923250161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:07:35.958220 containerd[1472]: time="2025-09-13T00:07:35.958100307Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:35.960828 containerd[1472]: time="2025-09-13T00:07:35.960528114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:35.961564 containerd[1472]: time="2025-09-13T00:07:35.961351886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.823789018s" Sep 13 00:07:35.961564 containerd[1472]: time="2025-09-13T00:07:35.961381131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:07:35.962992 containerd[1472]: time="2025-09-13T00:07:35.962968069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:07:35.972754 containerd[1472]: time="2025-09-13T00:07:35.972701087Z" level=info msg="CreateContainer within sandbox \"8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:07:36.007187 containerd[1472]: time="2025-09-13T00:07:36.007112827Z" level=info msg="CreateContainer within sandbox \"8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6cc9d07734e86f8f9cae47a5a99e04953b5d6d90237fd07c096e6aabbf85d4cc\"" Sep 13 00:07:36.008091 containerd[1472]: time="2025-09-13T00:07:36.007896914Z" level=info msg="StartContainer for \"6cc9d07734e86f8f9cae47a5a99e04953b5d6d90237fd07c096e6aabbf85d4cc\"" Sep 13 00:07:36.244098 systemd[1]: Started cri-containerd-6cc9d07734e86f8f9cae47a5a99e04953b5d6d90237fd07c096e6aabbf85d4cc.scope - libcontainer container 6cc9d07734e86f8f9cae47a5a99e04953b5d6d90237fd07c096e6aabbf85d4cc. 
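A quick back-of-the-envelope check on the goldmane pull summarized above: containerd reports 66357526 bytes read and a pull time of 4.823789018s, which also matches the gap between the PullImage request at 00:07:31.137 and the Pulled message at 00:07:35.961. The sketch below only restates those two logged figures as a throughput estimate; it measures nothing itself:

package main

import "fmt"

func main() {
	const bytesRead = 66357526  // "stop pulling image ...: active requests=0, bytes read=66357526"
	const seconds = 4.823789018 // "Pulled image ... in 4.823789018s"

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.1f MiB/s\n", mib, seconds, mib/seconds)
}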
Sep 13 00:07:36.316596 containerd[1472]: time="2025-09-13T00:07:36.316537762Z" level=info msg="StartContainer for \"6cc9d07734e86f8f9cae47a5a99e04953b5d6d90237fd07c096e6aabbf85d4cc\" returns successfully" Sep 13 00:07:36.658523 kubelet[2502]: I0913 00:07:36.657283 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:07:37.037763 kubelet[2502]: I0913 00:07:37.031428 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ccb5685db-bjcrf" podStartSLOduration=37.532510212 podStartE2EDuration="40.026714827s" podCreationTimestamp="2025-09-13 00:06:57 +0000 UTC" firstStartedPulling="2025-09-13 00:07:26.860141826 +0000 UTC m=+46.000139549" lastFinishedPulling="2025-09-13 00:07:29.354346428 +0000 UTC m=+48.494344164" observedRunningTime="2025-09-13 00:07:30.481896769 +0000 UTC m=+49.621894513" watchObservedRunningTime="2025-09-13 00:07:37.026714827 +0000 UTC m=+56.166712571" Sep 13 00:07:37.037763 kubelet[2502]: I0913 00:07:37.037504 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-rm7dn" podStartSLOduration=29.38226803 podStartE2EDuration="37.037481308s" podCreationTimestamp="2025-09-13 00:07:00 +0000 UTC" firstStartedPulling="2025-09-13 00:07:28.30720942 +0000 UTC m=+47.447207168" lastFinishedPulling="2025-09-13 00:07:35.962422722 +0000 UTC m=+55.102420446" observedRunningTime="2025-09-13 00:07:36.84512583 +0000 UTC m=+55.985123572" watchObservedRunningTime="2025-09-13 00:07:37.037481308 +0000 UTC m=+56.177479053" Sep 13 00:07:37.742934 systemd[1]: run-containerd-runc-k8s.io-6cc9d07734e86f8f9cae47a5a99e04953b5d6d90237fd07c096e6aabbf85d4cc-runc.canOpb.mount: Deactivated successfully. Sep 13 00:07:38.653176 systemd[1]: run-containerd-runc-k8s.io-6cc9d07734e86f8f9cae47a5a99e04953b5d6d90237fd07c096e6aabbf85d4cc-runc.NYlK94.mount: Deactivated successfully. 
Sep 13 00:07:39.416078 containerd[1472]: time="2025-09-13T00:07:39.416011086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:39.417298 containerd[1472]: time="2025-09-13T00:07:39.417246511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:07:39.418228 containerd[1472]: time="2025-09-13T00:07:39.418191556Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:39.420220 containerd[1472]: time="2025-09-13T00:07:39.420169295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:39.421567 containerd[1472]: time="2025-09-13T00:07:39.421517746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.457283307s" Sep 13 00:07:39.421567 containerd[1472]: time="2025-09-13T00:07:39.421563157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:07:39.481853 containerd[1472]: time="2025-09-13T00:07:39.481548555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:07:39.697062 containerd[1472]: time="2025-09-13T00:07:39.696927758Z" level=info msg="CreateContainer within sandbox \"383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:07:39.716464 containerd[1472]: time="2025-09-13T00:07:39.716404370Z" level=info msg="CreateContainer within sandbox \"383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d61d286bf90a017e150c0e7c810217d92d8eb416af83c79b5b441b06e5c89174\"" Sep 13 00:07:39.759057 containerd[1472]: time="2025-09-13T00:07:39.758001218Z" level=info msg="StartContainer for \"d61d286bf90a017e150c0e7c810217d92d8eb416af83c79b5b441b06e5c89174\"" Sep 13 00:07:39.859988 systemd[1]: Started cri-containerd-d61d286bf90a017e150c0e7c810217d92d8eb416af83c79b5b441b06e5c89174.scope - libcontainer container d61d286bf90a017e150c0e7c810217d92d8eb416af83c79b5b441b06e5c89174. Sep 13 00:07:39.862046 systemd[1]: Started sshd@8-143.198.49.51:22-139.178.68.195:53460.service - OpenSSH per-connection server daemon (139.178.68.195:53460). 
Sep 13 00:07:39.979188 containerd[1472]: time="2025-09-13T00:07:39.978003913Z" level=info msg="StartContainer for \"d61d286bf90a017e150c0e7c810217d92d8eb416af83c79b5b441b06e5c89174\" returns successfully" Sep 13 00:07:40.061140 sshd[5194]: Accepted publickey for core from 139.178.68.195 port 53460 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:40.066332 sshd[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:40.075776 systemd-logind[1445]: New session 9 of user core. Sep 13 00:07:40.087063 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:07:40.706626 sshd[5194]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:40.711952 systemd[1]: sshd@8-143.198.49.51:22-139.178.68.195:53460.service: Deactivated successfully. Sep 13 00:07:40.714778 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:07:40.717957 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:07:40.719582 systemd-logind[1445]: Removed session 9. Sep 13 00:07:40.886518 kubelet[2502]: I0913 00:07:40.886415 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-767f8569c7-m5z44" podStartSLOduration=28.77672554 podStartE2EDuration="39.870893101s" podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="2025-09-13 00:07:28.3708402 +0000 UTC m=+47.510837923" lastFinishedPulling="2025-09-13 00:07:39.465007737 +0000 UTC m=+58.605005484" observedRunningTime="2025-09-13 00:07:40.869347557 +0000 UTC m=+60.009345301" watchObservedRunningTime="2025-09-13 00:07:40.870893101 +0000 UTC m=+60.010890871" Sep 13 00:07:40.904651 systemd[1]: run-containerd-runc-k8s.io-d61d286bf90a017e150c0e7c810217d92d8eb416af83c79b5b441b06e5c89174-runc.xv9nlV.mount: Deactivated successfully. 
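One more cross-check, this time on the calico-kube-controllers image: the kubelet window reported above (firstStartedPulling 00:07:28.370 to lastFinishedPulling 00:07:39.465, about 11.1s) is much longer than the 3.457s containerd attributes to the pull itself. The gap is consistent with this pull having queued behind the goldmane pull, which only finished at 00:07:35.961; the kubelet serializes image pulls by default. The sketch below just subtracts the logged timestamps; the queueing interpretation is an assumption, not something the log states:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstStarted := mustParse("2025-09-13 00:07:28.3708402 +0000 UTC")   // firstStartedPulling (kube-controllers)
	goldmaneDone := mustParse("2025-09-13 00:07:35.961351886 +0000 UTC") // goldmane pull completed
	lastFinished := mustParse("2025-09-13 00:07:39.465007737 +0000 UTC") // lastFinishedPulling (kube-controllers)

	fmt.Println("kubelet pull window:   ", lastFinished.Sub(firstStarted)) // ~11.094s
	fmt.Println("queued behind goldmane:", goldmaneDone.Sub(firstStarted)) // ~7.59s
	fmt.Println("remaining pull time:   ", lastFinished.Sub(goldmaneDone)) // ~3.50s, close to containerd's 3.457s
}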
Sep 13 00:07:41.178336 containerd[1472]: time="2025-09-13T00:07:41.177883706Z" level=info msg="StopPodSandbox for \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\"" Sep 13 00:07:41.799843 containerd[1472]: time="2025-09-13T00:07:41.799798874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:41.801358 containerd[1472]: time="2025-09-13T00:07:41.801307222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:07:41.802998 containerd[1472]: time="2025-09-13T00:07:41.802073467Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:41.804458 containerd[1472]: time="2025-09-13T00:07:41.804414764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:07:41.810998 containerd[1472]: time="2025-09-13T00:07:41.810861041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.329261712s" Sep 13 00:07:41.811204 containerd[1472]: time="2025-09-13T00:07:41.811181368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.633 [WARNING][5275] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b7cad2b0-ab12-4053-971a-70254e4b0fc9", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4", Pod:"coredns-674b8bbfcf-qmk6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid81de1cbc89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.640 [INFO][5275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.640 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" iface="eth0" netns="" Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.641 [INFO][5275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.641 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.861 [INFO][5282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.864 [INFO][5282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.864 [INFO][5282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.884 [WARNING][5282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.884 [INFO][5282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.890 [INFO][5282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:41.899599 containerd[1472]: 2025-09-13 00:07:41.895 [INFO][5275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:41.902201 containerd[1472]: time="2025-09-13T00:07:41.900256486Z" level=info msg="TearDown network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\" successfully" Sep 13 00:07:41.902201 containerd[1472]: time="2025-09-13T00:07:41.900285537Z" level=info msg="StopPodSandbox for \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\" returns successfully" Sep 13 00:07:41.967764 containerd[1472]: time="2025-09-13T00:07:41.967120407Z" level=info msg="CreateContainer within sandbox \"9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:07:41.996219 containerd[1472]: time="2025-09-13T00:07:41.995237724Z" level=info msg="CreateContainer within sandbox \"9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cd74cc8e0bf574a9ab5bdf1b5296a9828d30a78425aabf0c4033589f9f7be4d9\"" Sep 13 00:07:41.995858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819085820.mount: Deactivated successfully. Sep 13 00:07:42.007065 containerd[1472]: time="2025-09-13T00:07:42.006446124Z" level=info msg="StartContainer for \"cd74cc8e0bf574a9ab5bdf1b5296a9828d30a78425aabf0c4033589f9f7be4d9\"" Sep 13 00:07:42.072896 systemd[1]: Started cri-containerd-cd74cc8e0bf574a9ab5bdf1b5296a9828d30a78425aabf0c4033589f9f7be4d9.scope - libcontainer container cd74cc8e0bf574a9ab5bdf1b5296a9828d30a78425aabf0c4033589f9f7be4d9. Sep 13 00:07:42.138782 containerd[1472]: time="2025-09-13T00:07:42.137753358Z" level=info msg="RemovePodSandbox for \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\"" Sep 13 00:07:42.141012 containerd[1472]: time="2025-09-13T00:07:42.140364683Z" level=info msg="Forcibly stopping sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\"" Sep 13 00:07:42.165963 containerd[1472]: time="2025-09-13T00:07:42.165787794Z" level=info msg="StartContainer for \"cd74cc8e0bf574a9ab5bdf1b5296a9828d30a78425aabf0c4033589f9f7be4d9\" returns successfully" Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.205 [WARNING][5323] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b7cad2b0-ab12-4053-971a-70254e4b0fc9", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"eebff117c28dbe4cc90372ecda10f197a48aa873ae3dd19cf61663c8b178c4f4", Pod:"coredns-674b8bbfcf-qmk6c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid81de1cbc89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.205 [INFO][5323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.205 [INFO][5323] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" iface="eth0" netns="" Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.205 [INFO][5323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.206 [INFO][5323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.235 [INFO][5341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.235 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.236 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.244 [WARNING][5341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.244 [INFO][5341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" HandleID="k8s-pod-network.a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--qmk6c-eth0" Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.247 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:42.255661 containerd[1472]: 2025-09-13 00:07:42.251 [INFO][5323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47" Sep 13 00:07:42.257890 containerd[1472]: time="2025-09-13T00:07:42.256537613Z" level=info msg="TearDown network for sandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\" successfully" Sep 13 00:07:42.287891 containerd[1472]: time="2025-09-13T00:07:42.287809654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:42.297402 containerd[1472]: time="2025-09-13T00:07:42.297350685Z" level=info msg="RemovePodSandbox \"a3cc3743c8549d1796d38d9a10a921396108baabb54bb1804d399c5307eb9a47\" returns successfully" Sep 13 00:07:42.300919 containerd[1472]: time="2025-09-13T00:07:42.300590249Z" level=info msg="StopPodSandbox for \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\"" Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.352 [WARNING][5355] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44b8980a-9966-4148-82d4-7b2506ae2042", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767", Pod:"csi-node-driver-8vq6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a4eded6253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.352 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.352 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" iface="eth0" netns="" Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.352 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.352 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.383 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.383 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.383 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.390 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.390 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.392 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:42.398049 containerd[1472]: 2025-09-13 00:07:42.395 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:42.399819 containerd[1472]: time="2025-09-13T00:07:42.398474864Z" level=info msg="TearDown network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\" successfully" Sep 13 00:07:42.399819 containerd[1472]: time="2025-09-13T00:07:42.398524075Z" level=info msg="StopPodSandbox for \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\" returns successfully" Sep 13 00:07:42.399819 containerd[1472]: time="2025-09-13T00:07:42.399236329Z" level=info msg="RemovePodSandbox for \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\"" Sep 13 00:07:42.399819 containerd[1472]: time="2025-09-13T00:07:42.399266551Z" level=info msg="Forcibly stopping sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\"" Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.462 [WARNING][5376] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"44b8980a-9966-4148-82d4-7b2506ae2042", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"9b9948b3504febe793096fd23c27fec4056677c61fd3b82b340ce6e07f1f1767", Pod:"csi-node-driver-8vq6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a4eded6253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.462 [INFO][5376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.462 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" iface="eth0" netns="" Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.462 [INFO][5376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.462 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.513 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.514 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.514 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.525 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.525 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" HandleID="k8s-pod-network.abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Workload="ci--4081.3.5--n--3ba90871da-k8s-csi--node--driver--8vq6x-eth0" Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.530 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:42.539036 containerd[1472]: 2025-09-13 00:07:42.536 [INFO][5376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36" Sep 13 00:07:42.539514 containerd[1472]: time="2025-09-13T00:07:42.539082066Z" level=info msg="TearDown network for sandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\" successfully" Sep 13 00:07:42.542590 containerd[1472]: time="2025-09-13T00:07:42.542553330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:42.542786 containerd[1472]: time="2025-09-13T00:07:42.542633380Z" level=info msg="RemovePodSandbox \"abfd5674799396b0aa8e9f687a6a05ed72ec30a400816a2fae5319d597e29c36\" returns successfully" Sep 13 00:07:42.543940 containerd[1472]: time="2025-09-13T00:07:42.543905836Z" level=info msg="StopPodSandbox for \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\"" Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.642 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b3b4717-bce4-4f0b-a8c4-81a0865218a2", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b", Pod:"coredns-674b8bbfcf-zf5bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3ab12651a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.643 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.643 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" iface="eth0" netns="" Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.643 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.643 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.680 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.680 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.680 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.690 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.690 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.692 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:42.698789 containerd[1472]: 2025-09-13 00:07:42.696 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:42.699339 containerd[1472]: time="2025-09-13T00:07:42.698814695Z" level=info msg="TearDown network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\" successfully" Sep 13 00:07:42.699339 containerd[1472]: time="2025-09-13T00:07:42.698854046Z" level=info msg="StopPodSandbox for \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\" returns successfully" Sep 13 00:07:42.702215 containerd[1472]: time="2025-09-13T00:07:42.701357758Z" level=info msg="RemovePodSandbox for \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\"" Sep 13 00:07:42.702215 containerd[1472]: time="2025-09-13T00:07:42.701392967Z" level=info msg="Forcibly stopping sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\"" Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.781 [WARNING][5418] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b3b4717-bce4-4f0b-a8c4-81a0865218a2", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"a40c8ed5727ac9873f4f7672515edc9a9d9e17867bc501b8c7ebc0843171228b", Pod:"coredns-674b8bbfcf-zf5bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3ab12651a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.782 [INFO][5418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.782 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" iface="eth0" netns="" Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.782 [INFO][5418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.782 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.810 [INFO][5425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.810 [INFO][5425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.810 [INFO][5425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.819 [WARNING][5425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.819 [INFO][5425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" HandleID="k8s-pod-network.8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Workload="ci--4081.3.5--n--3ba90871da-k8s-coredns--674b8bbfcf--zf5bf-eth0" Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.823 [INFO][5425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:42.832016 containerd[1472]: 2025-09-13 00:07:42.826 [INFO][5418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa" Sep 13 00:07:42.832016 containerd[1472]: time="2025-09-13T00:07:42.830970805Z" level=info msg="TearDown network for sandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\" successfully" Sep 13 00:07:42.844337 containerd[1472]: time="2025-09-13T00:07:42.844209501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:42.844337 containerd[1472]: time="2025-09-13T00:07:42.844315041Z" level=info msg="RemovePodSandbox \"8e88fe38fe518c8c1b490d07285850daf2d2e744cc391a9197031d67ffd1cdaa\" returns successfully" Sep 13 00:07:42.846338 containerd[1472]: time="2025-09-13T00:07:42.846293832Z" level=info msg="StopPodSandbox for \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\"" Sep 13 00:07:42.867973 kubelet[2502]: I0913 00:07:42.867631 2502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8vq6x" podStartSLOduration=28.199751684 podStartE2EDuration="41.867590283s" podCreationTimestamp="2025-09-13 00:07:01 +0000 UTC" firstStartedPulling="2025-09-13 00:07:28.175580744 +0000 UTC m=+47.315578467" lastFinishedPulling="2025-09-13 00:07:41.843419313 +0000 UTC m=+60.983417066" observedRunningTime="2025-09-13 00:07:42.864522011 +0000 UTC m=+62.004519750" watchObservedRunningTime="2025-09-13 00:07:42.867590283 +0000 UTC m=+62.007588028" Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.923 [WARNING][5440] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0", GenerateName:"calico-apiserver-ccb5685db-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ea70e26-63af-4814-b06f-8478ac02d9b8", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccb5685db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669", Pod:"calico-apiserver-ccb5685db-9nxhh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b116d854bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.924 [INFO][5440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.924 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" iface="eth0" netns="" Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.924 [INFO][5440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.924 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.962 [INFO][5447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.963 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.963 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.972 [WARNING][5447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.972 [INFO][5447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.974 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:42.981671 containerd[1472]: 2025-09-13 00:07:42.977 [INFO][5440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:42.981671 containerd[1472]: time="2025-09-13T00:07:42.981222458Z" level=info msg="TearDown network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\" successfully" Sep 13 00:07:42.981671 containerd[1472]: time="2025-09-13T00:07:42.981252175Z" level=info msg="StopPodSandbox for \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\" returns successfully" Sep 13 00:07:42.983950 containerd[1472]: time="2025-09-13T00:07:42.983913108Z" level=info msg="RemovePodSandbox for \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\"" Sep 13 00:07:42.984028 containerd[1472]: time="2025-09-13T00:07:42.983982449Z" level=info msg="Forcibly stopping sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\"" Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.041 [WARNING][5461] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0", GenerateName:"calico-apiserver-ccb5685db-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ea70e26-63af-4814-b06f-8478ac02d9b8", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccb5685db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"a74ca502a4309b8168971d1d895de94d3ebdf7dbb2b826dfefc26ef92c6db669", Pod:"calico-apiserver-ccb5685db-9nxhh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b116d854bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.042 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.042 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" iface="eth0" netns="" Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.042 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.043 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.076 [INFO][5469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.076 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.076 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.084 [WARNING][5469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.084 [INFO][5469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" HandleID="k8s-pod-network.36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--9nxhh-eth0" Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.087 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:43.095422 containerd[1472]: 2025-09-13 00:07:43.091 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64" Sep 13 00:07:43.097150 containerd[1472]: time="2025-09-13T00:07:43.096334030Z" level=info msg="TearDown network for sandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\" successfully" Sep 13 00:07:43.100996 containerd[1472]: time="2025-09-13T00:07:43.100783013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:43.101346 containerd[1472]: time="2025-09-13T00:07:43.101192890Z" level=info msg="RemovePodSandbox \"36300c5bd8d08b551d2fc99bbe2cda6a18f9e5fc70c2a65e06ccbe78fbddad64\" returns successfully" Sep 13 00:07:43.101789 containerd[1472]: time="2025-09-13T00:07:43.101753955Z" level=info msg="StopPodSandbox for \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\"" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.179 [WARNING][5483] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.179 [INFO][5483] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.179 [INFO][5483] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" iface="eth0" netns="" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.179 [INFO][5483] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.179 [INFO][5483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.218 [INFO][5491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.218 [INFO][5491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.218 [INFO][5491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.231 [WARNING][5491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.232 [INFO][5491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.235 [INFO][5491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:43.244010 containerd[1472]: 2025-09-13 00:07:43.239 [INFO][5483] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:43.244010 containerd[1472]: time="2025-09-13T00:07:43.243870580Z" level=info msg="TearDown network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\" successfully" Sep 13 00:07:43.244010 containerd[1472]: time="2025-09-13T00:07:43.243897779Z" level=info msg="StopPodSandbox for \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\" returns successfully" Sep 13 00:07:43.247809 containerd[1472]: time="2025-09-13T00:07:43.245750876Z" level=info msg="RemovePodSandbox for \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\"" Sep 13 00:07:43.247809 containerd[1472]: time="2025-09-13T00:07:43.245784353Z" level=info msg="Forcibly stopping sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\"" Sep 13 00:07:43.301812 kubelet[2502]: I0913 00:07:43.298620 2502 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:07:43.304832 kubelet[2502]: I0913 00:07:43.304181 2502 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.347 [WARNING][5505] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" WorkloadEndpoint="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.347 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.348 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" iface="eth0" netns="" Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.348 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.348 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.388 [INFO][5512] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.388 [INFO][5512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.388 [INFO][5512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.396 [WARNING][5512] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.396 [INFO][5512] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" HandleID="k8s-pod-network.2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Workload="ci--4081.3.5--n--3ba90871da-k8s-whisker--d4c55f4b--h9l8z-eth0" Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.398 [INFO][5512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:43.403857 containerd[1472]: 2025-09-13 00:07:43.401 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed" Sep 13 00:07:43.404921 containerd[1472]: time="2025-09-13T00:07:43.403908116Z" level=info msg="TearDown network for sandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\" successfully" Sep 13 00:07:43.414415 containerd[1472]: time="2025-09-13T00:07:43.414359085Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:43.414568 containerd[1472]: time="2025-09-13T00:07:43.414442684Z" level=info msg="RemovePodSandbox \"2988b655dfeccf356cb2a1e5d1b96188d9c375e7cd02b227ab1a86e3ccb84aed\" returns successfully" Sep 13 00:07:43.415102 containerd[1472]: time="2025-09-13T00:07:43.415077313Z" level=info msg="StopPodSandbox for \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\"" Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.475 [WARNING][5526] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0", GenerateName:"calico-kube-controllers-767f8569c7-", Namespace:"calico-system", SelfLink:"", UID:"86461376-5af6-4498-96f8-f26687392906", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"767f8569c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99", Pod:"calico-kube-controllers-767f8569c7-m5z44", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4402d01bb21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.475 [INFO][5526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.475 [INFO][5526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" iface="eth0" netns="" Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.475 [INFO][5526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.475 [INFO][5526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.505 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.505 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.505 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.513 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.513 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.515 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:43.523775 containerd[1472]: 2025-09-13 00:07:43.519 [INFO][5526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:43.524850 containerd[1472]: time="2025-09-13T00:07:43.523747407Z" level=info msg="TearDown network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\" successfully" Sep 13 00:07:43.524918 containerd[1472]: time="2025-09-13T00:07:43.524851564Z" level=info msg="StopPodSandbox for \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\" returns successfully" Sep 13 00:07:43.525356 containerd[1472]: time="2025-09-13T00:07:43.525335870Z" level=info msg="RemovePodSandbox for \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\"" Sep 13 00:07:43.525405 containerd[1472]: time="2025-09-13T00:07:43.525367363Z" level=info msg="Forcibly stopping sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\"" Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.570 [WARNING][5548] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0", GenerateName:"calico-kube-controllers-767f8569c7-", Namespace:"calico-system", SelfLink:"", UID:"86461376-5af6-4498-96f8-f26687392906", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"767f8569c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"383529c566ace0fa2e90b3697a6a3702d242c16c7266104342660fb175f87b99", Pod:"calico-kube-controllers-767f8569c7-m5z44", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4402d01bb21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.570 [INFO][5548] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.570 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" iface="eth0" netns="" Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.570 [INFO][5548] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.570 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.601 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.602 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.602 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.610 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.610 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" HandleID="k8s-pod-network.f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--kube--controllers--767f8569c7--m5z44-eth0" Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.612 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:43.618216 containerd[1472]: 2025-09-13 00:07:43.615 [INFO][5548] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d" Sep 13 00:07:43.619197 containerd[1472]: time="2025-09-13T00:07:43.618277846Z" level=info msg="TearDown network for sandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\" successfully" Sep 13 00:07:43.621063 containerd[1472]: time="2025-09-13T00:07:43.621024722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:43.621220 containerd[1472]: time="2025-09-13T00:07:43.621113623Z" level=info msg="RemovePodSandbox \"f3d627942c8077758009adc395157da577802c4d0bad790ebd5752e510746d2d\" returns successfully" Sep 13 00:07:43.621791 containerd[1472]: time="2025-09-13T00:07:43.621768340Z" level=info msg="StopPodSandbox for \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\"" Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.667 [WARNING][5569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0", GenerateName:"calico-apiserver-ccb5685db-", Namespace:"calico-apiserver", SelfLink:"", UID:"557dbfe6-fe21-439e-9b11-49c1ef45886a", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccb5685db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f", Pod:"calico-apiserver-ccb5685db-bjcrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02e5979b869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.667 [INFO][5569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.667 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" iface="eth0" netns="" Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.667 [INFO][5569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.667 [INFO][5569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.706 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.706 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.706 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.714 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.714 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.717 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:43.722898 containerd[1472]: 2025-09-13 00:07:43.719 [INFO][5569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:43.724299 containerd[1472]: time="2025-09-13T00:07:43.722953489Z" level=info msg="TearDown network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\" successfully" Sep 13 00:07:43.724299 containerd[1472]: time="2025-09-13T00:07:43.722989485Z" level=info msg="StopPodSandbox for \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\" returns successfully" Sep 13 00:07:43.724299 containerd[1472]: time="2025-09-13T00:07:43.724210482Z" level=info msg="RemovePodSandbox for \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\"" Sep 13 00:07:43.724299 containerd[1472]: time="2025-09-13T00:07:43.724260354Z" level=info msg="Forcibly stopping sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\"" Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.769 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0", GenerateName:"calico-apiserver-ccb5685db-", Namespace:"calico-apiserver", SelfLink:"", UID:"557dbfe6-fe21-439e-9b11-49c1ef45886a", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 6, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccb5685db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"d1a614413eb932ab13a00d6f594ca0f15742ae56500e8489e81c06b00185209f", Pod:"calico-apiserver-ccb5685db-bjcrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02e5979b869", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.770 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.770 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" iface="eth0" netns="" Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.770 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.770 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.799 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.800 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.800 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.810 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.811 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" HandleID="k8s-pod-network.c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Workload="ci--4081.3.5--n--3ba90871da-k8s-calico--apiserver--ccb5685db--bjcrf-eth0" Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.813 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:43.821469 containerd[1472]: 2025-09-13 00:07:43.816 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def" Sep 13 00:07:43.821469 containerd[1472]: time="2025-09-13T00:07:43.821338692Z" level=info msg="TearDown network for sandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\" successfully" Sep 13 00:07:43.828190 containerd[1472]: time="2025-09-13T00:07:43.826916192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:43.828190 containerd[1472]: time="2025-09-13T00:07:43.827003705Z" level=info msg="RemovePodSandbox \"c508b0bafbc3c1cd88fc3b865ec4e08dee3daf5a875dc0c6f45c13f11dcf7def\" returns successfully" Sep 13 00:07:43.829501 containerd[1472]: time="2025-09-13T00:07:43.829178217Z" level=info msg="StopPodSandbox for \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\"" Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.905 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03", Pod:"goldmane-54d579b49d-rm7dn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf2b7ad19d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.905 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.905 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" iface="eth0" netns="" Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.905 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.905 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.949 [INFO][5619] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.949 [INFO][5619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.949 [INFO][5619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.962 [WARNING][5619] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.962 [INFO][5619] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.969 [INFO][5619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:43.975633 containerd[1472]: 2025-09-13 00:07:43.972 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:43.977115 containerd[1472]: time="2025-09-13T00:07:43.976716393Z" level=info msg="TearDown network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\" successfully" Sep 13 00:07:43.977115 containerd[1472]: time="2025-09-13T00:07:43.976779910Z" level=info msg="StopPodSandbox for \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\" returns successfully" Sep 13 00:07:43.977625 containerd[1472]: time="2025-09-13T00:07:43.977534170Z" level=info msg="RemovePodSandbox for \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\"" Sep 13 00:07:43.977625 containerd[1472]: time="2025-09-13T00:07:43.977572528Z" level=info msg="Forcibly stopping sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\"" Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.019 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b0ed41cd-297f-49fe-95fb-ff8bf9f215d9", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-3ba90871da", ContainerID:"8839458ad9e8376eaf1dfd6d91dfdee5fcb5d7f165df4febb1d28b67044fee03", Pod:"goldmane-54d579b49d-rm7dn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf2b7ad19d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.019 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.019 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" iface="eth0" netns="" Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.019 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.019 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.051 [INFO][5640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.051 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.051 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.060 [WARNING][5640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.060 [INFO][5640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" HandleID="k8s-pod-network.6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Workload="ci--4081.3.5--n--3ba90871da-k8s-goldmane--54d579b49d--rm7dn-eth0" Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.062 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:07:44.068489 containerd[1472]: 2025-09-13 00:07:44.065 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9" Sep 13 00:07:44.070274 containerd[1472]: time="2025-09-13T00:07:44.068597384Z" level=info msg="TearDown network for sandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\" successfully" Sep 13 00:07:44.074567 containerd[1472]: time="2025-09-13T00:07:44.073570086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:07:44.074567 containerd[1472]: time="2025-09-13T00:07:44.073749753Z" level=info msg="RemovePodSandbox \"6f84cac8bee624047be54ee683d06b569f09df43ff87c58b37c2b34d892fb7a9\" returns successfully" Sep 13 00:07:45.734653 systemd[1]: Started sshd@9-143.198.49.51:22-139.178.68.195:44702.service - OpenSSH per-connection server daemon (139.178.68.195:44702). Sep 13 00:07:45.867220 sshd[5647]: Accepted publickey for core from 139.178.68.195 port 44702 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:45.869094 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:45.876621 systemd-logind[1445]: New session 10 of user core. Sep 13 00:07:45.883989 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:07:46.414984 sshd[5647]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:46.426326 systemd[1]: sshd@9-143.198.49.51:22-139.178.68.195:44702.service: Deactivated successfully. Sep 13 00:07:46.428985 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:07:46.432559 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:07:46.439252 systemd[1]: Started sshd@10-143.198.49.51:22-139.178.68.195:44714.service - OpenSSH per-connection server daemon (139.178.68.195:44714). Sep 13 00:07:46.440101 systemd-logind[1445]: Removed session 10. Sep 13 00:07:46.502565 sshd[5661]: Accepted publickey for core from 139.178.68.195 port 44714 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:46.504477 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:46.510478 systemd-logind[1445]: New session 11 of user core. Sep 13 00:07:46.513859 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:07:46.757240 sshd[5661]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:46.772842 systemd[1]: sshd@10-143.198.49.51:22-139.178.68.195:44714.service: Deactivated successfully. 
Sep 13 00:07:46.776513 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:07:46.781339 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:07:46.788036 systemd[1]: Started sshd@11-143.198.49.51:22-139.178.68.195:44724.service - OpenSSH per-connection server daemon (139.178.68.195:44724). Sep 13 00:07:46.790103 systemd-logind[1445]: Removed session 11. Sep 13 00:07:46.844740 sshd[5672]: Accepted publickey for core from 139.178.68.195 port 44724 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:46.846965 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:46.854841 systemd-logind[1445]: New session 12 of user core. Sep 13 00:07:46.859838 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:07:47.014629 sshd[5672]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:47.022278 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:07:47.024407 systemd[1]: sshd@11-143.198.49.51:22-139.178.68.195:44724.service: Deactivated successfully. Sep 13 00:07:47.027272 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:07:47.028619 systemd-logind[1445]: Removed session 12. Sep 13 00:07:52.037476 systemd[1]: Started sshd@12-143.198.49.51:22-139.178.68.195:47560.service - OpenSSH per-connection server daemon (139.178.68.195:47560). Sep 13 00:07:52.190741 sshd[5714]: Accepted publickey for core from 139.178.68.195 port 47560 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:52.192608 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:52.199926 systemd-logind[1445]: New session 13 of user core. Sep 13 00:07:52.204919 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:07:52.474260 sshd[5714]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:52.481339 systemd[1]: sshd@12-143.198.49.51:22-139.178.68.195:47560.service: Deactivated successfully. Sep 13 00:07:52.483598 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:07:52.485743 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:07:52.487337 systemd-logind[1445]: Removed session 13. Sep 13 00:07:56.473074 systemd[1]: Started sshd@13-143.198.49.51:22-172.236.228.245:39274.service - OpenSSH per-connection server daemon (172.236.228.245:39274). Sep 13 00:07:57.391646 sshd[5732]: Connection closed by 172.236.228.245 port 39274 [preauth] Sep 13 00:07:57.394352 systemd[1]: sshd@13-143.198.49.51:22-172.236.228.245:39274.service: Deactivated successfully. Sep 13 00:07:57.433038 systemd[1]: Started sshd@14-143.198.49.51:22-172.236.228.245:39284.service - OpenSSH per-connection server daemon (172.236.228.245:39284). Sep 13 00:07:57.497794 systemd[1]: Started sshd@15-143.198.49.51:22-139.178.68.195:47574.service - OpenSSH per-connection server daemon (139.178.68.195:47574). Sep 13 00:07:57.556769 sshd[5739]: Accepted publickey for core from 139.178.68.195 port 47574 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:07:57.559459 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:57.564977 systemd-logind[1445]: New session 14 of user core. Sep 13 00:07:57.574978 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 13 00:07:57.746138 sshd[5739]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:57.751363 systemd[1]: sshd@15-143.198.49.51:22-139.178.68.195:47574.service: Deactivated successfully. Sep 13 00:07:57.754911 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:07:57.756055 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:07:57.757744 systemd-logind[1445]: Removed session 14. Sep 13 00:07:58.014620 kubelet[2502]: E0913 00:07:58.014496 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:58.537953 sshd[5737]: Connection closed by 172.236.228.245 port 39284 [preauth] Sep 13 00:07:58.542109 systemd[1]: sshd@14-143.198.49.51:22-172.236.228.245:39284.service: Deactivated successfully. Sep 13 00:07:58.582010 systemd[1]: Started sshd@16-143.198.49.51:22-172.236.228.245:39296.service - OpenSSH per-connection server daemon (172.236.228.245:39296). Sep 13 00:07:59.014822 kubelet[2502]: E0913 00:07:59.014122 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:07:59.572389 sshd[5757]: Connection closed by 172.236.228.245 port 39296 [preauth] Sep 13 00:07:59.573258 systemd[1]: sshd@16-143.198.49.51:22-172.236.228.245:39296.service: Deactivated successfully. Sep 13 00:08:02.774983 systemd[1]: Started sshd@17-143.198.49.51:22-139.178.68.195:51728.service - OpenSSH per-connection server daemon (139.178.68.195:51728). Sep 13 00:08:02.899217 sshd[5768]: Accepted publickey for core from 139.178.68.195 port 51728 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:02.901408 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:02.912287 systemd-logind[1445]: New session 15 of user core. Sep 13 00:08:02.916897 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:08:03.536262 sshd[5768]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:03.546456 systemd[1]: sshd@17-143.198.49.51:22-139.178.68.195:51728.service: Deactivated successfully. Sep 13 00:08:03.550711 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:08:03.552130 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:08:03.554250 systemd-logind[1445]: Removed session 15. Sep 13 00:08:04.899380 kubelet[2502]: I0913 00:08:04.895622 2502 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:08:08.013656 kubelet[2502]: E0913 00:08:08.013616 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:08:08.015569 kubelet[2502]: E0913 00:08:08.015315 2502 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:08:08.549607 systemd[1]: Started sshd@18-143.198.49.51:22-139.178.68.195:51744.service - OpenSSH per-connection server daemon (139.178.68.195:51744). Sep 13 00:08:08.722275 systemd[1]: run-containerd-runc-k8s.io-6cc9d07734e86f8f9cae47a5a99e04953b5d6d90237fd07c096e6aabbf85d4cc-runc.urh6V4.mount: Deactivated successfully. 
Sep 13 00:08:08.724758 sshd[5785]: Accepted publickey for core from 139.178.68.195 port 51744 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:08.728773 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:08.735221 systemd-logind[1445]: New session 16 of user core. Sep 13 00:08:08.741952 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:08:09.277751 sshd[5785]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:09.289105 systemd[1]: sshd@18-143.198.49.51:22-139.178.68.195:51744.service: Deactivated successfully. Sep 13 00:08:09.292140 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:08:09.294864 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:08:09.300725 systemd[1]: Started sshd@19-143.198.49.51:22-139.178.68.195:51756.service - OpenSSH per-connection server daemon (139.178.68.195:51756). Sep 13 00:08:09.308872 systemd-logind[1445]: Removed session 16. Sep 13 00:08:09.363090 sshd[5820]: Accepted publickey for core from 139.178.68.195 port 51756 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:09.364787 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:09.369742 systemd-logind[1445]: New session 17 of user core. Sep 13 00:08:09.376915 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:08:09.809366 sshd[5820]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:09.818586 systemd[1]: sshd@19-143.198.49.51:22-139.178.68.195:51756.service: Deactivated successfully. Sep 13 00:08:09.823136 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:08:09.825012 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:08:09.833082 systemd[1]: Started sshd@20-143.198.49.51:22-139.178.68.195:51760.service - OpenSSH per-connection server daemon (139.178.68.195:51760). Sep 13 00:08:09.834840 systemd-logind[1445]: Removed session 17. Sep 13 00:08:09.884170 sshd[5831]: Accepted publickey for core from 139.178.68.195 port 51760 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:09.886184 sshd[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:09.893165 systemd-logind[1445]: New session 18 of user core. Sep 13 00:08:09.897902 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:08:10.674392 sshd[5831]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:10.686242 systemd[1]: sshd@20-143.198.49.51:22-139.178.68.195:51760.service: Deactivated successfully. Sep 13 00:08:10.691635 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:08:10.699783 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:08:10.714147 systemd[1]: Started sshd@21-143.198.49.51:22-139.178.68.195:47446.service - OpenSSH per-connection server daemon (139.178.68.195:47446). Sep 13 00:08:10.722785 systemd-logind[1445]: Removed session 18. Sep 13 00:08:10.799466 sshd[5850]: Accepted publickey for core from 139.178.68.195 port 47446 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:10.801382 sshd[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:10.807293 systemd-logind[1445]: New session 19 of user core. Sep 13 00:08:10.812951 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 13 00:08:11.530375 sshd[5850]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:11.547054 systemd[1]: Started sshd@22-143.198.49.51:22-139.178.68.195:47448.service - OpenSSH per-connection server daemon (139.178.68.195:47448). Sep 13 00:08:11.548206 systemd[1]: sshd@21-143.198.49.51:22-139.178.68.195:47446.service: Deactivated successfully. Sep 13 00:08:11.554791 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:08:11.560655 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:08:11.565646 systemd-logind[1445]: Removed session 19. Sep 13 00:08:11.633330 sshd[5877]: Accepted publickey for core from 139.178.68.195 port 47448 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:11.639245 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:11.645332 systemd-logind[1445]: New session 20 of user core. Sep 13 00:08:11.648901 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:08:11.824572 sshd[5877]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:11.829352 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:08:11.829782 systemd[1]: sshd@22-143.198.49.51:22-139.178.68.195:47448.service: Deactivated successfully. Sep 13 00:08:11.832063 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:08:11.833621 systemd-logind[1445]: Removed session 20. Sep 13 00:08:16.842044 systemd[1]: Started sshd@23-143.198.49.51:22-139.178.68.195:47460.service - OpenSSH per-connection server daemon (139.178.68.195:47460). Sep 13 00:08:16.989063 sshd[5914]: Accepted publickey for core from 139.178.68.195 port 47460 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:16.991890 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:16.998515 systemd-logind[1445]: New session 21 of user core. Sep 13 00:08:17.006974 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:08:17.689947 sshd[5914]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:17.694804 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:08:17.695171 systemd[1]: sshd@23-143.198.49.51:22-139.178.68.195:47460.service: Deactivated successfully. Sep 13 00:08:17.698077 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:08:17.699231 systemd-logind[1445]: Removed session 21. Sep 13 00:08:22.715039 systemd[1]: Started sshd@24-143.198.49.51:22-139.178.68.195:34520.service - OpenSSH per-connection server daemon (139.178.68.195:34520). Sep 13 00:08:22.842829 sshd[5967]: Accepted publickey for core from 139.178.68.195 port 34520 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:08:22.845258 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:22.850975 systemd-logind[1445]: New session 22 of user core. Sep 13 00:08:22.858913 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:08:23.684836 sshd[5967]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:23.695837 systemd[1]: sshd@24-143.198.49.51:22-139.178.68.195:34520.service: Deactivated successfully. Sep 13 00:08:23.696051 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:08:23.701659 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:08:23.704530 systemd-logind[1445]: Removed session 22.