Aug 13 07:07:44.947287 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:07:44.947320 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:07:44.947340 kernel: BIOS-provided physical RAM map: Aug 13 07:07:44.947350 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 07:07:44.947360 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 07:07:44.947372 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 07:07:44.947381 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 07:07:44.947388 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 07:07:44.947394 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 07:07:44.947404 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 07:07:44.947411 kernel: NX (Execute Disable) protection: active Aug 13 07:07:44.947418 kernel: APIC: Static calls initialized Aug 13 07:07:44.947428 kernel: SMBIOS 2.8 present. Aug 13 07:07:44.947436 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 07:07:44.947445 kernel: Hypervisor detected: KVM Aug 13 07:07:44.947455 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:07:44.947478 kernel: kvm-clock: using sched offset of 2944068007 cycles Aug 13 07:07:44.947491 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:07:44.947500 kernel: tsc: Detected 2494.138 MHz processor Aug 13 07:07:44.947508 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:07:44.947519 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:07:44.947533 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 07:07:44.947545 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 07:07:44.947556 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:07:44.947574 kernel: ACPI: Early table checksum verification disabled Aug 13 07:07:44.947585 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 07:07:44.947596 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:07:44.947607 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:07:44.947617 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:07:44.947628 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 07:07:44.947639 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:07:44.947650 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:07:44.947661 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:07:44.947676 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:07:44.947688 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 13 07:07:44.947700 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 07:07:44.947710 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 07:07:44.947717 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 07:07:44.947725 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 07:07:44.947733 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 07:07:44.947748 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 07:07:44.947756 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:07:44.947764 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:07:44.947772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 07:07:44.947791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 07:07:44.947803 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 13 07:07:44.947812 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 13 07:07:44.947824 kernel: Zone ranges: Aug 13 07:07:44.947832 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:07:44.947840 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 07:07:44.947848 kernel: Normal empty Aug 13 07:07:44.947857 kernel: Movable zone start for each node Aug 13 07:07:44.947865 kernel: Early memory node ranges Aug 13 07:07:44.947873 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 07:07:44.947881 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 07:07:44.947889 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 07:07:44.947901 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:07:44.947909 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 07:07:44.947919 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 13 07:07:44.947927 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 07:07:44.947935 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:07:44.947944 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:07:44.947953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 07:07:44.947966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:07:44.947979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:07:44.947994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:07:44.948005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:07:44.948017 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:07:44.948030 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 07:07:44.948044 kernel: TSC deadline timer available Aug 13 07:07:44.948057 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 07:07:44.948071 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 07:07:44.948083 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 07:07:44.948099 kernel: Booting paravirtualized kernel on KVM Aug 13 07:07:44.948110 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:07:44.948127 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 07:07:44.948139 kernel: percpu: Embedded 58 pages/cpu 
s197096 r8192 d32280 u1048576 Aug 13 07:07:44.948153 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 07:07:44.948166 kernel: pcpu-alloc: [0] 0 1 Aug 13 07:07:44.948174 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 07:07:44.948184 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:07:44.948193 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:07:44.948201 kernel: random: crng init done Aug 13 07:07:44.948213 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:07:44.948221 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:07:44.948229 kernel: Fallback order for Node 0: 0 Aug 13 07:07:44.948238 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 13 07:07:44.948246 kernel: Policy zone: DMA32 Aug 13 07:07:44.948254 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:07:44.948263 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 125148K reserved, 0K cma-reserved) Aug 13 07:07:44.948271 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:07:44.948282 kernel: Kernel/User page tables isolation: enabled Aug 13 07:07:44.948290 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:07:44.948299 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:07:44.948307 kernel: Dynamic Preempt: voluntary Aug 13 07:07:44.948315 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:07:44.948328 kernel: rcu: RCU event tracing is enabled. Aug 13 07:07:44.948337 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:07:44.948345 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:07:44.948354 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:07:44.948362 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:07:44.948373 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:07:44.948381 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:07:44.948389 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 07:07:44.948398 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 13 07:07:44.948409 kernel: Console: colour VGA+ 80x25 Aug 13 07:07:44.948417 kernel: printk: console [tty0] enabled Aug 13 07:07:44.948429 kernel: printk: console [ttyS0] enabled Aug 13 07:07:44.948440 kernel: ACPI: Core revision 20230628 Aug 13 07:07:44.948452 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 07:07:44.948467 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:07:44.948478 kernel: x2apic enabled Aug 13 07:07:44.948490 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:07:44.948504 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 07:07:44.948517 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Aug 13 07:07:44.948529 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Aug 13 07:07:44.948542 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 07:07:44.948554 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 07:07:44.948583 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:07:44.948595 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:07:44.948609 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:07:44.948627 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 07:07:44.948641 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:07:44.948655 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:07:44.948669 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 07:07:44.948683 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:07:44.948697 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 07:07:44.948720 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:07:44.948735 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:07:44.948749 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:07:44.948763 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:07:44.948778 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 07:07:44.948805 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:07:44.948818 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:07:44.948832 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:07:44.948850 kernel: landlock: Up and running. Aug 13 07:07:44.948864 kernel: SELinux: Initializing. Aug 13 07:07:44.948878 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:07:44.948892 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:07:44.948906 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 07:07:44.948920 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:07:44.948934 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:07:44.948948 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:07:44.948961 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Aug 13 07:07:44.948979 kernel: signal: max sigframe size: 1776 Aug 13 07:07:44.948992 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:07:44.949006 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:07:44.949019 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:07:44.949032 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:07:44.949045 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:07:44.949057 kernel: .... node #0, CPUs: #1 Aug 13 07:07:44.949073 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:07:44.949092 kernel: smpboot: Max logical packages: 1 Aug 13 07:07:44.949112 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Aug 13 07:07:44.949124 kernel: devtmpfs: initialized Aug 13 07:07:44.949133 kernel: x86/mm: Memory block size: 128MB Aug 13 07:07:44.949142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:07:44.949151 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:07:44.949160 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:07:44.949169 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:07:44.949178 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:07:44.949186 kernel: audit: type=2000 audit(1755068863.386:1): state=initialized audit_enabled=0 res=1 Aug 13 07:07:44.949199 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:07:44.949208 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:07:44.949217 kernel: cpuidle: using governor menu Aug 13 07:07:44.949226 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:07:44.949235 kernel: dca service started, version 1.12.1 Aug 13 07:07:44.949244 kernel: PCI: Using configuration type 1 for base access Aug 13 07:07:44.949252 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 07:07:44.949262 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:07:44.949278 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:07:44.949295 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:07:44.949307 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:07:44.949320 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:07:44.949333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:07:44.949344 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:07:44.949356 kernel: ACPI: Interpreter enabled Aug 13 07:07:44.949369 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:07:44.949382 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:07:44.949397 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:07:44.949417 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:07:44.949430 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 07:07:44.949443 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:07:44.949708 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:07:44.949880 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 07:07:44.950029 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 07:07:44.950051 kernel: acpiphp: Slot [3] registered Aug 13 07:07:44.950074 kernel: acpiphp: Slot [4] registered Aug 13 07:07:44.950084 kernel: acpiphp: Slot [5] registered Aug 13 07:07:44.950093 kernel: acpiphp: Slot [6] registered Aug 13 07:07:44.950102 kernel: acpiphp: Slot [7] registered Aug 13 07:07:44.950111 kernel: acpiphp: Slot [8] registered Aug 13 07:07:44.950119 kernel: acpiphp: Slot [9] registered Aug 13 07:07:44.950128 kernel: acpiphp: Slot [10] registered Aug 13 07:07:44.950138 kernel: acpiphp: Slot [11] registered Aug 13 07:07:44.950147 kernel: acpiphp: Slot [12] registered Aug 13 07:07:44.950155 kernel: acpiphp: Slot [13] registered Aug 13 07:07:44.950167 kernel: acpiphp: Slot [14] registered Aug 13 07:07:44.950176 kernel: acpiphp: Slot [15] registered Aug 13 07:07:44.950184 kernel: acpiphp: Slot [16] registered Aug 13 07:07:44.950193 kernel: acpiphp: Slot [17] registered Aug 13 07:07:44.950202 kernel: acpiphp: Slot [18] registered Aug 13 07:07:44.950211 kernel: acpiphp: Slot [19] registered Aug 13 07:07:44.950220 kernel: acpiphp: Slot [20] registered Aug 13 07:07:44.950229 kernel: acpiphp: Slot [21] registered Aug 13 07:07:44.950237 kernel: acpiphp: Slot [22] registered Aug 13 07:07:44.950249 kernel: acpiphp: Slot [23] registered Aug 13 07:07:44.950257 kernel: acpiphp: Slot [24] registered Aug 13 07:07:44.950267 kernel: acpiphp: Slot [25] registered Aug 13 07:07:44.950276 kernel: acpiphp: Slot [26] registered Aug 13 07:07:44.950285 kernel: acpiphp: Slot [27] registered Aug 13 07:07:44.950293 kernel: acpiphp: Slot [28] registered Aug 13 07:07:44.950302 kernel: acpiphp: Slot [29] registered Aug 13 07:07:44.950310 kernel: acpiphp: Slot [30] registered Aug 13 07:07:44.950319 kernel: acpiphp: Slot [31] registered Aug 13 07:07:44.950328 kernel: PCI host bridge to bus 0000:00 Aug 13 07:07:44.950466 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:07:44.950567 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 07:07:44.950678 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Aug 13 07:07:44.950822 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 07:07:44.950911 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 07:07:44.950998 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:07:44.951135 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 07:07:44.951282 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 13 07:07:44.951505 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 13 07:07:44.951737 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 13 07:07:44.951970 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 07:07:44.952150 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 07:07:44.952253 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 07:07:44.952356 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 07:07:44.952483 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 13 07:07:44.952578 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 13 07:07:44.952681 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 07:07:44.952774 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 07:07:44.952887 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 07:07:44.952998 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 13 07:07:44.953096 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 13 07:07:44.953193 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 07:07:44.953289 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 13 07:07:44.953383 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 13 07:07:44.953478 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:07:44.953586 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:07:44.953686 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 13 07:07:44.953819 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 13 07:07:44.953919 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 07:07:44.954027 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:07:44.954176 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 13 07:07:44.954282 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 13 07:07:44.954377 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 07:07:44.954493 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 13 07:07:44.954621 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 13 07:07:44.954720 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 13 07:07:44.954882 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 07:07:44.954996 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:07:44.955093 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 07:07:44.955191 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 13 07:07:44.955292 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 07:07:44.955397 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:07:44.955520 kernel: pci 0000:00:07.0: reg 0x10: [io 
0xc080-0xc0ff] Aug 13 07:07:44.955651 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 13 07:07:44.955793 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 07:07:44.955922 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 13 07:07:44.956027 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 13 07:07:44.956122 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 07:07:44.956134 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 07:07:44.956143 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 07:07:44.956152 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:07:44.956161 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 07:07:44.956170 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 07:07:44.956179 kernel: iommu: Default domain type: Translated Aug 13 07:07:44.956192 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:07:44.956201 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:07:44.956210 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:07:44.956219 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 07:07:44.956228 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 07:07:44.956327 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 07:07:44.956428 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 07:07:44.956536 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:07:44.956558 kernel: vgaarb: loaded Aug 13 07:07:44.956575 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 07:07:44.956591 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 07:07:44.956609 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 07:07:44.956625 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:07:44.956643 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:07:44.956661 kernel: pnp: PnP ACPI init Aug 13 07:07:44.956678 kernel: pnp: PnP ACPI: found 4 devices Aug 13 07:07:44.956695 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:07:44.956715 kernel: NET: Registered PF_INET protocol family Aug 13 07:07:44.956732 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:07:44.956749 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 07:07:44.956765 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:07:44.956792 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:07:44.956809 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 07:07:44.956826 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 07:07:44.956843 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:07:44.956860 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:07:44.956880 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:07:44.956897 kernel: NET: Registered PF_XDP protocol family Aug 13 07:07:44.957043 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 07:07:44.957175 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 07:07:44.957305 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff 
window] Aug 13 07:07:44.957434 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 07:07:44.957559 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 07:07:44.957666 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 07:07:44.957773 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 07:07:44.957871 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 07:07:44.958018 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 31552 usecs Aug 13 07:07:44.958032 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:07:44.958042 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:07:44.958052 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Aug 13 07:07:44.958061 kernel: Initialise system trusted keyrings Aug 13 07:07:44.958070 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 07:07:44.958079 kernel: Key type asymmetric registered Aug 13 07:07:44.958093 kernel: Asymmetric key parser 'x509' registered Aug 13 07:07:44.958102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:07:44.958111 kernel: io scheduler mq-deadline registered Aug 13 07:07:44.958120 kernel: io scheduler kyber registered Aug 13 07:07:44.958128 kernel: io scheduler bfq registered Aug 13 07:07:44.958137 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:07:44.958146 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 07:07:44.958155 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 07:07:44.958165 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 07:07:44.958177 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:07:44.958185 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:07:44.958194 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 07:07:44.958204 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:07:44.958213 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:07:44.958328 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 07:07:44.958342 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:07:44.958430 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 07:07:44.958556 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:07:44 UTC (1755068864) Aug 13 07:07:44.958647 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 07:07:44.958659 kernel: intel_pstate: CPU model not supported Aug 13 07:07:44.958668 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:07:44.958677 kernel: Segment Routing with IPv6 Aug 13 07:07:44.958686 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:07:44.958696 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:07:44.958704 kernel: Key type dns_resolver registered Aug 13 07:07:44.958713 kernel: IPI shorthand broadcast: enabled Aug 13 07:07:44.958726 kernel: sched_clock: Marking stable (1014003339, 85505695)->(1212299777, -112790743) Aug 13 07:07:44.958735 kernel: registered taskstats version 1 Aug 13 07:07:44.958743 kernel: Loading compiled-in X.509 certificates Aug 13 07:07:44.958752 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:07:44.958761 kernel: Key type .fscrypt registered Aug 13 07:07:44.958770 kernel: Key type fscrypt-provisioning registered Aug 
13 07:07:44.958779 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:07:44.959519 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:07:44.959534 kernel: ima: No architecture policies found Aug 13 07:07:44.959543 kernel: clk: Disabling unused clocks Aug 13 07:07:44.959552 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:07:44.959561 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:07:44.959571 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:07:44.959598 kernel: Run /init as init process Aug 13 07:07:44.959610 kernel: with arguments: Aug 13 07:07:44.959620 kernel: /init Aug 13 07:07:44.959629 kernel: with environment: Aug 13 07:07:44.959641 kernel: HOME=/ Aug 13 07:07:44.959651 kernel: TERM=linux Aug 13 07:07:44.959660 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:07:44.959673 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:07:44.959685 systemd[1]: Detected virtualization kvm. Aug 13 07:07:44.959695 systemd[1]: Detected architecture x86-64. Aug 13 07:07:44.959705 systemd[1]: Running in initrd. Aug 13 07:07:44.959714 systemd[1]: No hostname configured, using default hostname. Aug 13 07:07:44.959727 systemd[1]: Hostname set to . Aug 13 07:07:44.961924 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:07:44.961943 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:07:44.961954 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:07:44.961965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:07:44.961976 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:07:44.961985 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:07:44.961995 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:07:44.962011 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:07:44.962022 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:07:44.962032 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:07:44.962043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:07:44.962053 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:07:44.962063 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:07:44.962073 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:07:44.962085 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:07:44.962095 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:07:44.962109 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:07:44.962119 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:07:44.962129 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Aug 13 07:07:44.962141 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:07:44.962151 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:07:44.962161 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:07:44.962171 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:07:44.962181 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:07:44.962191 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:07:44.962201 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:07:44.962210 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:07:44.962220 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:07:44.962234 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:07:44.962244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:07:44.962253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:07:44.962263 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:07:44.962303 systemd-journald[183]: Collecting audit messages is disabled. Aug 13 07:07:44.962330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:07:44.962341 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:07:44.962352 systemd-journald[183]: Journal started Aug 13 07:07:44.962377 systemd-journald[183]: Runtime Journal (/run/log/journal/f13f3f0e1b6044a5919975dda12c9e98) is 4.9M, max 39.3M, 34.4M free. Aug 13 07:07:44.973351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:07:44.978243 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 07:07:45.015714 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:07:45.015750 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:07:45.015776 kernel: Bridge firewalling registered Aug 13 07:07:45.006416 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 07:07:45.016666 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:07:45.017308 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:07:45.021514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:07:45.029055 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:07:45.030990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:07:45.035988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:07:45.040163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:07:45.053617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:07:45.063760 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:07:45.064591 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:07:45.065344 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Aug 13 07:07:45.072022 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:07:45.075970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:07:45.091722 dracut-cmdline[218]: dracut-dracut-053 Aug 13 07:07:45.096581 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:07:45.113252 systemd-resolved[220]: Positive Trust Anchors: Aug 13 07:07:45.113276 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:07:45.113325 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:07:45.117040 systemd-resolved[220]: Defaulting to hostname 'linux'. Aug 13 07:07:45.118567 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:07:45.120980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:07:45.203830 kernel: SCSI subsystem initialized Aug 13 07:07:45.212812 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:07:45.228815 kernel: iscsi: registered transport (tcp) Aug 13 07:07:45.258920 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:07:45.259027 kernel: QLogic iSCSI HBA Driver Aug 13 07:07:45.321019 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:07:45.327092 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:07:45.359891 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:07:45.360122 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:07:45.360151 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:07:45.409855 kernel: raid6: avx2x4 gen() 14822 MB/s Aug 13 07:07:45.426841 kernel: raid6: avx2x2 gen() 15508 MB/s Aug 13 07:07:45.443944 kernel: raid6: avx2x1 gen() 11808 MB/s Aug 13 07:07:45.444071 kernel: raid6: using algorithm avx2x2 gen() 15508 MB/s Aug 13 07:07:45.462043 kernel: raid6: .... xor() 18873 MB/s, rmw enabled Aug 13 07:07:45.462155 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:07:45.485851 kernel: xor: automatically using best checksumming function avx Aug 13 07:07:45.657842 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:07:45.671754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:07:45.678186 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:07:45.706878 systemd-udevd[403]: Using default interface naming scheme 'v255'. 
Aug 13 07:07:45.713549 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:07:45.723118 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:07:45.744612 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Aug 13 07:07:45.790381 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:07:45.797036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:07:45.865262 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:07:45.870976 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:07:45.898389 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:07:45.901044 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:07:45.901503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:07:45.901842 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:07:45.910670 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:07:45.934720 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:07:45.955994 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 13 07:07:45.962031 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 07:07:45.971871 kernel: scsi host0: Virtio SCSI HBA Aug 13 07:07:45.984176 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:07:45.984258 kernel: GPT:9289727 != 125829119 Aug 13 07:07:45.984278 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:07:45.984856 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:07:45.984894 kernel: GPT:9289727 != 125829119 Aug 13 07:07:45.987206 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:07:45.987287 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:07:46.023813 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 13 07:07:46.024799 kernel: libata version 3.00 loaded. Aug 13 07:07:46.026846 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:07:46.029734 kernel: AES CTR mode by8 optimization enabled Aug 13 07:07:46.029819 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Aug 13 07:07:46.030490 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:07:46.030626 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:07:46.031870 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:07:46.033259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:07:46.033436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:07:46.036970 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 07:07:46.034536 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:07:46.042038 kernel: ACPI: bus type USB registered Aug 13 07:07:46.042106 kernel: usbcore: registered new interface driver usbfs Aug 13 07:07:46.042121 kernel: usbcore: registered new interface driver hub Aug 13 07:07:46.042962 kernel: usbcore: registered new device driver usb Aug 13 07:07:46.044279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 07:07:46.066868 kernel: scsi host1: ata_piix Aug 13 07:07:46.079779 kernel: scsi host2: ata_piix Aug 13 07:07:46.084152 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 13 07:07:46.084219 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 13 07:07:46.126857 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 07:07:46.127167 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 07:07:46.127368 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 07:07:46.127569 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 13 07:07:46.127756 kernel: hub 1-0:1.0: USB hub found Aug 13 07:07:46.127987 kernel: hub 1-0:1.0: 2 ports detected Aug 13 07:07:46.129816 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (453) Aug 13 07:07:46.130806 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (451) Aug 13 07:07:46.143310 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 07:07:46.151542 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:07:46.156935 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 07:07:46.162338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:07:46.166882 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 07:07:46.167350 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 07:07:46.173040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:07:46.176004 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:07:46.185267 disk-uuid[532]: Primary Header is updated. Aug 13 07:07:46.185267 disk-uuid[532]: Secondary Entries is updated. Aug 13 07:07:46.185267 disk-uuid[532]: Secondary Header is updated. Aug 13 07:07:46.197829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:07:46.198502 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:07:46.203840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:07:46.212869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:07:47.210809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:07:47.211578 disk-uuid[535]: The operation has completed successfully. Aug 13 07:07:47.249295 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:07:47.249412 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:07:47.265034 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:07:47.268672 sh[564]: Success Aug 13 07:07:47.282863 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:07:47.351166 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:07:47.368951 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:07:47.371345 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 07:07:47.403833 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:07:47.403922 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:07:47.403945 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:07:47.403966 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:07:47.404198 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:07:47.412702 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:07:47.414504 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:07:47.429373 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:07:47.434075 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:07:47.442929 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:07:47.442987 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:07:47.443002 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:07:47.448824 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:07:47.463534 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:07:47.464410 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:07:47.470300 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:07:47.476196 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:07:47.654806 ignition[653]: Ignition 2.19.0 Aug 13 07:07:47.655974 ignition[653]: Stage: fetch-offline Aug 13 07:07:47.656065 ignition[653]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:07:47.656081 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:07:47.656295 ignition[653]: parsed url from cmdline: "" Aug 13 07:07:47.656302 ignition[653]: no config URL provided Aug 13 07:07:47.656310 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:07:47.656324 ignition[653]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:07:47.659974 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:07:47.656334 ignition[653]: failed to fetch config: resource requires networking Aug 13 07:07:47.662173 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:07:47.656724 ignition[653]: Ignition finished successfully Aug 13 07:07:47.671181 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:07:47.710570 systemd-networkd[753]: lo: Link UP Aug 13 07:07:47.711442 systemd-networkd[753]: lo: Gained carrier Aug 13 07:07:47.715688 systemd-networkd[753]: Enumeration completed Aug 13 07:07:47.716298 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:07:47.716566 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 07:07:47.716574 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 13 07:07:47.716910 systemd[1]: Reached target network.target - Network. 
Aug 13 07:07:47.719557 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:07:47.719563 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:07:47.720451 systemd-networkd[753]: eth0: Link UP Aug 13 07:07:47.720476 systemd-networkd[753]: eth0: Gained carrier Aug 13 07:07:47.720495 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 07:07:47.726407 systemd-networkd[753]: eth1: Link UP Aug 13 07:07:47.726411 systemd-networkd[753]: eth1: Gained carrier Aug 13 07:07:47.726428 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:07:47.730104 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 07:07:47.737884 systemd-networkd[753]: eth0: DHCPv4 address 165.232.152.216/20, gateway 165.232.144.1 acquired from 169.254.169.253 Aug 13 07:07:47.756121 ignition[755]: Ignition 2.19.0 Aug 13 07:07:47.758681 ignition[755]: Stage: fetch Aug 13 07:07:47.756935 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253 Aug 13 07:07:47.759023 ignition[755]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:07:47.759041 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:07:47.760052 ignition[755]: parsed url from cmdline: "" Aug 13 07:07:47.760060 ignition[755]: no config URL provided Aug 13 07:07:47.760073 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:07:47.760090 ignition[755]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:07:47.760122 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 07:07:47.781763 ignition[755]: GET result: OK Aug 13 07:07:47.781906 ignition[755]: parsing config with SHA512: 462aa71549e7c6c5902872df7ad967ce2518c629522c739f3c5087466bed9a447ce18fc0ff314acdfff0bc118fb9c787ed93a9f94f728aa793e29f111270e451 Aug 13 07:07:47.788264 unknown[755]: fetched base config from "system" Aug 13 07:07:47.788291 unknown[755]: fetched base config from "system" Aug 13 07:07:47.788299 unknown[755]: fetched user config from "digitalocean" Aug 13 07:07:47.791313 ignition[755]: fetch: fetch complete Aug 13 07:07:47.791325 ignition[755]: fetch: fetch passed Aug 13 07:07:47.791411 ignition[755]: Ignition finished successfully Aug 13 07:07:47.793563 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:07:47.801043 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:07:47.822678 ignition[763]: Ignition 2.19.0 Aug 13 07:07:47.822691 ignition[763]: Stage: kargs Aug 13 07:07:47.822939 ignition[763]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:07:47.822953 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:07:47.824074 ignition[763]: kargs: kargs passed Aug 13 07:07:47.825461 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:07:47.824145 ignition[763]: Ignition finished successfully Aug 13 07:07:47.831059 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 13 07:07:47.851974 ignition[769]: Ignition 2.19.0 Aug 13 07:07:47.852002 ignition[769]: Stage: disks Aug 13 07:07:47.852244 ignition[769]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:07:47.852262 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:07:47.853372 ignition[769]: disks: disks passed Aug 13 07:07:47.853451 ignition[769]: Ignition finished successfully Aug 13 07:07:47.856148 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:07:47.859670 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:07:47.860093 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:07:47.860767 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:07:47.861446 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:07:47.862029 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:07:47.868026 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:07:47.884843 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 07:07:47.888255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:07:47.893987 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:07:48.006809 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:07:48.007741 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:07:48.008696 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:07:48.019086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:07:48.022335 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:07:48.025074 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Aug 13 07:07:48.032084 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 07:07:48.033837 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (785) Aug 13 07:07:48.034467 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:07:48.034507 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:07:48.040940 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:07:48.041001 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:07:48.043549 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:07:48.052260 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:07:48.055885 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:07:48.066592 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:07:48.072092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:07:48.116155 coreos-metadata[788]: Aug 13 07:07:48.116 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:07:48.128210 coreos-metadata[787]: Aug 13 07:07:48.125 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:07:48.131215 coreos-metadata[788]: Aug 13 07:07:48.131 INFO Fetch successful Aug 13 07:07:48.140172 coreos-metadata[787]: Aug 13 07:07:48.140 INFO Fetch successful Aug 13 07:07:48.143600 coreos-metadata[788]: Aug 13 07:07:48.143 INFO wrote hostname ci-4081.3.5-e-55e36c071a to /sysroot/etc/hostname Aug 13 07:07:48.146992 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 07:07:48.148341 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Aug 13 07:07:48.148438 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Aug 13 07:07:48.151953 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:07:48.158680 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:07:48.164810 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:07:48.171856 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:07:48.272856 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:07:48.278925 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:07:48.280994 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:07:48.294821 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:07:48.317208 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:07:48.326226 ignition[905]: INFO : Ignition 2.19.0 Aug 13 07:07:48.326226 ignition[905]: INFO : Stage: mount Aug 13 07:07:48.327218 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:07:48.327218 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:07:48.329100 ignition[905]: INFO : mount: mount passed Aug 13 07:07:48.329100 ignition[905]: INFO : Ignition finished successfully Aug 13 07:07:48.329282 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:07:48.333983 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:07:48.402088 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:07:48.409092 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:07:48.419887 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (919) Aug 13 07:07:48.419951 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:07:48.421149 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:07:48.421994 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:07:48.428839 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:07:48.431676 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:07:48.455934 ignition[935]: INFO : Ignition 2.19.0 Aug 13 07:07:48.455934 ignition[935]: INFO : Stage: files Aug 13 07:07:48.457007 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:07:48.457007 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:07:48.458822 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:07:48.460175 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:07:48.460175 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:07:48.463321 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:07:48.464154 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:07:48.465068 unknown[935]: wrote ssh authorized keys file for user: core Aug 13 07:07:48.465800 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:07:48.467995 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 07:07:48.467995 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 07:07:48.467995 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 07:07:48.467995 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 07:07:48.507810 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 07:07:48.742589 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 07:07:48.742589 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:07:48.744300 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:07:48.744300 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:07:48.744300 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:07:48.744300 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:07:48.744300 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:07:48.744300 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:07:48.744300 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:07:48.748714 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:07:48.748714 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:07:48.748714 
ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:07:48.748714 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:07:48.748714 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:07:48.748714 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 07:07:49.014154 systemd-networkd[753]: eth0: Gained IPv6LL Aug 13 07:07:49.014550 systemd-networkd[753]: eth1: Gained IPv6LL Aug 13 07:07:49.104693 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 07:07:50.342435 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:07:50.344045 ignition[935]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:07:50.345474 ignition[935]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:07:50.351619 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:07:50.351619 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:07:50.351619 ignition[935]: INFO : files: files passed Aug 13 07:07:50.351619 ignition[935]: INFO : Ignition finished successfully Aug 13 07:07:50.347230 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:07:50.353098 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:07:50.356889 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:07:50.362649 systemd[1]: ignition-quench.service: Deactivated successfully. 
Aug 13 07:07:50.362779 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:07:50.376393 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:07:50.377529 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:07:50.378391 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:07:50.379663 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:07:50.381041 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:07:50.388062 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:07:50.423419 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:07:50.424385 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:07:50.425579 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:07:50.426093 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:07:50.427099 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:07:50.436171 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:07:50.454393 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:07:50.459086 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:07:50.486539 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:07:50.487187 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:07:50.488314 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:07:50.489114 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:07:50.489267 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:07:50.490389 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:07:50.491404 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:07:50.492266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:07:50.493035 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:07:50.493660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:07:50.494508 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:07:50.495261 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:07:50.496145 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:07:50.496846 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:07:50.497600 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:07:50.498246 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:07:50.498432 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:07:50.499674 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:07:50.500434 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:07:50.501197 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Aug 13 07:07:50.501371 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:07:50.502112 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:07:50.502286 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:07:50.503965 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:07:50.504164 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:07:50.505055 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:07:50.505213 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:07:50.505866 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 07:07:50.506038 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 07:07:50.513153 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:07:50.514343 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:07:50.515197 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:07:50.521041 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:07:50.521990 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:07:50.522678 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:07:50.525079 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:07:50.526959 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:07:50.534443 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:07:50.538821 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:07:50.547361 ignition[988]: INFO : Ignition 2.19.0 Aug 13 07:07:50.547361 ignition[988]: INFO : Stage: umount Aug 13 07:07:50.548676 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:07:50.548676 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:07:50.551076 ignition[988]: INFO : umount: umount passed Aug 13 07:07:50.551648 ignition[988]: INFO : Ignition finished successfully Aug 13 07:07:50.553707 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:07:50.553895 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:07:50.554502 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:07:50.554550 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:07:50.555012 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:07:50.555058 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:07:50.555395 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 07:07:50.555431 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 07:07:50.556220 systemd[1]: Stopped target network.target - Network. Aug 13 07:07:50.557345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:07:50.557432 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:07:50.559251 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:07:50.561267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Aug 13 07:07:50.566907 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:07:50.567383 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:07:50.567807 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:07:50.568286 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 07:07:50.568356 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:07:50.568766 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:07:50.570066 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:07:50.572589 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:07:50.572685 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:07:50.573266 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:07:50.573343 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:07:50.574843 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:07:50.576000 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:07:50.578207 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:07:50.578875 systemd-networkd[753]: eth0: DHCPv6 lease lost Aug 13 07:07:50.582920 systemd-networkd[753]: eth1: DHCPv6 lease lost Aug 13 07:07:50.586137 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:07:50.586300 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:07:50.588812 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:07:50.588918 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:07:50.600131 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:07:50.600695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:07:50.600822 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:07:50.602854 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:07:50.608676 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:07:50.610392 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:07:50.617143 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:07:50.617298 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:07:50.619430 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:07:50.619552 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:07:50.624277 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:07:50.624376 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:07:50.625622 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:07:50.625808 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:07:50.632450 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:07:50.633376 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:07:50.638172 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:07:50.639102 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:07:50.640657 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Aug 13 07:07:50.640759 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:07:50.641483 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:07:50.641537 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:07:50.642473 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:07:50.642550 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:07:50.643958 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:07:50.644033 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:07:50.645053 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:07:50.645123 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:07:50.645996 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:07:50.646060 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:07:50.653095 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:07:50.653676 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:07:50.653794 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:07:50.654599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:07:50.654668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:07:50.668360 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:07:50.668499 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:07:50.670163 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:07:50.676127 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:07:50.696436 systemd[1]: Switching root. Aug 13 07:07:50.739771 systemd-journald[183]: Journal stopped Aug 13 07:07:52.120573 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 13 07:07:52.120676 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:07:52.120693 kernel: SELinux: policy capability open_perms=1 Aug 13 07:07:52.120704 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:07:52.120717 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:07:52.120729 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:07:52.120742 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:07:52.120753 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:07:52.120765 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:07:52.121051 kernel: audit: type=1403 audit(1755068870.950:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:07:52.121073 systemd[1]: Successfully loaded SELinux policy in 40.878ms. Aug 13 07:07:52.121110 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.413ms. Aug 13 07:07:52.121130 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:07:52.121145 systemd[1]: Detected virtualization kvm. Aug 13 07:07:52.121159 systemd[1]: Detected architecture x86-64. 
Aug 13 07:07:52.121176 systemd[1]: Detected first boot. Aug 13 07:07:52.121189 systemd[1]: Hostname set to . Aug 13 07:07:52.121201 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:07:52.121220 zram_generator::config[1048]: No configuration found. Aug 13 07:07:52.121241 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:07:52.121254 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:07:52.121267 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:07:52.121288 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:07:52.121301 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:07:52.121313 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:07:52.121326 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:07:52.121343 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:07:52.121358 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:07:52.121371 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:07:52.121385 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:07:52.121397 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:07:52.121409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:07:52.121422 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:07:52.121436 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:07:52.121450 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:07:52.121467 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:07:52.121479 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:07:52.121492 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:07:52.121505 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:07:52.121518 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:07:52.121532 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:07:52.121548 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:07:52.121561 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:07:52.121574 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:07:52.121587 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:07:52.121602 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:07:52.121615 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:07:52.121629 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:07:52.121643 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:07:52.121656 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:07:52.121669 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Aug 13 07:07:52.121685 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:07:52.121698 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:07:52.121712 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:07:52.121725 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:07:52.121738 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:07:52.121750 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:07:52.121763 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:07:52.121776 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:07:52.123879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:07:52.123903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:07:52.123923 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:07:52.123943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:07:52.123961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:07:52.123980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:07:52.123998 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:07:52.124011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:07:52.124025 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:07:52.124044 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 07:07:52.124059 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Aug 13 07:07:52.124072 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:07:52.124086 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:07:52.124099 kernel: loop: module loaded Aug 13 07:07:52.124113 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:07:52.124127 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:07:52.124141 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:07:52.124158 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:07:52.124172 kernel: ACPI: bus type drm_connector registered Aug 13 07:07:52.124184 kernel: fuse: init (API version 7.39) Aug 13 07:07:52.124196 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:07:52.124209 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:07:52.124226 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:07:52.124239 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:07:52.124252 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Aug 13 07:07:52.124270 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:07:52.124289 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:07:52.124302 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:07:52.124315 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:07:52.124329 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:07:52.124342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:07:52.124356 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:07:52.124373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:07:52.124386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:07:52.124400 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:07:52.124414 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:07:52.124427 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:07:52.124442 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:07:52.124462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:07:52.124475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:07:52.124488 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:07:52.124501 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:07:52.124513 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:07:52.124572 systemd-journald[1137]: Collecting audit messages is disabled. Aug 13 07:07:52.124613 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:07:52.124628 systemd-journald[1137]: Journal started Aug 13 07:07:52.124655 systemd-journald[1137]: Runtime Journal (/run/log/journal/f13f3f0e1b6044a5919975dda12c9e98) is 4.9M, max 39.3M, 34.4M free. Aug 13 07:07:52.132825 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:07:52.136021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:07:52.154830 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:07:52.160826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:07:52.170924 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:07:52.175972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:07:52.185875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:07:52.200824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:07:52.206813 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:07:52.216633 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:07:52.217221 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:07:52.217764 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Aug 13 07:07:52.220465 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:07:52.238105 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:07:52.269105 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:07:52.282113 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:07:52.293028 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:07:52.295370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:07:52.308859 systemd-journald[1137]: Time spent on flushing to /var/log/journal/f13f3f0e1b6044a5919975dda12c9e98 is 39.913ms for 979 entries. Aug 13 07:07:52.308859 systemd-journald[1137]: System Journal (/var/log/journal/f13f3f0e1b6044a5919975dda12c9e98) is 8.0M, max 195.6M, 187.6M free. Aug 13 07:07:52.367071 systemd-journald[1137]: Received client request to flush runtime journal. Aug 13 07:07:52.315420 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Aug 13 07:07:52.315436 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Aug 13 07:07:52.331555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:07:52.342228 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:07:52.350974 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 07:07:52.370528 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:07:52.395193 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:07:52.404054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:07:52.445331 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Aug 13 07:07:52.445969 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Aug 13 07:07:52.454714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:07:53.161090 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:07:53.173158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:07:53.206759 systemd-udevd[1218]: Using default interface naming scheme 'v255'. Aug 13 07:07:53.232449 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:07:53.238042 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:07:53.263266 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:07:53.347152 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Aug 13 07:07:53.347850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:07:53.348071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:07:53.357103 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:07:53.375159 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:07:53.397204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 13 07:07:53.402260 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:07:53.402365 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:07:53.402450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:07:53.422690 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:07:53.428093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:07:53.428432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:07:53.429591 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:07:53.429921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:07:53.457596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:07:53.458056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:07:53.492850 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1230) Aug 13 07:07:53.513106 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 07:07:53.512936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:07:53.513013 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:07:53.524836 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:07:53.564925 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 07:07:53.578461 systemd-networkd[1223]: lo: Link UP Aug 13 07:07:53.578473 systemd-networkd[1223]: lo: Gained carrier Aug 13 07:07:53.582622 systemd-networkd[1223]: Enumeration completed Aug 13 07:07:53.582829 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:07:53.585222 systemd-networkd[1223]: eth0: Configuring with /run/systemd/network/10-da:64:14:46:ad:a1.network. Aug 13 07:07:53.588702 systemd-networkd[1223]: eth1: Configuring with /run/systemd/network/10-7e:b4:6c:4b:68:c8.network. Aug 13 07:07:53.589323 systemd-networkd[1223]: eth0: Link UP Aug 13 07:07:53.589332 systemd-networkd[1223]: eth0: Gained carrier Aug 13 07:07:53.591190 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:07:53.595172 systemd-networkd[1223]: eth1: Link UP Aug 13 07:07:53.595184 systemd-networkd[1223]: eth1: Gained carrier Aug 13 07:07:53.609480 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Aug 13 07:07:53.631820 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 07:07:53.698825 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:07:53.710821 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 13 07:07:53.719822 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 13 07:07:53.726829 kernel: Console: switching to colour dummy device 80x25 Aug 13 07:07:53.726996 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 07:07:53.727026 kernel: [drm] features: -context_init Aug 13 07:07:53.724363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:07:53.734816 kernel: [drm] number of scanouts: 1 Aug 13 07:07:53.735037 kernel: [drm] number of cap sets: 0 Aug 13 07:07:53.738202 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Aug 13 07:07:53.766749 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Aug 13 07:07:53.768067 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 07:07:53.765573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:07:53.769258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:07:53.775960 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 13 07:07:53.795213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:07:53.813175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:07:53.813466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:07:53.871865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:07:53.940813 kernel: EDAC MC: Ver: 3.0.0 Aug 13 07:07:53.974614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:07:53.975249 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:07:53.985140 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:07:54.022919 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:07:54.061686 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:07:54.063910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:07:54.071341 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:07:54.082806 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:07:54.113311 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:07:54.114531 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:07:54.125103 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 13 07:07:54.125357 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:07:54.125427 systemd[1]: Reached target machines.target - Containers. Aug 13 07:07:54.129093 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Aug 13 07:07:54.147930 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 07:07:54.151186 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 13 07:07:54.155429 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:07:54.157891 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:07:54.166168 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:07:54.176322 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:07:54.180982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:07:54.193447 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:07:54.218197 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:07:54.223052 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:07:54.227959 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:07:54.255056 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:07:54.258491 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:07:54.272160 kernel: loop0: detected capacity change from 0 to 142488 Aug 13 07:07:54.309396 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:07:54.332922 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 07:07:54.389461 kernel: loop2: detected capacity change from 0 to 140768 Aug 13 07:07:54.441898 kernel: loop3: detected capacity change from 0 to 8 Aug 13 07:07:54.471929 kernel: loop4: detected capacity change from 0 to 142488 Aug 13 07:07:54.490844 kernel: loop5: detected capacity change from 0 to 221472 Aug 13 07:07:54.517848 kernel: loop6: detected capacity change from 0 to 140768 Aug 13 07:07:54.543828 kernel: loop7: detected capacity change from 0 to 8 Aug 13 07:07:54.548705 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 13 07:07:54.549615 (sd-merge)[1311]: Merged extensions into '/usr'. Aug 13 07:07:54.557573 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:07:54.557846 systemd[1]: Reloading... Aug 13 07:07:54.751927 zram_generator::config[1339]: No configuration found. Aug 13 07:07:54.879381 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:07:54.953201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:07:55.026998 systemd[1]: Reloading finished in 468 ms. Aug 13 07:07:55.030002 systemd-networkd[1223]: eth0: Gained IPv6LL Aug 13 07:07:55.045564 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:07:55.048131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:07:55.050311 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:07:55.070111 systemd[1]: Starting ensure-sysext.service... 
Aug 13 07:07:55.078046 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:07:55.087022 systemd[1]: Reloading requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:07:55.087043 systemd[1]: Reloading... Aug 13 07:07:55.126546 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:07:55.126972 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:07:55.129747 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:07:55.130101 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Aug 13 07:07:55.130173 systemd-tmpfiles[1392]: ACLs are not supported, ignoring. Aug 13 07:07:55.134183 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:07:55.134197 systemd-tmpfiles[1392]: Skipping /boot Aug 13 07:07:55.150898 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:07:55.150912 systemd-tmpfiles[1392]: Skipping /boot Aug 13 07:07:55.197940 zram_generator::config[1420]: No configuration found. Aug 13 07:07:55.336882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:07:55.350129 systemd-networkd[1223]: eth1: Gained IPv6LL Aug 13 07:07:55.412840 systemd[1]: Reloading finished in 325 ms. Aug 13 07:07:55.437421 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:07:55.457225 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:07:55.463205 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:07:55.472098 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:07:55.487185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:07:55.505044 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:07:55.514634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:07:55.515848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:07:55.519426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:07:55.538939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:07:55.546590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:07:55.551194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:07:55.551368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:07:55.557959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:07:55.558201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:07:55.572872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 13 07:07:55.573072 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:07:55.577685 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:07:55.594641 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:07:55.599167 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:07:55.599387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:07:55.617294 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:07:55.629509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:07:55.630175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:07:55.635076 augenrules[1506]: No rules Aug 13 07:07:55.637242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:07:55.649880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:07:55.656257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:07:55.669479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:07:55.674158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:07:55.684896 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:07:55.685384 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:07:55.685512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:07:55.690161 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:07:55.693960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:07:55.694144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:07:55.696585 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:07:55.698283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:07:55.699675 systemd-resolved[1480]: Positive Trust Anchors: Aug 13 07:07:55.699690 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:07:55.699726 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:07:55.701493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:07:55.701689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:07:55.709241 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 13 07:07:55.709243 systemd-resolved[1480]: Using system hostname 'ci-4081.3.5-e-55e36c071a'. Aug 13 07:07:55.709469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:07:55.716977 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:07:55.721359 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:07:55.727658 systemd[1]: Finished ensure-sysext.service. Aug 13 07:07:55.733986 systemd[1]: Reached target network.target - Network. Aug 13 07:07:55.736424 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:07:55.737116 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:07:55.737683 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:07:55.737880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:07:55.746120 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:07:55.843904 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:07:55.844625 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:07:55.845156 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:07:55.845576 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:07:55.847893 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:07:55.848362 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:07:55.848395 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:07:55.849222 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:07:55.850085 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:07:55.851070 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:07:55.851849 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:07:55.854771 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:07:55.858563 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:07:55.863985 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:07:55.865574 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:07:55.869109 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:07:55.869634 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:07:55.870334 systemd[1]: System is tainted: cgroupsv1 Aug 13 07:07:55.870407 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:07:55.870443 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:07:55.880982 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:07:55.886525 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 07:07:55.891476 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Aug 13 07:07:55.895942 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:07:55.908033 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:07:55.912496 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:07:55.923927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:07:55.927666 jq[1540]: false Aug 13 07:07:55.928469 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:07:55.945196 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:07:55.959063 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:07:55.973052 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:07:55.973753 coreos-metadata[1538]: Aug 13 07:07:55.973 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:07:55.981383 dbus-daemon[1539]: [system] SELinux support is enabled Aug 13 07:07:55.985061 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:07:55.988954 coreos-metadata[1538]: Aug 13 07:07:55.986 INFO Fetch successful Aug 13 07:07:55.992350 extend-filesystems[1541]: Found loop4 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found loop5 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found loop6 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found loop7 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found vda Aug 13 07:07:55.997586 extend-filesystems[1541]: Found vda1 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found vda2 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found vda3 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found usr Aug 13 07:07:55.997586 extend-filesystems[1541]: Found vda4 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found vda6 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found vda7 Aug 13 07:07:55.997586 extend-filesystems[1541]: Found vda9 Aug 13 07:07:55.997586 extend-filesystems[1541]: Checking size of /dev/vda9 Aug 13 07:07:55.999268 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:07:56.007844 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:07:56.022719 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:07:56.039281 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:07:56.050146 systemd-timesyncd[1532]: Contacted time server 137.110.222.27:123 (0.flatcar.pool.ntp.org). Aug 13 07:07:56.050207 systemd-timesyncd[1532]: Initial clock synchronization to Wed 2025-08-13 07:07:56.037703 UTC. Aug 13 07:07:56.063380 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:07:56.079818 jq[1569]: true Aug 13 07:07:56.079697 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:07:56.080105 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:07:56.091360 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:07:56.091767 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 13 07:07:56.100344 extend-filesystems[1541]: Resized partition /dev/vda9 Aug 13 07:07:56.114936 extend-filesystems[1583]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:07:56.136236 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 13 07:07:56.108256 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:07:56.136497 update_engine[1560]: I20250813 07:07:56.131548 1560 main.cc:92] Flatcar Update Engine starting Aug 13 07:07:56.108533 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:07:56.167390 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:07:56.172996 update_engine[1560]: I20250813 07:07:56.166890 1560 update_check_scheduler.cc:74] Next update check in 3m16s Aug 13 07:07:56.194283 (ntainerd)[1586]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:07:56.201744 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:07:56.214012 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:07:56.220812 tar[1581]: linux-amd64/helm Aug 13 07:07:56.221339 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:07:56.222146 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:07:56.222194 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:07:56.229196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1220) Aug 13 07:07:56.223930 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:07:56.224019 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 13 07:07:56.224047 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:07:56.227241 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:07:56.234013 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:07:56.243318 jq[1585]: true Aug 13 07:07:56.304102 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 07:07:56.314676 systemd-logind[1556]: New seat seat0. Aug 13 07:07:56.325188 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:07:56.325188 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 07:07:56.325188 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
Aug 13 07:07:56.324091 systemd-logind[1556]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:07:56.345657 extend-filesystems[1541]: Resized filesystem in /dev/vda9 Aug 13 07:07:56.345657 extend-filesystems[1541]: Found vdb Aug 13 07:07:56.324112 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:07:56.324427 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:07:56.332894 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:07:56.333222 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:07:56.427997 bash[1625]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:07:56.430299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:07:56.448158 systemd[1]: Starting sshkeys.service... Aug 13 07:07:56.480032 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:07:56.489385 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 07:07:56.610419 locksmithd[1605]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:07:56.627505 coreos-metadata[1644]: Aug 13 07:07:56.626 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:07:56.655815 coreos-metadata[1644]: Aug 13 07:07:56.653 INFO Fetch successful Aug 13 07:07:56.677909 unknown[1644]: wrote ssh authorized keys file for user: core Aug 13 07:07:56.721101 update-ssh-keys[1653]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:07:56.725092 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:07:56.741409 systemd[1]: Finished sshkeys.service. Aug 13 07:07:56.775819 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:07:56.890544 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:07:56.911371 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:07:56.942225 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:07:56.943155 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:07:56.956229 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:07:56.967176 containerd[1586]: time="2025-08-13T07:07:56.967052969Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:07:57.014422 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:07:57.028966 containerd[1586]: time="2025-08-13T07:07:57.028741704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:07:57.030572 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:07:57.035051 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:07:57.037077 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:07:57.043538 containerd[1586]: time="2025-08-13T07:07:57.043455845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:07:57.043538 containerd[1586]: time="2025-08-13T07:07:57.043531184Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:07:57.043705 containerd[1586]: time="2025-08-13T07:07:57.043559850Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:07:57.043886 containerd[1586]: time="2025-08-13T07:07:57.043859689Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:07:57.043938 containerd[1586]: time="2025-08-13T07:07:57.043919212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:07:57.044050 containerd[1586]: time="2025-08-13T07:07:57.044027750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:07:57.044089 containerd[1586]: time="2025-08-13T07:07:57.044050811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:07:57.044410 containerd[1586]: time="2025-08-13T07:07:57.044380195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:07:57.044410 containerd[1586]: time="2025-08-13T07:07:57.044405337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:07:57.044506 containerd[1586]: time="2025-08-13T07:07:57.044419146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:07:57.044506 containerd[1586]: time="2025-08-13T07:07:57.044428899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:07:57.044561 containerd[1586]: time="2025-08-13T07:07:57.044527374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:07:57.044773 containerd[1586]: time="2025-08-13T07:07:57.044752183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:07:57.048145 containerd[1586]: time="2025-08-13T07:07:57.048092479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:07:57.048727 containerd[1586]: time="2025-08-13T07:07:57.048357319Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:07:57.048727 containerd[1586]: time="2025-08-13T07:07:57.048514381Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 13 07:07:57.048727 containerd[1586]: time="2025-08-13T07:07:57.048574215Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:07:57.054759 containerd[1586]: time="2025-08-13T07:07:57.054708473Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.054994340Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055022327Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055041429Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055056160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055250830Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055650609Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055810070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055826447Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055839126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055852283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055866900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055880467Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055895295Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:07:57.056814 containerd[1586]: time="2025-08-13T07:07:57.055910975Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.055925404Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.055942661Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.055959001Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.055989814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056006274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056017808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056032785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056052044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056064858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056076222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056088601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056102979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056116694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057246 containerd[1586]: time="2025-08-13T07:07:57.056127745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056138723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056151843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056170904Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056191829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056206503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056216618Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056257639Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056275953Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056287176Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056298758Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056308766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056319943Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056331476Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:07:57.057525 containerd[1586]: time="2025-08-13T07:07:57.056341135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 07:07:57.057828 containerd[1586]: time="2025-08-13T07:07:57.056633457Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:07:57.057828 containerd[1586]: time="2025-08-13T07:07:57.056686384Z" level=info msg="Connect containerd service" Aug 13 07:07:57.057828 containerd[1586]: time="2025-08-13T07:07:57.056725101Z" level=info msg="using legacy CRI server" Aug 13 07:07:57.057828 containerd[1586]: time="2025-08-13T07:07:57.056732745Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:07:57.058687 containerd[1586]: time="2025-08-13T07:07:57.058271705Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:07:57.059271 containerd[1586]: time="2025-08-13T07:07:57.059244877Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:07:57.060081 containerd[1586]: time="2025-08-13T07:07:57.059823576Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:07:57.060081 containerd[1586]: time="2025-08-13T07:07:57.059874978Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:07:57.060081 containerd[1586]: time="2025-08-13T07:07:57.059986469Z" level=info msg="Start subscribing containerd event" Aug 13 07:07:57.060081 containerd[1586]: time="2025-08-13T07:07:57.060027801Z" level=info msg="Start recovering state" Aug 13 07:07:57.060270 containerd[1586]: time="2025-08-13T07:07:57.060257670Z" level=info msg="Start event monitor" Aug 13 07:07:57.060333 containerd[1586]: time="2025-08-13T07:07:57.060322867Z" level=info msg="Start snapshots syncer" Aug 13 07:07:57.060389 containerd[1586]: time="2025-08-13T07:07:57.060377438Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:07:57.060812 containerd[1586]: time="2025-08-13T07:07:57.060436284Z" level=info msg="Start streaming server" Aug 13 07:07:57.060812 containerd[1586]: time="2025-08-13T07:07:57.060524006Z" level=info msg="containerd successfully booted in 0.094744s" Aug 13 07:07:57.060753 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:07:57.295604 tar[1581]: linux-amd64/LICENSE Aug 13 07:07:57.298825 tar[1581]: linux-amd64/README.md Aug 13 07:07:57.323119 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:07:57.843154 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:07:57.855273 systemd[1]: Started sshd@0-165.232.152.216:22-139.178.89.65:33224.service - OpenSSH per-connection server daemon (139.178.89.65:33224). Aug 13 07:07:57.930922 sshd[1689]: Accepted publickey for core from 139.178.89.65 port 33224 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:57.933137 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:57.948895 systemd-logind[1556]: New session 1 of user core. Aug 13 07:07:57.950415 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:07:57.956163 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:07:57.995051 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Aug 13 07:07:58.011311 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:07:58.019126 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:07:58.062059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:07:58.065398 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:07:58.079395 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:07:58.161604 systemd[1695]: Queued start job for default target default.target. Aug 13 07:07:58.162717 systemd[1695]: Created slice app.slice - User Application Slice. Aug 13 07:07:58.162751 systemd[1695]: Reached target paths.target - Paths. Aug 13 07:07:58.162767 systemd[1695]: Reached target timers.target - Timers. Aug 13 07:07:58.169002 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:07:58.179805 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:07:58.181101 systemd[1695]: Reached target sockets.target - Sockets. Aug 13 07:07:58.181133 systemd[1695]: Reached target basic.target - Basic System. Aug 13 07:07:58.181218 systemd[1695]: Reached target default.target - Main User Target. Aug 13 07:07:58.181271 systemd[1695]: Startup finished in 147ms. Aug 13 07:07:58.181941 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:07:58.192765 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:07:58.198432 systemd[1]: Startup finished in 7.400s (kernel) + 7.286s (userspace) = 14.687s. Aug 13 07:07:58.272303 systemd[1]: Started sshd@1-165.232.152.216:22-139.178.89.65:33236.service - OpenSSH per-connection server daemon (139.178.89.65:33236). Aug 13 07:07:58.337192 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 33236 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:58.339471 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:58.346875 systemd-logind[1556]: New session 2 of user core. Aug 13 07:07:58.351279 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:07:58.417080 sshd[1724]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:58.426153 systemd[1]: Started sshd@2-165.232.152.216:22-139.178.89.65:33240.service - OpenSSH per-connection server daemon (139.178.89.65:33240). Aug 13 07:07:58.426908 systemd[1]: sshd@1-165.232.152.216:22-139.178.89.65:33236.service: Deactivated successfully. Aug 13 07:07:58.438179 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:07:58.443257 systemd-logind[1556]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:07:58.444756 systemd-logind[1556]: Removed session 2. Aug 13 07:07:58.493438 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 33240 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:58.495247 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:58.500387 systemd-logind[1556]: New session 3 of user core. Aug 13 07:07:58.508291 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:07:58.580595 sshd[1729]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:58.592717 systemd[1]: Started sshd@3-165.232.152.216:22-139.178.89.65:33250.service - OpenSSH per-connection server daemon (139.178.89.65:33250). 
Aug 13 07:07:58.593616 systemd[1]: sshd@2-165.232.152.216:22-139.178.89.65:33240.service: Deactivated successfully. Aug 13 07:07:58.602172 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:07:58.605548 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:07:58.606990 systemd-logind[1556]: Removed session 3. Aug 13 07:07:58.650815 sshd[1738]: Accepted publickey for core from 139.178.89.65 port 33250 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:58.652768 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:58.658976 systemd-logind[1556]: New session 4 of user core. Aug 13 07:07:58.665356 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:07:58.730973 sshd[1738]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:58.741615 systemd[1]: Started sshd@4-165.232.152.216:22-139.178.89.65:33266.service - OpenSSH per-connection server daemon (139.178.89.65:33266). Aug 13 07:07:58.742422 systemd[1]: sshd@3-165.232.152.216:22-139.178.89.65:33250.service: Deactivated successfully. Aug 13 07:07:58.751954 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:07:58.757423 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:07:58.764798 systemd-logind[1556]: Removed session 4. Aug 13 07:07:58.794949 kubelet[1707]: E0813 07:07:58.794874 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:07:58.797721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:07:58.798001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:07:58.804316 sshd[1746]: Accepted publickey for core from 139.178.89.65 port 33266 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:58.805283 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:58.811849 systemd-logind[1556]: New session 5 of user core. Aug 13 07:07:58.821430 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:07:58.899389 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:07:58.900473 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:07:58.924723 sudo[1756]: pam_unix(sudo:session): session closed for user root Aug 13 07:07:58.931272 sshd[1746]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:58.943404 systemd[1]: Started sshd@5-165.232.152.216:22-139.178.89.65:44286.service - OpenSSH per-connection server daemon (139.178.89.65:44286). Aug 13 07:07:58.944236 systemd[1]: sshd@4-165.232.152.216:22-139.178.89.65:33266.service: Deactivated successfully. Aug 13 07:07:58.959195 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:07:58.961983 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:07:58.965421 systemd-logind[1556]: Removed session 5. 
Aug 13 07:07:59.000634 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 44286 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:59.002519 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:59.010628 systemd-logind[1556]: New session 6 of user core. Aug 13 07:07:59.019479 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:07:59.085541 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:07:59.086184 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:07:59.091444 sudo[1766]: pam_unix(sudo:session): session closed for user root Aug 13 07:07:59.100006 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:07:59.100434 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:07:59.119268 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:07:59.133967 auditctl[1769]: No rules Aug 13 07:07:59.134675 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:07:59.134969 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:07:59.154173 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:07:59.186998 augenrules[1788]: No rules Aug 13 07:07:59.187858 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:07:59.188971 sudo[1765]: pam_unix(sudo:session): session closed for user root Aug 13 07:07:59.194105 sshd[1758]: pam_unix(sshd:session): session closed for user core Aug 13 07:07:59.204280 systemd[1]: Started sshd@6-165.232.152.216:22-139.178.89.65:44302.service - OpenSSH per-connection server daemon (139.178.89.65:44302). Aug 13 07:07:59.204854 systemd[1]: sshd@5-165.232.152.216:22-139.178.89.65:44286.service: Deactivated successfully. Aug 13 07:07:59.210339 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:07:59.212611 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:07:59.215792 systemd-logind[1556]: Removed session 6. Aug 13 07:07:59.247883 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 44302 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:07:59.249542 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:07:59.254698 systemd-logind[1556]: New session 7 of user core. Aug 13 07:07:59.266483 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:07:59.327309 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:07:59.327694 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:07:59.778187 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:07:59.781084 (dockerd)[1816]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:08:00.275921 dockerd[1816]: time="2025-08-13T07:08:00.275851653Z" level=info msg="Starting up" Aug 13 07:08:00.399005 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport552765037-merged.mount: Deactivated successfully. 
Aug 13 07:08:00.492410 dockerd[1816]: time="2025-08-13T07:08:00.492360302Z" level=info msg="Loading containers: start." Aug 13 07:08:00.624857 kernel: Initializing XFRM netlink socket Aug 13 07:08:00.730764 systemd-networkd[1223]: docker0: Link UP Aug 13 07:08:00.752291 dockerd[1816]: time="2025-08-13T07:08:00.752207949Z" level=info msg="Loading containers: done." Aug 13 07:08:00.773661 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1866745033-merged.mount: Deactivated successfully. Aug 13 07:08:00.776202 dockerd[1816]: time="2025-08-13T07:08:00.776128859Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:08:00.776352 dockerd[1816]: time="2025-08-13T07:08:00.776300770Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:08:00.776505 dockerd[1816]: time="2025-08-13T07:08:00.776465067Z" level=info msg="Daemon has completed initialization" Aug 13 07:08:00.816593 dockerd[1816]: time="2025-08-13T07:08:00.816476316Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:08:00.817018 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:08:01.716379 containerd[1586]: time="2025-08-13T07:08:01.716278465Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 07:08:02.351584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3273992701.mount: Deactivated successfully. Aug 13 07:08:03.538495 containerd[1586]: time="2025-08-13T07:08:03.538403585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:03.540304 containerd[1586]: time="2025-08-13T07:08:03.540221649Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 07:08:03.542833 containerd[1586]: time="2025-08-13T07:08:03.541164239Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:03.546955 containerd[1586]: time="2025-08-13T07:08:03.546882845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:03.552247 containerd[1586]: time="2025-08-13T07:08:03.552185276Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.835840462s" Aug 13 07:08:03.552482 containerd[1586]: time="2025-08-13T07:08:03.552463163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 07:08:03.553526 containerd[1586]: time="2025-08-13T07:08:03.553403011Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 07:08:04.886095 containerd[1586]: time="2025-08-13T07:08:04.884765555Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:04.886095 containerd[1586]: time="2025-08-13T07:08:04.885872727Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 07:08:04.886095 containerd[1586]: time="2025-08-13T07:08:04.886033135Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:04.890250 containerd[1586]: time="2025-08-13T07:08:04.890197017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:04.891879 containerd[1586]: time="2025-08-13T07:08:04.891831683Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.338378636s" Aug 13 07:08:04.891879 containerd[1586]: time="2025-08-13T07:08:04.891877499Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 07:08:04.892758 containerd[1586]: time="2025-08-13T07:08:04.892571131Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 07:08:05.078282 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Aug 13 07:08:06.135023 containerd[1586]: time="2025-08-13T07:08:06.134952902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:06.136414 containerd[1586]: time="2025-08-13T07:08:06.136145668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 07:08:06.137437 containerd[1586]: time="2025-08-13T07:08:06.136987142Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:06.139691 containerd[1586]: time="2025-08-13T07:08:06.139656207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:06.141516 containerd[1586]: time="2025-08-13T07:08:06.141465713Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.248859927s" Aug 13 07:08:06.141516 containerd[1586]: time="2025-08-13T07:08:06.141509569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 07:08:06.143594 containerd[1586]: time="2025-08-13T07:08:06.143549298Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 07:08:07.214095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824317231.mount: Deactivated successfully. 
Aug 13 07:08:07.787339 containerd[1586]: time="2025-08-13T07:08:07.786379757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:07.788384 containerd[1586]: time="2025-08-13T07:08:07.788315582Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 07:08:07.789259 containerd[1586]: time="2025-08-13T07:08:07.789189465Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:07.792395 containerd[1586]: time="2025-08-13T07:08:07.791879724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:07.793211 containerd[1586]: time="2025-08-13T07:08:07.793162802Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.649546004s" Aug 13 07:08:07.793697 containerd[1586]: time="2025-08-13T07:08:07.793217675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 07:08:07.794183 containerd[1586]: time="2025-08-13T07:08:07.794127111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:08:08.150100 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 13 07:08:08.323414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269795548.mount: Deactivated successfully. Aug 13 07:08:08.944371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:08:08.952291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:09.177173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:09.187490 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:08:09.275227 kubelet[2099]: E0813 07:08:09.275063 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:08:09.280255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:08:09.280612 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 07:08:09.364440 containerd[1586]: time="2025-08-13T07:08:09.364366455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:09.365808 containerd[1586]: time="2025-08-13T07:08:09.365688330Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 07:08:09.366333 containerd[1586]: time="2025-08-13T07:08:09.366300930Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:09.369250 containerd[1586]: time="2025-08-13T07:08:09.369193393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:09.370811 containerd[1586]: time="2025-08-13T07:08:09.370671802Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.576497953s" Aug 13 07:08:09.370811 containerd[1586]: time="2025-08-13T07:08:09.370723377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:08:09.372299 containerd[1586]: time="2025-08-13T07:08:09.372061466Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:08:09.864077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770388396.mount: Deactivated successfully. 
Aug 13 07:08:09.868933 containerd[1586]: time="2025-08-13T07:08:09.868070723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:09.869589 containerd[1586]: time="2025-08-13T07:08:09.869541414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:08:09.870399 containerd[1586]: time="2025-08-13T07:08:09.870367510Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:09.872890 containerd[1586]: time="2025-08-13T07:08:09.872858236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:09.873901 containerd[1586]: time="2025-08-13T07:08:09.873857037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.758817ms" Aug 13 07:08:09.874002 containerd[1586]: time="2025-08-13T07:08:09.873908131Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:08:09.874537 containerd[1586]: time="2025-08-13T07:08:09.874504577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:08:10.374856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3871645237.mount: Deactivated successfully. Aug 13 07:08:11.222067 systemd-resolved[1480]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Aug 13 07:08:12.021021 containerd[1586]: time="2025-08-13T07:08:12.020942706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:12.023644 containerd[1586]: time="2025-08-13T07:08:12.023548971Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 07:08:12.025392 containerd[1586]: time="2025-08-13T07:08:12.025321352Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:12.030486 containerd[1586]: time="2025-08-13T07:08:12.030425700Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.155768203s" Aug 13 07:08:12.031964 containerd[1586]: time="2025-08-13T07:08:12.030715524Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 07:08:12.031964 containerd[1586]: time="2025-08-13T07:08:12.030679360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:14.897942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:14.907271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:14.949744 systemd[1]: Reloading requested from client PID 2194 ('systemctl') (unit session-7.scope)... Aug 13 07:08:14.950017 systemd[1]: Reloading... Aug 13 07:08:15.115828 zram_generator::config[2234]: No configuration found. Aug 13 07:08:15.252524 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:08:15.339873 systemd[1]: Reloading finished in 389 ms. Aug 13 07:08:15.383551 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:08:15.384013 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:08:15.384976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:15.396513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:15.527067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:15.536355 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:08:15.586944 kubelet[2296]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:08:15.586944 kubelet[2296]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 13 07:08:15.586944 kubelet[2296]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:08:15.587602 kubelet[2296]: I0813 07:08:15.587008 2296 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:08:15.829056 kubelet[2296]: I0813 07:08:15.828923 2296 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:08:15.829056 kubelet[2296]: I0813 07:08:15.828969 2296 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:08:15.829995 kubelet[2296]: I0813 07:08:15.829958 2296 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:08:15.858370 kubelet[2296]: E0813 07:08:15.858306 2296 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://165.232.152.216:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:15.859493 kubelet[2296]: I0813 07:08:15.859470 2296 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:08:15.866844 kubelet[2296]: E0813 07:08:15.866666 2296 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:08:15.866844 kubelet[2296]: I0813 07:08:15.866709 2296 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:08:15.871894 kubelet[2296]: I0813 07:08:15.871863 2296 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:08:15.872909 kubelet[2296]: I0813 07:08:15.872880 2296 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:08:15.873432 kubelet[2296]: I0813 07:08:15.873399 2296 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:08:15.873708 kubelet[2296]: I0813 07:08:15.873521 2296 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-e-55e36c071a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:08:15.873893 kubelet[2296]: I0813 07:08:15.873880 2296 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:08:15.874214 kubelet[2296]: I0813 07:08:15.873938 2296 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:08:15.874214 kubelet[2296]: I0813 07:08:15.874067 2296 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:08:15.878921 kubelet[2296]: I0813 07:08:15.878568 2296 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:08:15.878921 kubelet[2296]: I0813 07:08:15.878622 2296 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:08:15.878921 kubelet[2296]: I0813 07:08:15.878688 2296 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:08:15.878921 kubelet[2296]: I0813 07:08:15.878709 2296 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:08:15.882933 kubelet[2296]: W0813 07:08:15.882870 2296 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.152.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-e-55e36c071a&limit=500&resourceVersion=0": dial tcp 165.232.152.216:6443: connect: connection refused Aug 13 07:08:15.883069 kubelet[2296]: E0813 07:08:15.882960 2296 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://165.232.152.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-e-55e36c071a&limit=500&resourceVersion=0\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:15.884816 kubelet[2296]: W0813 07:08:15.884217 2296 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.152.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.152.216:6443: connect: connection refused Aug 13 07:08:15.884816 kubelet[2296]: E0813 07:08:15.884273 2296 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.152.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:15.886849 kubelet[2296]: I0813 07:08:15.885147 2296 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:08:15.888329 kubelet[2296]: I0813 07:08:15.888304 2296 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:08:15.888854 kubelet[2296]: W0813 07:08:15.888834 2296 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:08:15.889566 kubelet[2296]: I0813 07:08:15.889506 2296 server.go:1274] "Started kubelet" Aug 13 07:08:15.891653 kubelet[2296]: I0813 07:08:15.890471 2296 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:08:15.896815 kubelet[2296]: I0813 07:08:15.895263 2296 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:08:15.900592 kubelet[2296]: I0813 07:08:15.900451 2296 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:08:15.904101 kubelet[2296]: E0813 07:08:15.902863 2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://165.232.152.216:6443/api/v1/namespaces/default/events\": dial tcp 165.232.152.216:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-e-55e36c071a.185b41dcafd4463a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-e-55e36c071a,UID:ci-4081.3.5-e-55e36c071a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-e-55e36c071a,},FirstTimestamp:2025-08-13 07:08:15.88947513 +0000 UTC m=+0.348840430,LastTimestamp:2025-08-13 07:08:15.88947513 +0000 UTC m=+0.348840430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-e-55e36c071a,}" Aug 13 07:08:15.905333 kubelet[2296]: I0813 07:08:15.904629 2296 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:08:15.906963 kubelet[2296]: I0813 07:08:15.906938 2296 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:08:15.908772 kubelet[2296]: I0813 07:08:15.908298 2296 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:08:15.914581 kubelet[2296]: I0813 
07:08:15.914550 2296 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:08:15.914711 kubelet[2296]: E0813 07:08:15.914668 2296 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-e-55e36c071a\" not found" Aug 13 07:08:15.915143 kubelet[2296]: I0813 07:08:15.915124 2296 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:08:15.915422 kubelet[2296]: I0813 07:08:15.915404 2296 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:08:15.916282 kubelet[2296]: I0813 07:08:15.916265 2296 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:08:15.917143 kubelet[2296]: I0813 07:08:15.917106 2296 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:08:15.917636 kubelet[2296]: W0813 07:08:15.917584 2296 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.152.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.152.216:6443: connect: connection refused Aug 13 07:08:15.918039 kubelet[2296]: E0813 07:08:15.918010 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.152.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-e-55e36c071a?timeout=10s\": dial tcp 165.232.152.216:6443: connect: connection refused" interval="200ms" Aug 13 07:08:15.918200 kubelet[2296]: E0813 07:08:15.918129 2296 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.152.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:15.919298 kubelet[2296]: I0813 07:08:15.919284 2296 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:08:15.925025 kubelet[2296]: E0813 07:08:15.924990 2296 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:08:15.947408 kubelet[2296]: I0813 07:08:15.947345 2296 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:08:15.949926 kubelet[2296]: I0813 07:08:15.949881 2296 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:08:15.949926 kubelet[2296]: I0813 07:08:15.949924 2296 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:08:15.950110 kubelet[2296]: I0813 07:08:15.949955 2296 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:08:15.950110 kubelet[2296]: E0813 07:08:15.950026 2296 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:08:15.955477 kubelet[2296]: W0813 07:08:15.955427 2296 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.152.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.152.216:6443: connect: connection refused Aug 13 07:08:15.955626 kubelet[2296]: E0813 07:08:15.955489 2296 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://165.232.152.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:15.957047 kubelet[2296]: I0813 07:08:15.956502 2296 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:08:15.957047 kubelet[2296]: I0813 07:08:15.956521 2296 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:08:15.957047 kubelet[2296]: I0813 07:08:15.956540 2296 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:08:15.958635 kubelet[2296]: I0813 07:08:15.958596 2296 policy_none.go:49] "None policy: Start" Aug 13 07:08:15.959443 kubelet[2296]: I0813 07:08:15.959426 2296 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:08:15.959523 kubelet[2296]: I0813 07:08:15.959451 2296 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:08:15.965830 kubelet[2296]: I0813 07:08:15.965116 2296 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:08:15.965830 kubelet[2296]: I0813 07:08:15.965315 2296 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:08:15.965830 kubelet[2296]: I0813 07:08:15.965326 2296 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:08:15.967256 kubelet[2296]: I0813 07:08:15.967227 2296 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:08:15.971317 kubelet[2296]: E0813 07:08:15.971293 2296 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-e-55e36c071a\" not found" Aug 13 07:08:16.072043 kubelet[2296]: I0813 07:08:16.072004 2296 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.072546 kubelet[2296]: E0813 07:08:16.072521 2296 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.152.216:6443/api/v1/nodes\": dial tcp 165.232.152.216:6443: connect: connection refused" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.118410 kubelet[2296]: I0813 07:08:16.118236 2296 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: 
\"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.118410 kubelet[2296]: I0813 07:08:16.118323 2296 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3cb54c4cc5f5d6f21603c547d197cbe-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-e-55e36c071a\" (UID: \"c3cb54c4cc5f5d6f21603c547d197cbe\") " pod="kube-system/kube-scheduler-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.118410 kubelet[2296]: I0813 07:08:16.118375 2296 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8879756649a787abf12c5e837b047025-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-e-55e36c071a\" (UID: \"8879756649a787abf12c5e837b047025\") " pod="kube-system/kube-apiserver-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.119383 kubelet[2296]: E0813 07:08:16.119199 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.152.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-e-55e36c071a?timeout=10s\": dial tcp 165.232.152.216:6443: connect: connection refused" interval="400ms" Aug 13 07:08:16.119631 kubelet[2296]: I0813 07:08:16.119340 2296 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8879756649a787abf12c5e837b047025-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-e-55e36c071a\" (UID: \"8879756649a787abf12c5e837b047025\") " pod="kube-system/kube-apiserver-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.119877 kubelet[2296]: I0813 07:08:16.119740 2296 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.119877 kubelet[2296]: I0813 07:08:16.119824 2296 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.120165 kubelet[2296]: I0813 07:08:16.119862 2296 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.120165 kubelet[2296]: I0813 07:08:16.120077 2296 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8879756649a787abf12c5e837b047025-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-e-55e36c071a\" (UID: \"8879756649a787abf12c5e837b047025\") " pod="kube-system/kube-apiserver-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.120165 kubelet[2296]: I0813 07:08:16.120132 2296 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.274505 kubelet[2296]: I0813 07:08:16.274185 2296 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.274896 kubelet[2296]: E0813 07:08:16.274855 2296 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.152.216:6443/api/v1/nodes\": dial tcp 165.232.152.216:6443: connect: connection refused" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.356852 kubelet[2296]: E0813 07:08:16.356725 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:16.357864 containerd[1586]: time="2025-08-13T07:08:16.357538016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-e-55e36c071a,Uid:8879756649a787abf12c5e837b047025,Namespace:kube-system,Attempt:0,}" Aug 13 07:08:16.358297 kubelet[2296]: E0813 07:08:16.358007 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:16.364597 containerd[1586]: time="2025-08-13T07:08:16.363946966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-e-55e36c071a,Uid:121370abe61ca9bb893e2a6f1a02e8b5,Namespace:kube-system,Attempt:0,}" Aug 13 07:08:16.366034 kubelet[2296]: E0813 07:08:16.365892 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:16.367073 containerd[1586]: time="2025-08-13T07:08:16.366966232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-e-55e36c071a,Uid:c3cb54c4cc5f5d6f21603c547d197cbe,Namespace:kube-system,Attempt:0,}" Aug 13 07:08:16.520403 kubelet[2296]: E0813 07:08:16.520339 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.152.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-e-55e36c071a?timeout=10s\": dial tcp 165.232.152.216:6443: connect: connection refused" interval="800ms" Aug 13 07:08:16.676527 kubelet[2296]: I0813 07:08:16.676471 2296 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.677067 kubelet[2296]: E0813 07:08:16.676870 2296 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.152.216:6443/api/v1/nodes\": dial tcp 165.232.152.216:6443: connect: connection refused" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:16.831671 kubelet[2296]: W0813 07:08:16.831522 2296 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://165.232.152.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 165.232.152.216:6443: connect: connection refused Aug 13 07:08:16.831671 kubelet[2296]: E0813 07:08:16.831616 2296 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://165.232.152.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:16.832748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608384704.mount: Deactivated successfully. Aug 13 07:08:16.836932 containerd[1586]: time="2025-08-13T07:08:16.836892055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:08:16.837525 containerd[1586]: time="2025-08-13T07:08:16.837479821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:08:16.838186 containerd[1586]: time="2025-08-13T07:08:16.838152014Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:08:16.839503 containerd[1586]: time="2025-08-13T07:08:16.839459468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:08:16.839709 containerd[1586]: time="2025-08-13T07:08:16.839681495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:08:16.841264 containerd[1586]: time="2025-08-13T07:08:16.841069287Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:08:16.842079 containerd[1586]: time="2025-08-13T07:08:16.842044015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:08:16.843179 containerd[1586]: time="2025-08-13T07:08:16.843137714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:08:16.845323 containerd[1586]: time="2025-08-13T07:08:16.845132546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.754542ms" Aug 13 07:08:16.846770 containerd[1586]: time="2025-08-13T07:08:16.846683919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.647633ms" Aug 13 07:08:16.850310 containerd[1586]: time="2025-08-13T07:08:16.850093025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" 
in 492.475366ms" Aug 13 07:08:17.024327 containerd[1586]: time="2025-08-13T07:08:17.024094656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:08:17.024327 containerd[1586]: time="2025-08-13T07:08:17.024253806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:17.024327 containerd[1586]: time="2025-08-13T07:08:17.024282278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:17.026398 containerd[1586]: time="2025-08-13T07:08:17.026233369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:17.032316 containerd[1586]: time="2025-08-13T07:08:17.032042415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:08:17.032316 containerd[1586]: time="2025-08-13T07:08:17.032110775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:17.032316 containerd[1586]: time="2025-08-13T07:08:17.032127711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:17.032316 containerd[1586]: time="2025-08-13T07:08:17.032239079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:17.034372 containerd[1586]: time="2025-08-13T07:08:17.034140983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:08:17.034372 containerd[1586]: time="2025-08-13T07:08:17.034278424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:17.034372 containerd[1586]: time="2025-08-13T07:08:17.034335601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:17.034971 containerd[1586]: time="2025-08-13T07:08:17.034503557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:17.161281 containerd[1586]: time="2025-08-13T07:08:17.159260723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-e-55e36c071a,Uid:8879756649a787abf12c5e837b047025,Namespace:kube-system,Attempt:0,} returns sandbox id \"73077924d0b35efe439766bb443bb0618671d786c74ef65673c09e484ae440d9\"" Aug 13 07:08:17.165740 kubelet[2296]: E0813 07:08:17.165568 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:17.173977 containerd[1586]: time="2025-08-13T07:08:17.173823653Z" level=info msg="CreateContainer within sandbox \"73077924d0b35efe439766bb443bb0618671d786c74ef65673c09e484ae440d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:08:17.175369 containerd[1586]: time="2025-08-13T07:08:17.175286475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-e-55e36c071a,Uid:121370abe61ca9bb893e2a6f1a02e8b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb91ce409d1d9b6fabafcb8875aeb464ccae9fc1dc706cfe75ffe3e0a632022b\"" Aug 13 07:08:17.176346 kubelet[2296]: E0813 07:08:17.176144 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:17.177543 containerd[1586]: time="2025-08-13T07:08:17.177502971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-e-55e36c071a,Uid:c3cb54c4cc5f5d6f21603c547d197cbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dd9d551a0591eeaf2a48f0a54779183e8e121d445a8a9e618390ab3484154ac\"" Aug 13 07:08:17.180808 kubelet[2296]: E0813 07:08:17.180622 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:17.182979 containerd[1586]: time="2025-08-13T07:08:17.182933446Z" level=info msg="CreateContainer within sandbox \"eb91ce409d1d9b6fabafcb8875aeb464ccae9fc1dc706cfe75ffe3e0a632022b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:08:17.183894 containerd[1586]: time="2025-08-13T07:08:17.183859930Z" level=info msg="CreateContainer within sandbox \"5dd9d551a0591eeaf2a48f0a54779183e8e121d445a8a9e618390ab3484154ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:08:17.203104 containerd[1586]: time="2025-08-13T07:08:17.203041139Z" level=info msg="CreateContainer within sandbox \"73077924d0b35efe439766bb443bb0618671d786c74ef65673c09e484ae440d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bab96a4f1018cca557fbb25f26fac06a9cee5aaed545b1c10afcfa7096385a1e\"" Aug 13 07:08:17.204486 kubelet[2296]: W0813 07:08:17.204406 2296 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://165.232.152.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 165.232.152.216:6443: connect: connection refused Aug 13 07:08:17.204649 kubelet[2296]: E0813 07:08:17.204506 2296 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://165.232.152.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:17.204744 containerd[1586]: time="2025-08-13T07:08:17.204698960Z" level=info msg="StartContainer for \"bab96a4f1018cca557fbb25f26fac06a9cee5aaed545b1c10afcfa7096385a1e\"" Aug 13 07:08:17.205202 containerd[1586]: time="2025-08-13T07:08:17.205165124Z" level=info msg="CreateContainer within sandbox \"5dd9d551a0591eeaf2a48f0a54779183e8e121d445a8a9e618390ab3484154ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"31b9b10f614803b77ab97aabf225fc60daab028d120114d8631d23146807b315\"" Aug 13 07:08:17.207087 containerd[1586]: time="2025-08-13T07:08:17.207042110Z" level=info msg="StartContainer for \"31b9b10f614803b77ab97aabf225fc60daab028d120114d8631d23146807b315\"" Aug 13 07:08:17.212403 containerd[1586]: time="2025-08-13T07:08:17.212212815Z" level=info msg="CreateContainer within sandbox \"eb91ce409d1d9b6fabafcb8875aeb464ccae9fc1dc706cfe75ffe3e0a632022b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"45ebc95d991d6bc9a9423c1cfc68c73e11251ca02819a0b4a98343889034c757\"" Aug 13 07:08:17.213518 containerd[1586]: time="2025-08-13T07:08:17.213305462Z" level=info msg="StartContainer for \"45ebc95d991d6bc9a9423c1cfc68c73e11251ca02819a0b4a98343889034c757\"" Aug 13 07:08:17.321388 kubelet[2296]: E0813 07:08:17.321339 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://165.232.152.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-e-55e36c071a?timeout=10s\": dial tcp 165.232.152.216:6443: connect: connection refused" interval="1.6s" Aug 13 07:08:17.326633 kubelet[2296]: W0813 07:08:17.326552 2296 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://165.232.152.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 165.232.152.216:6443: connect: connection refused Aug 13 07:08:17.326633 kubelet[2296]: E0813 07:08:17.326634 2296 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://165.232.152.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:17.363466 containerd[1586]: time="2025-08-13T07:08:17.362768662Z" level=info msg="StartContainer for \"bab96a4f1018cca557fbb25f26fac06a9cee5aaed545b1c10afcfa7096385a1e\" returns successfully" Aug 13 07:08:17.386233 containerd[1586]: time="2025-08-13T07:08:17.386175810Z" level=info msg="StartContainer for \"31b9b10f614803b77ab97aabf225fc60daab028d120114d8631d23146807b315\" returns successfully" Aug 13 07:08:17.397353 containerd[1586]: time="2025-08-13T07:08:17.397281175Z" level=info msg="StartContainer for \"45ebc95d991d6bc9a9423c1cfc68c73e11251ca02819a0b4a98343889034c757\" returns successfully" Aug 13 07:08:17.467986 kubelet[2296]: W0813 07:08:17.467259 2296 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://165.232.152.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-e-55e36c071a&limit=500&resourceVersion=0": dial tcp 165.232.152.216:6443: connect: connection refused Aug 13 07:08:17.467986 kubelet[2296]: E0813 07:08:17.467364 2296 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://165.232.152.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-e-55e36c071a&limit=500&resourceVersion=0\": dial tcp 165.232.152.216:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:08:17.478229 kubelet[2296]: I0813 07:08:17.478166 2296 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:17.478943 kubelet[2296]: E0813 07:08:17.478754 2296 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://165.232.152.216:6443/api/v1/nodes\": dial tcp 165.232.152.216:6443: connect: connection refused" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:17.968691 kubelet[2296]: E0813 07:08:17.968627 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:17.975803 kubelet[2296]: E0813 07:08:17.973847 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:17.980288 kubelet[2296]: E0813 07:08:17.980258 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:18.988916 kubelet[2296]: E0813 07:08:18.987992 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:19.081909 kubelet[2296]: I0813 07:08:19.080549 2296 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:19.897868 kubelet[2296]: E0813 07:08:19.897817 2296 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-e-55e36c071a\" not found" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:19.979758 kubelet[2296]: I0813 07:08:19.979691 2296 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:19.979758 kubelet[2296]: E0813 07:08:19.979756 2296 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-e-55e36c071a\": node \"ci-4081.3.5-e-55e36c071a\" not found" Aug 13 07:08:19.993670 kubelet[2296]: E0813 07:08:19.993627 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:20.060200 kubelet[2296]: E0813 07:08:20.059905 2296 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-e-55e36c071a\" not found" Aug 13 07:08:20.162802 kubelet[2296]: E0813 07:08:20.161006 2296 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-e-55e36c071a\" not found" Aug 13 07:08:20.261857 kubelet[2296]: E0813 07:08:20.261753 2296 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-e-55e36c071a\" not found" Aug 13 07:08:20.689122 kubelet[2296]: E0813 07:08:20.688775 2296 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:20.689122 kubelet[2296]: E0813 07:08:20.689043 2296 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:20.885229 kubelet[2296]: I0813 07:08:20.885187 2296 apiserver.go:52] "Watching apiserver" Aug 13 07:08:20.916882 kubelet[2296]: I0813 07:08:20.916844 2296 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:08:22.001912 systemd[1]: Reloading requested from client PID 2569 ('systemctl') (unit session-7.scope)... Aug 13 07:08:22.001932 systemd[1]: Reloading... Aug 13 07:08:22.093824 zram_generator::config[2604]: No configuration found. Aug 13 07:08:22.266180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:08:22.365800 systemd[1]: Reloading finished in 363 ms. Aug 13 07:08:22.408687 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:22.429234 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:08:22.429716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:22.438180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:08:22.575097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:08:22.587670 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:08:22.648816 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:08:22.648816 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:08:22.648816 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:08:22.649556 kubelet[2669]: I0813 07:08:22.648931 2669 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:08:22.661593 kubelet[2669]: I0813 07:08:22.660705 2669 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:08:22.661593 kubelet[2669]: I0813 07:08:22.660766 2669 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:08:22.661593 kubelet[2669]: I0813 07:08:22.661117 2669 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:08:22.663069 kubelet[2669]: I0813 07:08:22.663042 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
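The restarted kubelet (PID 2669) again reports that client rotation is on, but this time certificate_store.go loads an existing cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem, whereas the earlier instance (PID 2296) was still failing to post its certificate signing request to https://165.232.152.216:6443 (connection refused). A minimal, stdlib-only Go sketch for inspecting that combined PEM file, assuming nothing beyond the path copied from the entry above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Path copied from the certificate_store.go entry above; adjust it if the
    	// node runs kubelet with a non-default --cert-dir.
    	const path = "/var/lib/kubelet/pki/kubelet-client-current.pem"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The file holds the client certificate and its private key back to back;
    	// walk the PEM blocks and decode only the certificate(s).
    	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
    		if block.Type != "CERTIFICATE" {
    			continue
    		}
    		cert, err := x509.ParseCertificate(block.Bytes)
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
    			cert.Subject, cert.NotBefore, cert.NotAfter)
    	}
    }

Run on the node (the file is typically readable only by root), it prints the client certificate's subject and validity window, which is usually enough to tell whether bootstrap succeeded and roughly when rotation is due.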
Aug 13 07:08:22.670102 kubelet[2669]: I0813 07:08:22.670057 2669 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:08:22.674016 kubelet[2669]: E0813 07:08:22.673970 2669 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:08:22.674016 kubelet[2669]: I0813 07:08:22.674015 2669 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:08:22.677395 kubelet[2669]: I0813 07:08:22.677355 2669 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:08:22.677879 kubelet[2669]: I0813 07:08:22.677858 2669 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:08:22.678009 kubelet[2669]: I0813 07:08:22.677972 2669 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:08:22.678201 kubelet[2669]: I0813 07:08:22.678009 2669 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-e-55e36c071a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:08:22.678201 kubelet[2669]: I0813 07:08:22.678183 2669 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:08:22.678201 kubelet[2669]: I0813 07:08:22.678192 2669 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:08:22.678354 kubelet[2669]: I0813 07:08:22.678220 2669 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:08:22.678354 kubelet[2669]: I0813 07:08:22.678324 2669 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:08:22.678354 kubelet[2669]: I0813 07:08:22.678336 2669 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:08:22.678436 kubelet[2669]: I0813 
07:08:22.678371 2669 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:08:22.678436 kubelet[2669]: I0813 07:08:22.678390 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:08:22.694898 kubelet[2669]: I0813 07:08:22.693237 2669 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:08:22.694898 kubelet[2669]: I0813 07:08:22.693827 2669 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:08:22.694898 kubelet[2669]: I0813 07:08:22.694338 2669 server.go:1274] "Started kubelet" Aug 13 07:08:22.701369 kubelet[2669]: I0813 07:08:22.701333 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:08:22.705460 kubelet[2669]: I0813 07:08:22.705401 2669 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:08:22.709460 kubelet[2669]: I0813 07:08:22.709427 2669 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:08:22.711478 kubelet[2669]: I0813 07:08:22.711428 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:08:22.712210 kubelet[2669]: I0813 07:08:22.712186 2669 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:08:22.713856 kubelet[2669]: I0813 07:08:22.713833 2669 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:08:22.718528 kubelet[2669]: I0813 07:08:22.718493 2669 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:08:22.725101 kubelet[2669]: I0813 07:08:22.725056 2669 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:08:22.725582 kubelet[2669]: I0813 07:08:22.725557 2669 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:08:22.729379 kubelet[2669]: I0813 07:08:22.729331 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:08:22.732808 kubelet[2669]: I0813 07:08:22.732274 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:08:22.732808 kubelet[2669]: I0813 07:08:22.732320 2669 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:08:22.732808 kubelet[2669]: I0813 07:08:22.732346 2669 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:08:22.732808 kubelet[2669]: E0813 07:08:22.732411 2669 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:08:22.736506 kubelet[2669]: I0813 07:08:22.736466 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:08:22.740932 kubelet[2669]: E0813 07:08:22.740894 2669 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:08:22.747004 kubelet[2669]: I0813 07:08:22.746603 2669 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:08:22.749857 kubelet[2669]: I0813 07:08:22.748831 2669 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:08:22.832086 kubelet[2669]: I0813 07:08:22.830531 2669 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:08:22.832657 kubelet[2669]: I0813 07:08:22.832314 2669 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:08:22.832657 kubelet[2669]: I0813 07:08:22.832349 2669 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:08:22.832657 kubelet[2669]: I0813 07:08:22.832525 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:08:22.832657 kubelet[2669]: I0813 07:08:22.832536 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:08:22.832657 kubelet[2669]: I0813 07:08:22.832556 2669 policy_none.go:49] "None policy: Start" Aug 13 07:08:22.832908 kubelet[2669]: E0813 07:08:22.832674 2669 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:08:22.833813 kubelet[2669]: I0813 07:08:22.833755 2669 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:08:22.833813 kubelet[2669]: I0813 07:08:22.833811 2669 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:08:22.834061 kubelet[2669]: I0813 07:08:22.834043 2669 state_mem.go:75] "Updated machine memory state" Aug 13 07:08:22.835809 kubelet[2669]: I0813 07:08:22.835702 2669 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:08:22.835933 kubelet[2669]: I0813 07:08:22.835921 2669 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:08:22.835975 kubelet[2669]: I0813 07:08:22.835936 2669 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:08:22.836879 kubelet[2669]: I0813 07:08:22.836858 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:08:22.945347 kubelet[2669]: I0813 07:08:22.945290 2669 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:22.953403 kubelet[2669]: I0813 07:08:22.953288 2669 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:22.953855 kubelet[2669]: I0813 07:08:22.953739 2669 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.042409 kubelet[2669]: W0813 07:08:23.041852 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:08:23.043726 kubelet[2669]: W0813 07:08:23.043290 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:08:23.044679 kubelet[2669]: W0813 07:08:23.044330 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:08:23.128033 kubelet[2669]: I0813 07:08:23.127895 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3cb54c4cc5f5d6f21603c547d197cbe-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-e-55e36c071a\" (UID: \"c3cb54c4cc5f5d6f21603c547d197cbe\") " pod="kube-system/kube-scheduler-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.128033 kubelet[2669]: I0813 07:08:23.127991 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.128730 kubelet[2669]: I0813 07:08:23.128050 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.128730 kubelet[2669]: I0813 07:08:23.128075 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.128730 kubelet[2669]: I0813 07:08:23.128101 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.128730 kubelet[2669]: I0813 07:08:23.128523 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8879756649a787abf12c5e837b047025-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-e-55e36c071a\" (UID: \"8879756649a787abf12c5e837b047025\") " pod="kube-system/kube-apiserver-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.128730 kubelet[2669]: I0813 07:08:23.128541 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8879756649a787abf12c5e837b047025-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-e-55e36c071a\" (UID: \"8879756649a787abf12c5e837b047025\") " pod="kube-system/kube-apiserver-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.128976 kubelet[2669]: I0813 07:08:23.128556 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8879756649a787abf12c5e837b047025-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-e-55e36c071a\" (UID: \"8879756649a787abf12c5e837b047025\") " pod="kube-system/kube-apiserver-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.128976 kubelet[2669]: I0813 07:08:23.128572 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/121370abe61ca9bb893e2a6f1a02e8b5-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" (UID: \"121370abe61ca9bb893e2a6f1a02e8b5\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.343406 kubelet[2669]: E0813 07:08:23.342979 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:23.345096 kubelet[2669]: E0813 07:08:23.345049 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:23.346156 kubelet[2669]: E0813 07:08:23.345325 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:23.680093 kubelet[2669]: I0813 07:08:23.680034 2669 apiserver.go:52] "Watching apiserver" Aug 13 07:08:23.726278 kubelet[2669]: I0813 07:08:23.726220 2669 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:08:23.780846 kubelet[2669]: E0813 07:08:23.779009 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:23.780846 kubelet[2669]: E0813 07:08:23.779603 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:23.803343 kubelet[2669]: W0813 07:08:23.803296 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:08:23.803541 kubelet[2669]: E0813 07:08:23.803373 2669 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.5-e-55e36c071a\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" Aug 13 07:08:23.803589 kubelet[2669]: E0813 07:08:23.803563 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:23.837420 kubelet[2669]: I0813 07:08:23.837301 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-e-55e36c071a" podStartSLOduration=0.837279211 podStartE2EDuration="837.279211ms" podCreationTimestamp="2025-08-13 07:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:08:23.83727451 +0000 UTC m=+1.240437947" watchObservedRunningTime="2025-08-13 07:08:23.837279211 +0000 UTC m=+1.240442650" Aug 13 07:08:23.883637 kubelet[2669]: I0813 07:08:23.883566 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-e-55e36c071a" podStartSLOduration=0.883543707 podStartE2EDuration="883.543707ms" podCreationTimestamp="2025-08-13 07:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:08:23.860270177 +0000 UTC m=+1.263433614" watchObservedRunningTime="2025-08-13 07:08:23.883543707 +0000 UTC 
m=+1.286707141" Aug 13 07:08:23.922593 kubelet[2669]: I0813 07:08:23.922535 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-e-55e36c071a" podStartSLOduration=0.922512176 podStartE2EDuration="922.512176ms" podCreationTimestamp="2025-08-13 07:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:08:23.884045227 +0000 UTC m=+1.287208664" watchObservedRunningTime="2025-08-13 07:08:23.922512176 +0000 UTC m=+1.325675606" Aug 13 07:08:24.781899 kubelet[2669]: E0813 07:08:24.781551 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:24.782774 kubelet[2669]: E0813 07:08:24.782750 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:25.241825 kubelet[2669]: E0813 07:08:25.241339 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:28.728029 kubelet[2669]: I0813 07:08:28.727986 2669 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:08:28.735038 containerd[1586]: time="2025-08-13T07:08:28.730082043Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:08:28.736069 kubelet[2669]: I0813 07:08:28.732048 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:08:29.376027 kubelet[2669]: I0813 07:08:29.375984 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c62a1d9-bf60-4016-8623-87d98bec9fa8-lib-modules\") pod \"kube-proxy-8gm7n\" (UID: \"7c62a1d9-bf60-4016-8623-87d98bec9fa8\") " pod="kube-system/kube-proxy-8gm7n" Aug 13 07:08:29.376210 kubelet[2669]: I0813 07:08:29.376039 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c62a1d9-bf60-4016-8623-87d98bec9fa8-xtables-lock\") pod \"kube-proxy-8gm7n\" (UID: \"7c62a1d9-bf60-4016-8623-87d98bec9fa8\") " pod="kube-system/kube-proxy-8gm7n" Aug 13 07:08:29.376210 kubelet[2669]: I0813 07:08:29.376070 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c62a1d9-bf60-4016-8623-87d98bec9fa8-kube-proxy\") pod \"kube-proxy-8gm7n\" (UID: \"7c62a1d9-bf60-4016-8623-87d98bec9fa8\") " pod="kube-system/kube-proxy-8gm7n" Aug 13 07:08:29.376210 kubelet[2669]: I0813 07:08:29.376105 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxp75\" (UniqueName: \"kubernetes.io/projected/7c62a1d9-bf60-4016-8623-87d98bec9fa8-kube-api-access-hxp75\") pod \"kube-proxy-8gm7n\" (UID: \"7c62a1d9-bf60-4016-8623-87d98bec9fa8\") " pod="kube-system/kube-proxy-8gm7n" Aug 13 07:08:29.487069 kubelet[2669]: E0813 07:08:29.487016 2669 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 
07:08:29.487069 kubelet[2669]: E0813 07:08:29.487071 2669 projected.go:194] Error preparing data for projected volume kube-api-access-hxp75 for pod kube-system/kube-proxy-8gm7n: configmap "kube-root-ca.crt" not found Aug 13 07:08:29.487298 kubelet[2669]: E0813 07:08:29.487152 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7c62a1d9-bf60-4016-8623-87d98bec9fa8-kube-api-access-hxp75 podName:7c62a1d9-bf60-4016-8623-87d98bec9fa8 nodeName:}" failed. No retries permitted until 2025-08-13 07:08:29.987124878 +0000 UTC m=+7.390288307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hxp75" (UniqueName: "kubernetes.io/projected/7c62a1d9-bf60-4016-8623-87d98bec9fa8-kube-api-access-hxp75") pod "kube-proxy-8gm7n" (UID: "7c62a1d9-bf60-4016-8623-87d98bec9fa8") : configmap "kube-root-ca.crt" not found Aug 13 07:08:29.879906 kubelet[2669]: I0813 07:08:29.879849 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bf04221d-686e-4b1c-8976-0fe84795da1e-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-ljndq\" (UID: \"bf04221d-686e-4b1c-8976-0fe84795da1e\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-ljndq" Aug 13 07:08:29.879906 kubelet[2669]: I0813 07:08:29.879905 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls9t5\" (UniqueName: \"kubernetes.io/projected/bf04221d-686e-4b1c-8976-0fe84795da1e-kube-api-access-ls9t5\") pod \"tigera-operator-5bf8dfcb4-ljndq\" (UID: \"bf04221d-686e-4b1c-8976-0fe84795da1e\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-ljndq" Aug 13 07:08:30.151753 containerd[1586]: time="2025-08-13T07:08:30.151629106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-ljndq,Uid:bf04221d-686e-4b1c-8976-0fe84795da1e,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:08:30.183955 containerd[1586]: time="2025-08-13T07:08:30.183779900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:08:30.184338 containerd[1586]: time="2025-08-13T07:08:30.184165836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:30.184338 containerd[1586]: time="2025-08-13T07:08:30.184236025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:30.184592 containerd[1586]: time="2025-08-13T07:08:30.184543125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:30.257840 containerd[1586]: time="2025-08-13T07:08:30.257727220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-ljndq,Uid:bf04221d-686e-4b1c-8976-0fe84795da1e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9ef8e2e697fe32ff3dd26860c85c4ff1a34b27cc299b7878eabdd6bd2a3ad30a\"" Aug 13 07:08:30.259691 kubelet[2669]: E0813 07:08:30.259650 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:30.261390 containerd[1586]: time="2025-08-13T07:08:30.261149801Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:08:30.261826 containerd[1586]: time="2025-08-13T07:08:30.261211080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8gm7n,Uid:7c62a1d9-bf60-4016-8623-87d98bec9fa8,Namespace:kube-system,Attempt:0,}" Aug 13 07:08:30.295805 containerd[1586]: time="2025-08-13T07:08:30.295352997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:08:30.295805 containerd[1586]: time="2025-08-13T07:08:30.295474747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:30.296254 containerd[1586]: time="2025-08-13T07:08:30.296171513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:30.297548 containerd[1586]: time="2025-08-13T07:08:30.297468383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:30.348828 containerd[1586]: time="2025-08-13T07:08:30.348761249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8gm7n,Uid:7c62a1d9-bf60-4016-8623-87d98bec9fa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a7b7c59d2f3f2ac8c1f26f8cd6827dabaa4325dc5d028405f651e26d16dbd27\"" Aug 13 07:08:30.350284 kubelet[2669]: E0813 07:08:30.349849 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:30.359608 containerd[1586]: time="2025-08-13T07:08:30.356774583Z" level=info msg="CreateContainer within sandbox \"0a7b7c59d2f3f2ac8c1f26f8cd6827dabaa4325dc5d028405f651e26d16dbd27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:08:30.378139 containerd[1586]: time="2025-08-13T07:08:30.378086903Z" level=info msg="CreateContainer within sandbox \"0a7b7c59d2f3f2ac8c1f26f8cd6827dabaa4325dc5d028405f651e26d16dbd27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"456ed86cdac410b4cd48166920b75c679ebed076a0a6b7112dc5e7bb0fd871f9\"" Aug 13 07:08:30.379561 containerd[1586]: time="2025-08-13T07:08:30.378744535Z" level=info msg="StartContainer for \"456ed86cdac410b4cd48166920b75c679ebed076a0a6b7112dc5e7bb0fd871f9\"" Aug 13 07:08:30.448358 containerd[1586]: time="2025-08-13T07:08:30.448314269Z" level=info msg="StartContainer for \"456ed86cdac410b4cd48166920b75c679ebed076a0a6b7112dc5e7bb0fd871f9\" returns successfully" Aug 13 07:08:30.804427 kubelet[2669]: E0813 07:08:30.804306 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:31.664419 kubelet[2669]: E0813 07:08:31.664375 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:31.682962 kubelet[2669]: I0813 07:08:31.682879 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8gm7n" podStartSLOduration=2.682851951 podStartE2EDuration="2.682851951s" podCreationTimestamp="2025-08-13 07:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:08:30.818651631 +0000 UTC m=+8.221815081" watchObservedRunningTime="2025-08-13 07:08:31.682851951 +0000 UTC m=+9.086015402" Aug 13 07:08:31.810817 kubelet[2669]: E0813 07:08:31.810512 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:32.162069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4298386.mount: Deactivated successfully. 
Aug 13 07:08:32.815622 kubelet[2669]: E0813 07:08:32.814685 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:32.913808 containerd[1586]: time="2025-08-13T07:08:32.913728061Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:32.914723 containerd[1586]: time="2025-08-13T07:08:32.914674427Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:08:32.915497 containerd[1586]: time="2025-08-13T07:08:32.915216963Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:32.917799 containerd[1586]: time="2025-08-13T07:08:32.917750994Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:32.918639 containerd[1586]: time="2025-08-13T07:08:32.918607644Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.657309872s" Aug 13 07:08:32.918749 containerd[1586]: time="2025-08-13T07:08:32.918731195Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:08:32.923166 containerd[1586]: time="2025-08-13T07:08:32.923114210Z" level=info msg="CreateContainer within sandbox \"9ef8e2e697fe32ff3dd26860c85c4ff1a34b27cc299b7878eabdd6bd2a3ad30a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:08:32.935385 containerd[1586]: time="2025-08-13T07:08:32.935164543Z" level=info msg="CreateContainer within sandbox \"9ef8e2e697fe32ff3dd26860c85c4ff1a34b27cc299b7878eabdd6bd2a3ad30a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7f4c665d555c5469539c12a8d2cb1b062eb84646319bd717ac9d715d5ebdad42\"" Aug 13 07:08:32.936507 containerd[1586]: time="2025-08-13T07:08:32.936308493Z" level=info msg="StartContainer for \"7f4c665d555c5469539c12a8d2cb1b062eb84646319bd717ac9d715d5ebdad42\"" Aug 13 07:08:33.029593 containerd[1586]: time="2025-08-13T07:08:33.029468019Z" level=info msg="StartContainer for \"7f4c665d555c5469539c12a8d2cb1b062eb84646319bd717ac9d715d5ebdad42\" returns successfully" Aug 13 07:08:33.576710 kubelet[2669]: E0813 07:08:33.576643 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:35.256816 kubelet[2669]: E0813 07:08:35.256678 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:35.331508 kubelet[2669]: I0813 07:08:35.331329 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-ljndq" 
podStartSLOduration=3.670831582 podStartE2EDuration="6.331304323s" podCreationTimestamp="2025-08-13 07:08:29 +0000 UTC" firstStartedPulling="2025-08-13 07:08:30.259518373 +0000 UTC m=+7.662681804" lastFinishedPulling="2025-08-13 07:08:32.919991116 +0000 UTC m=+10.323154545" observedRunningTime="2025-08-13 07:08:33.831368741 +0000 UTC m=+11.234532179" watchObservedRunningTime="2025-08-13 07:08:35.331304323 +0000 UTC m=+12.734467829" Aug 13 07:08:40.277931 sudo[1801]: pam_unix(sudo:session): session closed for user root Aug 13 07:08:40.282663 sshd[1794]: pam_unix(sshd:session): session closed for user core Aug 13 07:08:40.292422 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:08:40.292765 systemd[1]: sshd@6-165.232.152.216:22-139.178.89.65:44302.service: Deactivated successfully. Aug 13 07:08:40.295300 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:08:40.306974 systemd-logind[1556]: Removed session 7. Aug 13 07:08:41.513135 update_engine[1560]: I20250813 07:08:41.511874 1560 update_attempter.cc:509] Updating boot flags... Aug 13 07:08:41.604050 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3064) Aug 13 07:08:41.713951 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3066) Aug 13 07:08:44.691774 kubelet[2669]: I0813 07:08:44.688150 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a604f3ef-9bfb-455f-984f-28d61920a075-tigera-ca-bundle\") pod \"calico-typha-6b4c84c7c5-2lww2\" (UID: \"a604f3ef-9bfb-455f-984f-28d61920a075\") " pod="calico-system/calico-typha-6b4c84c7c5-2lww2" Aug 13 07:08:44.691774 kubelet[2669]: I0813 07:08:44.688212 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a604f3ef-9bfb-455f-984f-28d61920a075-typha-certs\") pod \"calico-typha-6b4c84c7c5-2lww2\" (UID: \"a604f3ef-9bfb-455f-984f-28d61920a075\") " pod="calico-system/calico-typha-6b4c84c7c5-2lww2" Aug 13 07:08:44.691774 kubelet[2669]: I0813 07:08:44.688253 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qj57\" (UniqueName: \"kubernetes.io/projected/a604f3ef-9bfb-455f-984f-28d61920a075-kube-api-access-2qj57\") pod \"calico-typha-6b4c84c7c5-2lww2\" (UID: \"a604f3ef-9bfb-455f-984f-28d61920a075\") " pod="calico-system/calico-typha-6b4c84c7c5-2lww2" Aug 13 07:08:44.972178 kubelet[2669]: E0813 07:08:44.972045 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:44.979986 containerd[1586]: time="2025-08-13T07:08:44.979078185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b4c84c7c5-2lww2,Uid:a604f3ef-9bfb-455f-984f-28d61920a075,Namespace:calico-system,Attempt:0,}" Aug 13 07:08:44.990733 kubelet[2669]: I0813 07:08:44.990663 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztl6g\" (UniqueName: \"kubernetes.io/projected/a2744b9b-c482-4274-9cae-3f30aef053eb-kube-api-access-ztl6g\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.990733 kubelet[2669]: I0813 07:08:44.990720 2669 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2744b9b-c482-4274-9cae-3f30aef053eb-tigera-ca-bundle\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.990733 kubelet[2669]: I0813 07:08:44.990746 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-var-lib-calico\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.999520 kubelet[2669]: I0813 07:08:44.990771 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-cni-net-dir\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.999520 kubelet[2669]: I0813 07:08:44.990810 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-var-run-calico\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.999520 kubelet[2669]: I0813 07:08:44.990850 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-cni-bin-dir\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.999520 kubelet[2669]: I0813 07:08:44.990874 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-policysync\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.999520 kubelet[2669]: I0813 07:08:44.990903 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-xtables-lock\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.999664 kubelet[2669]: I0813 07:08:44.990928 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-lib-modules\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.999664 kubelet[2669]: I0813 07:08:44.990950 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2744b9b-c482-4274-9cae-3f30aef053eb-node-certs\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:44.999664 kubelet[2669]: I0813 07:08:44.990979 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-cni-log-dir\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:45.003888 kubelet[2669]: I0813 07:08:45.000018 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2744b9b-c482-4274-9cae-3f30aef053eb-flexvol-driver-host\") pod \"calico-node-lpvss\" (UID: \"a2744b9b-c482-4274-9cae-3f30aef053eb\") " pod="calico-system/calico-node-lpvss" Aug 13 07:08:45.049897 containerd[1586]: time="2025-08-13T07:08:45.048177596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:08:45.050067 containerd[1586]: time="2025-08-13T07:08:45.049857095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:45.050067 containerd[1586]: time="2025-08-13T07:08:45.049887894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:45.050243 containerd[1586]: time="2025-08-13T07:08:45.050051405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:45.116968 kubelet[2669]: E0813 07:08:45.116925 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.116968 kubelet[2669]: W0813 07:08:45.116958 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.117287 kubelet[2669]: E0813 07:08:45.117010 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.119656 kubelet[2669]: E0813 07:08:45.118877 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.119656 kubelet[2669]: W0813 07:08:45.118899 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.119656 kubelet[2669]: E0813 07:08:45.118923 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.127544 kubelet[2669]: E0813 07:08:45.127493 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.127915 kubelet[2669]: W0813 07:08:45.127769 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.128073 kubelet[2669]: E0813 07:08:45.127988 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.191610 kubelet[2669]: E0813 07:08:45.191538 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gcv6" podUID="bd634c8d-a482-4f95-9b3b-58b3c5eafd08" Aug 13 07:08:45.232632 containerd[1586]: time="2025-08-13T07:08:45.232352927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b4c84c7c5-2lww2,Uid:a604f3ef-9bfb-455f-984f-28d61920a075,Namespace:calico-system,Attempt:0,} returns sandbox id \"01955b185c5b3f072afb732a9fa814f54e233d309242b80c9a584c5a7558764f\"" Aug 13 07:08:45.237721 kubelet[2669]: E0813 07:08:45.237674 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:45.242187 containerd[1586]: time="2025-08-13T07:08:45.241642395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:08:45.264667 containerd[1586]: time="2025-08-13T07:08:45.264571069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpvss,Uid:a2744b9b-c482-4274-9cae-3f30aef053eb,Namespace:calico-system,Attempt:0,}" Aug 13 07:08:45.281948 kubelet[2669]: E0813 07:08:45.281911 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.283065 kubelet[2669]: W0813 07:08:45.282852 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.283065 kubelet[2669]: E0813 07:08:45.282907 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.285155 kubelet[2669]: E0813 07:08:45.284683 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.285155 kubelet[2669]: W0813 07:08:45.284710 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.285155 kubelet[2669]: E0813 07:08:45.284741 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.286993 kubelet[2669]: E0813 07:08:45.286820 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.286993 kubelet[2669]: W0813 07:08:45.286839 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.286993 kubelet[2669]: E0813 07:08:45.286864 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.287812 kubelet[2669]: E0813 07:08:45.287516 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.287812 kubelet[2669]: W0813 07:08:45.287534 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.287812 kubelet[2669]: E0813 07:08:45.287557 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.289049 kubelet[2669]: E0813 07:08:45.289033 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.289337 kubelet[2669]: W0813 07:08:45.289316 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.289518 kubelet[2669]: E0813 07:08:45.289408 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.289845 kubelet[2669]: E0813 07:08:45.289828 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.290083 kubelet[2669]: W0813 07:08:45.289944 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.290083 kubelet[2669]: E0813 07:08:45.289968 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.290513 kubelet[2669]: E0813 07:08:45.290488 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.290710 kubelet[2669]: W0813 07:08:45.290587 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.290710 kubelet[2669]: E0813 07:08:45.290607 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.291013 kubelet[2669]: E0813 07:08:45.291000 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.291212 kubelet[2669]: W0813 07:08:45.291082 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.291212 kubelet[2669]: E0813 07:08:45.291100 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.291579 kubelet[2669]: E0813 07:08:45.291471 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.291579 kubelet[2669]: W0813 07:08:45.291482 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.291579 kubelet[2669]: E0813 07:08:45.291494 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.292104 kubelet[2669]: E0813 07:08:45.291958 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.292104 kubelet[2669]: W0813 07:08:45.291970 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.292104 kubelet[2669]: E0813 07:08:45.291982 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.293045 kubelet[2669]: E0813 07:08:45.292890 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.293045 kubelet[2669]: W0813 07:08:45.292918 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.293045 kubelet[2669]: E0813 07:08:45.292932 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.293384 kubelet[2669]: E0813 07:08:45.293292 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.293384 kubelet[2669]: W0813 07:08:45.293304 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.293384 kubelet[2669]: E0813 07:08:45.293329 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.293842 kubelet[2669]: E0813 07:08:45.293738 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.293842 kubelet[2669]: W0813 07:08:45.293749 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.293842 kubelet[2669]: E0813 07:08:45.293761 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.296817 kubelet[2669]: E0813 07:08:45.295987 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.296817 kubelet[2669]: W0813 07:08:45.296025 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.296817 kubelet[2669]: E0813 07:08:45.296048 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.296817 kubelet[2669]: E0813 07:08:45.296335 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.296817 kubelet[2669]: W0813 07:08:45.296352 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.296817 kubelet[2669]: E0813 07:08:45.296373 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.296817 kubelet[2669]: E0813 07:08:45.296603 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.296817 kubelet[2669]: W0813 07:08:45.296614 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.296817 kubelet[2669]: E0813 07:08:45.296629 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.298985 kubelet[2669]: E0813 07:08:45.297371 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.298985 kubelet[2669]: W0813 07:08:45.297384 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.298985 kubelet[2669]: E0813 07:08:45.297508 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.298985 kubelet[2669]: E0813 07:08:45.298765 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.298985 kubelet[2669]: W0813 07:08:45.298811 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.298985 kubelet[2669]: E0813 07:08:45.298838 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.299675 kubelet[2669]: E0813 07:08:45.299496 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.299675 kubelet[2669]: W0813 07:08:45.299516 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.299675 kubelet[2669]: E0813 07:08:45.299549 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.300128 kubelet[2669]: E0813 07:08:45.300115 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.300335 kubelet[2669]: W0813 07:08:45.300180 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.300335 kubelet[2669]: E0813 07:08:45.300202 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.303317 kubelet[2669]: E0813 07:08:45.303285 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.303587 kubelet[2669]: W0813 07:08:45.303556 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.303748 kubelet[2669]: E0813 07:08:45.303714 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.304093 kubelet[2669]: I0813 07:08:45.303920 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bd634c8d-a482-4f95-9b3b-58b3c5eafd08-registration-dir\") pod \"csi-node-driver-2gcv6\" (UID: \"bd634c8d-a482-4f95-9b3b-58b3c5eafd08\") " pod="calico-system/csi-node-driver-2gcv6" Aug 13 07:08:45.304889 kubelet[2669]: E0813 07:08:45.304869 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.304889 kubelet[2669]: W0813 07:08:45.304887 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.305133 kubelet[2669]: E0813 07:08:45.304911 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.305133 kubelet[2669]: E0813 07:08:45.305128 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.305199 kubelet[2669]: W0813 07:08:45.305167 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.305199 kubelet[2669]: E0813 07:08:45.305188 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.306360 kubelet[2669]: E0813 07:08:45.306129 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.306360 kubelet[2669]: W0813 07:08:45.306145 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.306360 kubelet[2669]: E0813 07:08:45.306159 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.306360 kubelet[2669]: I0813 07:08:45.306196 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd634c8d-a482-4f95-9b3b-58b3c5eafd08-kubelet-dir\") pod \"csi-node-driver-2gcv6\" (UID: \"bd634c8d-a482-4f95-9b3b-58b3c5eafd08\") " pod="calico-system/csi-node-driver-2gcv6" Aug 13 07:08:45.306919 kubelet[2669]: E0813 07:08:45.306757 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.306919 kubelet[2669]: W0813 07:08:45.306773 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.306919 kubelet[2669]: E0813 07:08:45.306816 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.307720 kubelet[2669]: E0813 07:08:45.307508 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.307720 kubelet[2669]: W0813 07:08:45.307532 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.307720 kubelet[2669]: E0813 07:08:45.307559 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.308440 kubelet[2669]: E0813 07:08:45.308254 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.308440 kubelet[2669]: W0813 07:08:45.308268 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.308440 kubelet[2669]: E0813 07:08:45.308289 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.308440 kubelet[2669]: I0813 07:08:45.308411 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bd634c8d-a482-4f95-9b3b-58b3c5eafd08-socket-dir\") pod \"csi-node-driver-2gcv6\" (UID: \"bd634c8d-a482-4f95-9b3b-58b3c5eafd08\") " pod="calico-system/csi-node-driver-2gcv6" Aug 13 07:08:45.310096 kubelet[2669]: E0813 07:08:45.309932 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.310096 kubelet[2669]: W0813 07:08:45.309951 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.310096 kubelet[2669]: E0813 07:08:45.309973 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.310492 kubelet[2669]: E0813 07:08:45.310241 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.310492 kubelet[2669]: W0813 07:08:45.310254 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.310492 kubelet[2669]: E0813 07:08:45.310356 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.310594 kubelet[2669]: E0813 07:08:45.310510 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.310594 kubelet[2669]: W0813 07:08:45.310518 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.310594 kubelet[2669]: E0813 07:08:45.310529 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.310594 kubelet[2669]: I0813 07:08:45.310559 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bd634c8d-a482-4f95-9b3b-58b3c5eafd08-varrun\") pod \"csi-node-driver-2gcv6\" (UID: \"bd634c8d-a482-4f95-9b3b-58b3c5eafd08\") " pod="calico-system/csi-node-driver-2gcv6" Aug 13 07:08:45.310946 kubelet[2669]: E0813 07:08:45.310792 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.310946 kubelet[2669]: W0813 07:08:45.310806 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.310946 kubelet[2669]: E0813 07:08:45.310818 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.310946 kubelet[2669]: I0813 07:08:45.310839 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhk9\" (UniqueName: \"kubernetes.io/projected/bd634c8d-a482-4f95-9b3b-58b3c5eafd08-kube-api-access-6hhk9\") pod \"csi-node-driver-2gcv6\" (UID: \"bd634c8d-a482-4f95-9b3b-58b3c5eafd08\") " pod="calico-system/csi-node-driver-2gcv6" Aug 13 07:08:45.311443 kubelet[2669]: E0813 07:08:45.311056 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.311443 kubelet[2669]: W0813 07:08:45.311067 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.311443 kubelet[2669]: E0813 07:08:45.311082 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.311443 kubelet[2669]: E0813 07:08:45.311300 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.311443 kubelet[2669]: W0813 07:08:45.311310 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.311443 kubelet[2669]: E0813 07:08:45.311324 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.312056 kubelet[2669]: E0813 07:08:45.311865 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.312056 kubelet[2669]: W0813 07:08:45.311882 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.312056 kubelet[2669]: E0813 07:08:45.311895 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:45.312220 kubelet[2669]: E0813 07:08:45.312126 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.312220 kubelet[2669]: W0813 07:08:45.312139 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.312220 kubelet[2669]: E0813 07:08:45.312153 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.314726 containerd[1586]: time="2025-08-13T07:08:45.314482872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:08:45.314726 containerd[1586]: time="2025-08-13T07:08:45.314586897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:08:45.314726 containerd[1586]: time="2025-08-13T07:08:45.314607545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:45.315037 containerd[1586]: time="2025-08-13T07:08:45.314813525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:08:45.396812 containerd[1586]: time="2025-08-13T07:08:45.395757168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpvss,Uid:a2744b9b-c482-4274-9cae-3f30aef053eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd6533cba8c0984cab314b6c899e90625670f7c07ac14532bbf6008bd174a5e5\"" Aug 13 07:08:45.412564 kubelet[2669]: E0813 07:08:45.412435 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.412564 kubelet[2669]: W0813 07:08:45.412464 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.412564 kubelet[2669]: E0813 07:08:45.412500 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:45.412997 kubelet[2669]: E0813 07:08:45.412818 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:45.412997 kubelet[2669]: W0813 07:08:45.412832 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:45.412997 kubelet[2669]: E0813 07:08:45.412849 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [kubelet[2669]: the same three-line FlexVolume probe failure (driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "executable file not found in $PATH", plugins.go:691 "Error dynamically probing plugins ... nodeagent~uds") repeats with identical content from 07:08:45.413 through 07:08:45.435] Aug 13 07:08:46.539122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657514187.mount: Deactivated successfully.
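The repeated kubelet records above come from the FlexVolume dynamic plugin prober: it tries to execute nodeagent~uds/uds with the argument init, the binary is not installed yet ("executable file not found in $PATH", output ""), and unmarshalling the empty output fails with "unexpected end of JSON input". As a minimal sketch of what a FlexVolume driver's init call is expected to print on stdout (this is a hypothetical stand-in, not the actual Calico uds driver):

// flexvol_init_sketch.go - minimal sketch of a FlexVolume driver's "init" response.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The kubelet prober invokes the driver as: <driver> init
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Calls the driver does not implement should still answer with valid JSON,
	// otherwise callers hit the same "unexpected end of JSON input" path.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}

Any non-empty, well-formed status object of this shape would stop the unmarshal errors; the warnings here persist only until the driver binary is actually placed in the plugin directory.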
Aug 13 07:08:46.734156 kubelet[2669]: E0813 07:08:46.734106 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gcv6" podUID="bd634c8d-a482-4f95-9b3b-58b3c5eafd08" Aug 13 07:08:47.432222 containerd[1586]: time="2025-08-13T07:08:47.430465051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:47.441214 containerd[1586]: time="2025-08-13T07:08:47.440834521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 07:08:47.447819 containerd[1586]: time="2025-08-13T07:08:47.442937868Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:47.559943 containerd[1586]: time="2025-08-13T07:08:47.559893256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:47.564189 containerd[1586]: time="2025-08-13T07:08:47.564136632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.322439195s" Aug 13 07:08:47.564423 containerd[1586]: time="2025-08-13T07:08:47.564398034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:08:47.568891 containerd[1586]: time="2025-08-13T07:08:47.568832447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:08:47.676026 containerd[1586]: time="2025-08-13T07:08:47.674606660Z" level=info msg="CreateContainer within sandbox \"01955b185c5b3f072afb732a9fa814f54e233d309242b80c9a584c5a7558764f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:08:47.709759 containerd[1586]: time="2025-08-13T07:08:47.709397362Z" level=info msg="CreateContainer within sandbox \"01955b185c5b3f072afb732a9fa814f54e233d309242b80c9a584c5a7558764f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"512ab3b7b5669dedbdb16ef7df0b990e805310b72b9e6623b746822cf93639e8\"" Aug 13 07:08:47.714352 containerd[1586]: time="2025-08-13T07:08:47.713669110Z" level=info msg="StartContainer for \"512ab3b7b5669dedbdb16ef7df0b990e805310b72b9e6623b746822cf93639e8\"" Aug 13 07:08:47.717452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778617032.mount: Deactivated successfully. 
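The containerd entries above record a complete CRI image pull: ImageCreate events for the tag, the image ID, and the repo digest, then "Pulled image ... in 2.322439195s", followed by a CreateContainer/StartContainer pair for calico-typha. A rough sketch of the same pull done directly against containerd with its Go client is below; the socket path and the k8s.io namespace are the usual defaults and are assumed here, and the snippet is illustrative rather than what the kubelet itself calls.

// pull_typha_sketch.go - sketch of pulling the image named in the log via the containerd Go client.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; adjust if containerd is configured differently.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s digest=%s", img.Name(), img.Target().Digest)
}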
Aug 13 07:08:47.918889 containerd[1586]: time="2025-08-13T07:08:47.916868491Z" level=info msg="StartContainer for \"512ab3b7b5669dedbdb16ef7df0b990e805310b72b9e6623b746822cf93639e8\" returns successfully" Aug 13 07:08:48.733077 kubelet[2669]: E0813 07:08:48.733020 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gcv6" podUID="bd634c8d-a482-4f95-9b3b-58b3c5eafd08" Aug 13 07:08:48.879606 kubelet[2669]: E0813 07:08:48.876694 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:48.930821 kubelet[2669]: E0813 07:08:48.930423 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:48.933413 kubelet[2669]: W0813 07:08:48.931992 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:48.933413 kubelet[2669]: E0813 07:08:48.932063 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:48.937364 kubelet[2669]: E0813 07:08:48.936953 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:48.937364 kubelet[2669]: W0813 07:08:48.936986 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:48.937364 kubelet[2669]: E0813 07:08:48.937019 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:48.940261 kubelet[2669]: E0813 07:08:48.940219 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:48.940261 kubelet[2669]: W0813 07:08:48.940247 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:48.940261 kubelet[2669]: E0813 07:08:48.940274 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:08:48.943815 kubelet[2669]: E0813 07:08:48.943377 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:48.943815 kubelet[2669]: W0813 07:08:48.943408 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:48.943815 kubelet[2669]: E0813 07:08:48.943435 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [kubelet[2669]: the same three-line FlexVolume probe failure repeats with identical content from 07:08:48.943 through 07:08:49.002] Aug 13 07:08:49.007882 kubelet[2669]: E0813 07:08:49.006095 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:08:49.007882 kubelet[2669]: W0813 07:08:49.006133 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:08:49.007882 kubelet[2669]: E0813 07:08:49.006169 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:08:49.054828 containerd[1586]: time="2025-08-13T07:08:49.054734671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:49.056585 containerd[1586]: time="2025-08-13T07:08:49.056513405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:08:49.090293 containerd[1586]: time="2025-08-13T07:08:49.090082176Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:49.095163 containerd[1586]: time="2025-08-13T07:08:49.094949932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:49.098855 containerd[1586]: time="2025-08-13T07:08:49.098707804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.529606311s" Aug 13 07:08:49.099033 containerd[1586]: time="2025-08-13T07:08:49.098969259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:08:49.105924 containerd[1586]: time="2025-08-13T07:08:49.105657008Z" level=info msg="CreateContainer within sandbox \"dd6533cba8c0984cab314b6c899e90625670f7c07ac14532bbf6008bd174a5e5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:08:49.124976 containerd[1586]: time="2025-08-13T07:08:49.124919626Z" level=info msg="CreateContainer within sandbox \"dd6533cba8c0984cab314b6c899e90625670f7c07ac14532bbf6008bd174a5e5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e70e4376043aa4dc9b59c4211f5d2b2733889a931d9ef84556a2d0321278e52f\"" Aug 13 07:08:49.126804 containerd[1586]: time="2025-08-13T07:08:49.126737356Z" level=info msg="StartContainer for \"e70e4376043aa4dc9b59c4211f5d2b2733889a931d9ef84556a2d0321278e52f\"" Aug 13 07:08:49.200842 systemd[1]: run-containerd-runc-k8s.io-e70e4376043aa4dc9b59c4211f5d2b2733889a931d9ef84556a2d0321278e52f-runc.l53M87.mount: Deactivated successfully. 
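The pod2daemon-flexvol image pulled and started here is the Calico step that installs the uds driver binary into the kubelet's FlexVolume plugin directory, which is why the probe errors above eventually stop recurring. The kubelet-side probe amounts to roughly the following: run the driver with init and unmarshal its stdout. This is a simplified sketch of that check using the path taken from the log; it is not the kubelet's actual prober code.

// probe_flexvol_sketch.go - simplified version of the check behind the log errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Driver path as it appears in the kubelet log lines.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	// Step 1: run "<driver> init". Before pod2daemon-flexvol has copied the binary
	// into place, this fails with "executable file not found in $PATH".
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("FlexVolume driver call failed:", err)
		return
	}

	// Step 2: unmarshal stdout. Empty output is what produces
	// "unexpected end of JSON input" in the kubelet log.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal driver output:", err)
		return
	}
	fmt.Println("driver init status:", status["status"])
}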
Aug 13 07:08:49.237852 containerd[1586]: time="2025-08-13T07:08:49.237704779Z" level=info msg="StartContainer for \"e70e4376043aa4dc9b59c4211f5d2b2733889a931d9ef84556a2d0321278e52f\" returns successfully" Aug 13 07:08:49.348996 containerd[1586]: time="2025-08-13T07:08:49.311849568Z" level=info msg="shim disconnected" id=e70e4376043aa4dc9b59c4211f5d2b2733889a931d9ef84556a2d0321278e52f namespace=k8s.io Aug 13 07:08:49.348996 containerd[1586]: time="2025-08-13T07:08:49.348720819Z" level=warning msg="cleaning up after shim disconnected" id=e70e4376043aa4dc9b59c4211f5d2b2733889a931d9ef84556a2d0321278e52f namespace=k8s.io Aug 13 07:08:49.348996 containerd[1586]: time="2025-08-13T07:08:49.348741886Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:08:49.583837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e70e4376043aa4dc9b59c4211f5d2b2733889a931d9ef84556a2d0321278e52f-rootfs.mount: Deactivated successfully. Aug 13 07:08:49.883862 kubelet[2669]: I0813 07:08:49.883341 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:08:49.885967 kubelet[2669]: E0813 07:08:49.885515 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:49.886862 containerd[1586]: time="2025-08-13T07:08:49.886560947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:08:49.911870 kubelet[2669]: I0813 07:08:49.911456 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b4c84c7c5-2lww2" podStartSLOduration=3.5841713840000002 podStartE2EDuration="5.911435956s" podCreationTimestamp="2025-08-13 07:08:44 +0000 UTC" firstStartedPulling="2025-08-13 07:08:45.240358437 +0000 UTC m=+22.643521853" lastFinishedPulling="2025-08-13 07:08:47.567622989 +0000 UTC m=+24.970786425" observedRunningTime="2025-08-13 07:08:48.914877805 +0000 UTC m=+26.318041269" watchObservedRunningTime="2025-08-13 07:08:49.911435956 +0000 UTC m=+27.314599427" Aug 13 07:08:50.734364 kubelet[2669]: E0813 07:08:50.734073 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gcv6" podUID="bd634c8d-a482-4f95-9b3b-58b3c5eafd08" Aug 13 07:08:52.733907 kubelet[2669]: E0813 07:08:52.733221 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gcv6" podUID="bd634c8d-a482-4f95-9b3b-58b3c5eafd08" Aug 13 07:08:52.966838 containerd[1586]: time="2025-08-13T07:08:52.966791500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:52.968848 containerd[1586]: time="2025-08-13T07:08:52.968035271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:08:52.969051 containerd[1586]: time="2025-08-13T07:08:52.968905281Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 
07:08:52.972821 containerd[1586]: time="2025-08-13T07:08:52.972741993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:08:52.973679 containerd[1586]: time="2025-08-13T07:08:52.973648381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.087052155s" Aug 13 07:08:52.973763 containerd[1586]: time="2025-08-13T07:08:52.973684818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:08:52.979671 containerd[1586]: time="2025-08-13T07:08:52.979003465Z" level=info msg="CreateContainer within sandbox \"dd6533cba8c0984cab314b6c899e90625670f7c07ac14532bbf6008bd174a5e5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:08:53.004364 containerd[1586]: time="2025-08-13T07:08:53.004130572Z" level=info msg="CreateContainer within sandbox \"dd6533cba8c0984cab314b6c899e90625670f7c07ac14532bbf6008bd174a5e5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"730f87f17cd52869d7ddf2c9a7a2ef3e344b4eba5df0c0c0e28b53d63cf479b3\"" Aug 13 07:08:53.006868 containerd[1586]: time="2025-08-13T07:08:53.006823788Z" level=info msg="StartContainer for \"730f87f17cd52869d7ddf2c9a7a2ef3e344b4eba5df0c0c0e28b53d63cf479b3\"" Aug 13 07:08:53.077561 systemd[1]: run-containerd-runc-k8s.io-730f87f17cd52869d7ddf2c9a7a2ef3e344b4eba5df0c0c0e28b53d63cf479b3-runc.HRM7CM.mount: Deactivated successfully. Aug 13 07:08:53.119462 containerd[1586]: time="2025-08-13T07:08:53.119363655Z" level=info msg="StartContainer for \"730f87f17cd52869d7ddf2c9a7a2ef3e344b4eba5df0c0c0e28b53d63cf479b3\" returns successfully" Aug 13 07:08:53.806827 containerd[1586]: time="2025-08-13T07:08:53.806069453Z" level=info msg="shim disconnected" id=730f87f17cd52869d7ddf2c9a7a2ef3e344b4eba5df0c0c0e28b53d63cf479b3 namespace=k8s.io Aug 13 07:08:53.806827 containerd[1586]: time="2025-08-13T07:08:53.806151281Z" level=warning msg="cleaning up after shim disconnected" id=730f87f17cd52869d7ddf2c9a7a2ef3e344b4eba5df0c0c0e28b53d63cf479b3 namespace=k8s.io Aug 13 07:08:53.806827 containerd[1586]: time="2025-08-13T07:08:53.806166051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:08:53.864987 kubelet[2669]: I0813 07:08:53.864864 2669 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:08:53.976805 containerd[1586]: time="2025-08-13T07:08:53.976515782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:08:54.001754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-730f87f17cd52869d7ddf2c9a7a2ef3e344b4eba5df0c0c0e28b53d63cf479b3-rootfs.mount: Deactivated successfully. 
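install-cni, like flexvol-driver before it, is a run-to-completion container, so the "shim disconnected" / "cleaning up dead shim" messages after it returns are expected rather than a failure. Once it has written a CNI configuration, the kubelet's earlier "cni plugin not initialized" condition clears and the node status flips to ready, as the kubelet_node_status line above shows. The readiness condition boils down to a CNI config file existing in the CNI conf directory; a small sketch of that check follows. /etc/cni/net.d is the conventional default location, and the exact file name Calico writes (e.g. 10-calico.conflist) is an assumption, not something shown in this log.

// cni_conf_check_sketch.go - sketch of the condition behind "cni plugin not initialized".
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Conventional default; configurable via the container runtime's CRI settings.
	confDir := "/etc/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("network not ready: cannot read", confDir, "-", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") || strings.HasSuffix(name, ".json") {
			fmt.Println("CNI config present:", filepath.Join(confDir, name))
			return
		}
	}
	fmt.Println("network not ready: no CNI config files in", confDir)
}

The sandbox failures further below ("stat /var/lib/calico/nodename: no such file or directory") are the same ordering issue seen from the other side: pod networking setup is attempted before the calico-node container has written its node state file.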
Aug 13 07:08:54.004939 kubelet[2669]: I0813 07:08:54.002582 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csv6k\" (UniqueName: \"kubernetes.io/projected/64736ca2-7b5a-4fe7-b839-c4d57531ab36-kube-api-access-csv6k\") pod \"whisker-5b967454b-hf5vt\" (UID: \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\") " pod="calico-system/whisker-5b967454b-hf5vt" Aug 13 07:08:54.006207 kubelet[2669]: I0813 07:08:54.005760 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/30df6160-80aa-4f0d-92aa-0f0db6a04acd-goldmane-key-pair\") pod \"goldmane-58fd7646b9-v27pf\" (UID: \"30df6160-80aa-4f0d-92aa-0f0db6a04acd\") " pod="calico-system/goldmane-58fd7646b9-v27pf" Aug 13 07:08:54.006207 kubelet[2669]: I0813 07:08:54.005856 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c568768c-f9a9-47ff-bd2e-11cbdcfd7596-tigera-ca-bundle\") pod \"calico-kube-controllers-5df6b56ccd-xwrlr\" (UID: \"c568768c-f9a9-47ff-bd2e-11cbdcfd7596\") " pod="calico-system/calico-kube-controllers-5df6b56ccd-xwrlr" Aug 13 07:08:54.006207 kubelet[2669]: I0813 07:08:54.006172 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99g8\" (UniqueName: \"kubernetes.io/projected/20826453-f382-41f8-a572-6376d276da48-kube-api-access-g99g8\") pod \"calico-apiserver-7c4567fdc-7fml2\" (UID: \"20826453-f382-41f8-a572-6376d276da48\") " pod="calico-apiserver/calico-apiserver-7c4567fdc-7fml2" Aug 13 07:08:54.008066 kubelet[2669]: I0813 07:08:54.007639 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjqj\" (UniqueName: \"kubernetes.io/projected/fe78265f-f9af-4623-a337-884c31c36ef2-kube-api-access-dzjqj\") pod \"coredns-7c65d6cfc9-ctkfn\" (UID: \"fe78265f-f9af-4623-a337-884c31c36ef2\") " pod="kube-system/coredns-7c65d6cfc9-ctkfn" Aug 13 07:08:54.008066 kubelet[2669]: I0813 07:08:54.007694 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30df6160-80aa-4f0d-92aa-0f0db6a04acd-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-v27pf\" (UID: \"30df6160-80aa-4f0d-92aa-0f0db6a04acd\") " pod="calico-system/goldmane-58fd7646b9-v27pf" Aug 13 07:08:54.017157 kubelet[2669]: I0813 07:08:54.007728 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8x44\" (UniqueName: \"kubernetes.io/projected/30df6160-80aa-4f0d-92aa-0f0db6a04acd-kube-api-access-b8x44\") pod \"goldmane-58fd7646b9-v27pf\" (UID: \"30df6160-80aa-4f0d-92aa-0f0db6a04acd\") " pod="calico-system/goldmane-58fd7646b9-v27pf" Aug 13 07:08:54.020061 kubelet[2669]: I0813 07:08:54.019205 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcsf6\" (UniqueName: \"kubernetes.io/projected/f0685533-643f-4d9d-85a8-1d45cf68c77e-kube-api-access-jcsf6\") pod \"calico-apiserver-5b5498b48d-ns55v\" (UID: \"f0685533-643f-4d9d-85a8-1d45cf68c77e\") " pod="calico-apiserver/calico-apiserver-5b5498b48d-ns55v" Aug 13 07:08:54.020061 kubelet[2669]: I0813 07:08:54.019251 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/fe78265f-f9af-4623-a337-884c31c36ef2-config-volume\") pod \"coredns-7c65d6cfc9-ctkfn\" (UID: \"fe78265f-f9af-4623-a337-884c31c36ef2\") " pod="kube-system/coredns-7c65d6cfc9-ctkfn" Aug 13 07:08:54.020061 kubelet[2669]: I0813 07:08:54.019282 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20826453-f382-41f8-a572-6376d276da48-calico-apiserver-certs\") pod \"calico-apiserver-7c4567fdc-7fml2\" (UID: \"20826453-f382-41f8-a572-6376d276da48\") " pod="calico-apiserver/calico-apiserver-7c4567fdc-7fml2" Aug 13 07:08:54.020061 kubelet[2669]: I0813 07:08:54.019298 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72e07df5-008f-4b2d-94ac-56f5a048d8f4-config-volume\") pod \"coredns-7c65d6cfc9-5njd8\" (UID: \"72e07df5-008f-4b2d-94ac-56f5a048d8f4\") " pod="kube-system/coredns-7c65d6cfc9-5njd8" Aug 13 07:08:54.020061 kubelet[2669]: I0813 07:08:54.019313 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmz6\" (UniqueName: \"kubernetes.io/projected/72e07df5-008f-4b2d-94ac-56f5a048d8f4-kube-api-access-rqmz6\") pod \"coredns-7c65d6cfc9-5njd8\" (UID: \"72e07df5-008f-4b2d-94ac-56f5a048d8f4\") " pod="kube-system/coredns-7c65d6cfc9-5njd8" Aug 13 07:08:54.020344 kubelet[2669]: I0813 07:08:54.019327 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0685533-643f-4d9d-85a8-1d45cf68c77e-calico-apiserver-certs\") pod \"calico-apiserver-5b5498b48d-ns55v\" (UID: \"f0685533-643f-4d9d-85a8-1d45cf68c77e\") " pod="calico-apiserver/calico-apiserver-5b5498b48d-ns55v" Aug 13 07:08:54.020344 kubelet[2669]: I0813 07:08:54.019344 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99077a63-9db7-4cec-a6a2-af9cb28b57de-calico-apiserver-certs\") pod \"calico-apiserver-5b5498b48d-m2ftn\" (UID: \"99077a63-9db7-4cec-a6a2-af9cb28b57de\") " pod="calico-apiserver/calico-apiserver-5b5498b48d-m2ftn" Aug 13 07:08:54.020344 kubelet[2669]: I0813 07:08:54.019359 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94rbl\" (UniqueName: \"kubernetes.io/projected/99077a63-9db7-4cec-a6a2-af9cb28b57de-kube-api-access-94rbl\") pod \"calico-apiserver-5b5498b48d-m2ftn\" (UID: \"99077a63-9db7-4cec-a6a2-af9cb28b57de\") " pod="calico-apiserver/calico-apiserver-5b5498b48d-m2ftn" Aug 13 07:08:54.020344 kubelet[2669]: I0813 07:08:54.019385 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/64736ca2-7b5a-4fe7-b839-c4d57531ab36-whisker-backend-key-pair\") pod \"whisker-5b967454b-hf5vt\" (UID: \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\") " pod="calico-system/whisker-5b967454b-hf5vt" Aug 13 07:08:54.020344 kubelet[2669]: I0813 07:08:54.019478 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64736ca2-7b5a-4fe7-b839-c4d57531ab36-whisker-ca-bundle\") pod \"whisker-5b967454b-hf5vt\" (UID: \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\") " 
pod="calico-system/whisker-5b967454b-hf5vt" Aug 13 07:08:54.020480 kubelet[2669]: I0813 07:08:54.019497 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30df6160-80aa-4f0d-92aa-0f0db6a04acd-config\") pod \"goldmane-58fd7646b9-v27pf\" (UID: \"30df6160-80aa-4f0d-92aa-0f0db6a04acd\") " pod="calico-system/goldmane-58fd7646b9-v27pf" Aug 13 07:08:54.020480 kubelet[2669]: I0813 07:08:54.019514 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5dr\" (UniqueName: \"kubernetes.io/projected/c568768c-f9a9-47ff-bd2e-11cbdcfd7596-kube-api-access-dk5dr\") pod \"calico-kube-controllers-5df6b56ccd-xwrlr\" (UID: \"c568768c-f9a9-47ff-bd2e-11cbdcfd7596\") " pod="calico-system/calico-kube-controllers-5df6b56ccd-xwrlr" Aug 13 07:08:54.229393 kubelet[2669]: E0813 07:08:54.229356 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:54.233232 containerd[1586]: time="2025-08-13T07:08:54.232998307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5njd8,Uid:72e07df5-008f-4b2d-94ac-56f5a048d8f4,Namespace:kube-system,Attempt:0,}" Aug 13 07:08:54.248821 containerd[1586]: time="2025-08-13T07:08:54.248661037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5498b48d-ns55v,Uid:f0685533-643f-4d9d-85a8-1d45cf68c77e,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:08:54.252749 kubelet[2669]: E0813 07:08:54.252541 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:08:54.255198 containerd[1586]: time="2025-08-13T07:08:54.253480603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ctkfn,Uid:fe78265f-f9af-4623-a337-884c31c36ef2,Namespace:kube-system,Attempt:0,}" Aug 13 07:08:54.265077 containerd[1586]: time="2025-08-13T07:08:54.265031527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df6b56ccd-xwrlr,Uid:c568768c-f9a9-47ff-bd2e-11cbdcfd7596,Namespace:calico-system,Attempt:0,}" Aug 13 07:08:54.269019 containerd[1586]: time="2025-08-13T07:08:54.268670767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-v27pf,Uid:30df6160-80aa-4f0d-92aa-0f0db6a04acd,Namespace:calico-system,Attempt:0,}" Aug 13 07:08:54.270945 containerd[1586]: time="2025-08-13T07:08:54.270911857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b967454b-hf5vt,Uid:64736ca2-7b5a-4fe7-b839-c4d57531ab36,Namespace:calico-system,Attempt:0,}" Aug 13 07:08:54.308525 containerd[1586]: time="2025-08-13T07:08:54.308161819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5498b48d-m2ftn,Uid:99077a63-9db7-4cec-a6a2-af9cb28b57de,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:08:54.344850 containerd[1586]: time="2025-08-13T07:08:54.344801662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c4567fdc-7fml2,Uid:20826453-f382-41f8-a572-6376d276da48,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:08:54.688256 containerd[1586]: time="2025-08-13T07:08:54.688059985Z" level=error msg="Failed to destroy network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.702717 containerd[1586]: time="2025-08-13T07:08:54.702662910Z" level=error msg="Failed to destroy network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.708749 containerd[1586]: time="2025-08-13T07:08:54.708308028Z" level=error msg="encountered an error cleaning up failed sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.709074 containerd[1586]: time="2025-08-13T07:08:54.708820068Z" level=error msg="encountered an error cleaning up failed sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.737902 containerd[1586]: time="2025-08-13T07:08:54.737841478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5njd8,Uid:72e07df5-008f-4b2d-94ac-56f5a048d8f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.746377 containerd[1586]: time="2025-08-13T07:08:54.746267789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5498b48d-ns55v,Uid:f0685533-643f-4d9d-85a8-1d45cf68c77e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.750374 containerd[1586]: time="2025-08-13T07:08:54.750315469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2gcv6,Uid:bd634c8d-a482-4f95-9b3b-58b3c5eafd08,Namespace:calico-system,Attempt:0,}" Aug 13 07:08:54.763308 containerd[1586]: time="2025-08-13T07:08:54.763210118Z" level=error msg="Failed to destroy network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.763604 containerd[1586]: time="2025-08-13T07:08:54.763578270Z" level=error msg="encountered an error cleaning up failed sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.763671 containerd[1586]: time="2025-08-13T07:08:54.763633192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c4567fdc-7fml2,Uid:20826453-f382-41f8-a572-6376d276da48,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.767047 kubelet[2669]: E0813 07:08:54.766733 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.767047 kubelet[2669]: E0813 07:08:54.766881 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.767047 kubelet[2669]: E0813 07:08:54.766935 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.769280 kubelet[2669]: E0813 07:08:54.767210 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c4567fdc-7fml2" Aug 13 07:08:54.769280 kubelet[2669]: E0813 07:08:54.767290 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c4567fdc-7fml2" Aug 13 07:08:54.769280 kubelet[2669]: E0813 07:08:54.767265 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5498b48d-ns55v" Aug 13 07:08:54.769280 kubelet[2669]: E0813 07:08:54.767391 2669 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5498b48d-ns55v" Aug 13 07:08:54.769418 kubelet[2669]: E0813 07:08:54.767445 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5498b48d-ns55v_calico-apiserver(f0685533-643f-4d9d-85a8-1d45cf68c77e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5498b48d-ns55v_calico-apiserver(f0685533-643f-4d9d-85a8-1d45cf68c77e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5498b48d-ns55v" podUID="f0685533-643f-4d9d-85a8-1d45cf68c77e" Aug 13 07:08:54.769418 kubelet[2669]: E0813 07:08:54.767251 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5njd8" Aug 13 07:08:54.769418 kubelet[2669]: E0813 07:08:54.767539 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5njd8" Aug 13 07:08:54.769546 kubelet[2669]: E0813 07:08:54.767576 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-5njd8_kube-system(72e07df5-008f-4b2d-94ac-56f5a048d8f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5njd8_kube-system(72e07df5-008f-4b2d-94ac-56f5a048d8f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5njd8" podUID="72e07df5-008f-4b2d-94ac-56f5a048d8f4" Aug 13 07:08:54.769546 kubelet[2669]: E0813 07:08:54.767358 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c4567fdc-7fml2_calico-apiserver(20826453-f382-41f8-a572-6376d276da48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c4567fdc-7fml2_calico-apiserver(20826453-f382-41f8-a572-6376d276da48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c4567fdc-7fml2" podUID="20826453-f382-41f8-a572-6376d276da48" Aug 13 07:08:54.792834 containerd[1586]: time="2025-08-13T07:08:54.792724066Z" level=error msg="Failed to destroy network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.795356 containerd[1586]: time="2025-08-13T07:08:54.794701157Z" level=error msg="encountered an error cleaning up failed sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.795968 containerd[1586]: time="2025-08-13T07:08:54.795329501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-v27pf,Uid:30df6160-80aa-4f0d-92aa-0f0db6a04acd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.796289 kubelet[2669]: E0813 07:08:54.796142 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.796419 kubelet[2669]: E0813 07:08:54.796272 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-v27pf" Aug 13 07:08:54.796419 kubelet[2669]: E0813 07:08:54.796400 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-v27pf" Aug 13 07:08:54.796580 kubelet[2669]: E0813 07:08:54.796468 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-v27pf_calico-system(30df6160-80aa-4f0d-92aa-0f0db6a04acd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-58fd7646b9-v27pf_calico-system(30df6160-80aa-4f0d-92aa-0f0db6a04acd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-v27pf" podUID="30df6160-80aa-4f0d-92aa-0f0db6a04acd" Aug 13 07:08:54.808699 containerd[1586]: time="2025-08-13T07:08:54.808505531Z" level=error msg="Failed to destroy network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.808855 containerd[1586]: time="2025-08-13T07:08:54.808741622Z" level=error msg="Failed to destroy network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.809289 containerd[1586]: time="2025-08-13T07:08:54.809149709Z" level=error msg="encountered an error cleaning up failed sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.809289 containerd[1586]: time="2025-08-13T07:08:54.809224942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ctkfn,Uid:fe78265f-f9af-4623-a337-884c31c36ef2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.809405 containerd[1586]: time="2025-08-13T07:08:54.809378123Z" level=error msg="encountered an error cleaning up failed sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.809549 containerd[1586]: time="2025-08-13T07:08:54.809433099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5498b48d-m2ftn,Uid:99077a63-9db7-4cec-a6a2-af9cb28b57de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.809728 kubelet[2669]: E0813 07:08:54.809685 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.809804 kubelet[2669]: E0813 07:08:54.809765 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5498b48d-m2ftn" Aug 13 07:08:54.809842 kubelet[2669]: E0813 07:08:54.809815 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5498b48d-m2ftn" Aug 13 07:08:54.809952 kubelet[2669]: E0813 07:08:54.809866 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5498b48d-m2ftn_calico-apiserver(99077a63-9db7-4cec-a6a2-af9cb28b57de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5498b48d-m2ftn_calico-apiserver(99077a63-9db7-4cec-a6a2-af9cb28b57de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5498b48d-m2ftn" podUID="99077a63-9db7-4cec-a6a2-af9cb28b57de" Aug 13 07:08:54.810855 kubelet[2669]: E0813 07:08:54.810125 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.810855 kubelet[2669]: E0813 07:08:54.810158 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ctkfn" Aug 13 07:08:54.810855 kubelet[2669]: E0813 07:08:54.810178 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ctkfn" Aug 13 07:08:54.811012 kubelet[2669]: E0813 07:08:54.810214 2669 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ctkfn_kube-system(fe78265f-f9af-4623-a337-884c31c36ef2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ctkfn_kube-system(fe78265f-f9af-4623-a337-884c31c36ef2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ctkfn" podUID="fe78265f-f9af-4623-a337-884c31c36ef2" Aug 13 07:08:54.814248 containerd[1586]: time="2025-08-13T07:08:54.813450274Z" level=error msg="Failed to destroy network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.814527 containerd[1586]: time="2025-08-13T07:08:54.814461665Z" level=error msg="Failed to destroy network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.816080 containerd[1586]: time="2025-08-13T07:08:54.816037767Z" level=error msg="encountered an error cleaning up failed sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.816184 containerd[1586]: time="2025-08-13T07:08:54.816099906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b967454b-hf5vt,Uid:64736ca2-7b5a-4fe7-b839-c4d57531ab36,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.817008 containerd[1586]: time="2025-08-13T07:08:54.816626991Z" level=error msg="encountered an error cleaning up failed sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.817008 containerd[1586]: time="2025-08-13T07:08:54.816687272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df6b56ccd-xwrlr,Uid:c568768c-f9a9-47ff-bd2e-11cbdcfd7596,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.817847 kubelet[2669]: E0813 07:08:54.816333 2669 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.817847 kubelet[2669]: E0813 07:08:54.816407 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b967454b-hf5vt" Aug 13 07:08:54.817847 kubelet[2669]: E0813 07:08:54.816436 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b967454b-hf5vt" Aug 13 07:08:54.818489 kubelet[2669]: E0813 07:08:54.816482 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b967454b-hf5vt_calico-system(64736ca2-7b5a-4fe7-b839-c4d57531ab36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b967454b-hf5vt_calico-system(64736ca2-7b5a-4fe7-b839-c4d57531ab36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b967454b-hf5vt" podUID="64736ca2-7b5a-4fe7-b839-c4d57531ab36" Aug 13 07:08:54.818489 kubelet[2669]: E0813 07:08:54.816999 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.818489 kubelet[2669]: E0813 07:08:54.817034 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5df6b56ccd-xwrlr" Aug 13 07:08:54.818678 kubelet[2669]: E0813 07:08:54.817064 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5df6b56ccd-xwrlr" Aug 13 07:08:54.818678 kubelet[2669]: E0813 07:08:54.817100 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5df6b56ccd-xwrlr_calico-system(c568768c-f9a9-47ff-bd2e-11cbdcfd7596)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5df6b56ccd-xwrlr_calico-system(c568768c-f9a9-47ff-bd2e-11cbdcfd7596)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5df6b56ccd-xwrlr" podUID="c568768c-f9a9-47ff-bd2e-11cbdcfd7596" Aug 13 07:08:54.874806 containerd[1586]: time="2025-08-13T07:08:54.874681592Z" level=error msg="Failed to destroy network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.875152 containerd[1586]: time="2025-08-13T07:08:54.875117089Z" level=error msg="encountered an error cleaning up failed sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.875232 containerd[1586]: time="2025-08-13T07:08:54.875171827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2gcv6,Uid:bd634c8d-a482-4f95-9b3b-58b3c5eafd08,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.875805 kubelet[2669]: E0813 07:08:54.875498 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:54.875805 kubelet[2669]: E0813 07:08:54.875579 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2gcv6" Aug 13 07:08:54.875805 kubelet[2669]: E0813 07:08:54.875614 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2gcv6" Aug 13 07:08:54.877272 kubelet[2669]: E0813 07:08:54.875670 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2gcv6_calico-system(bd634c8d-a482-4f95-9b3b-58b3c5eafd08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2gcv6_calico-system(bd634c8d-a482-4f95-9b3b-58b3c5eafd08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2gcv6" podUID="bd634c8d-a482-4f95-9b3b-58b3c5eafd08" Aug 13 07:08:54.957013 kubelet[2669]: I0813 07:08:54.956960 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:08:54.960461 kubelet[2669]: I0813 07:08:54.960411 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:08:54.962855 kubelet[2669]: I0813 07:08:54.962527 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:08:54.963996 containerd[1586]: time="2025-08-13T07:08:54.963347084Z" level=info msg="StopPodSandbox for \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\"" Aug 13 07:08:54.965257 containerd[1586]: time="2025-08-13T07:08:54.965222390Z" level=info msg="Ensure that sandbox becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570 in task-service has been cleanup successfully" Aug 13 07:08:54.966751 containerd[1586]: time="2025-08-13T07:08:54.966586353Z" level=info msg="StopPodSandbox for \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\"" Aug 13 07:08:54.967214 containerd[1586]: time="2025-08-13T07:08:54.966966768Z" level=info msg="StopPodSandbox for \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\"" Aug 13 07:08:54.967303 containerd[1586]: time="2025-08-13T07:08:54.967211351Z" level=info msg="Ensure that sandbox 11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b in task-service has been cleanup successfully" Aug 13 07:08:54.968124 containerd[1586]: time="2025-08-13T07:08:54.968101003Z" level=info msg="Ensure that sandbox 5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde in task-service has been cleanup successfully" Aug 13 07:08:54.972742 kubelet[2669]: I0813 07:08:54.972590 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:08:54.973508 containerd[1586]: time="2025-08-13T07:08:54.973303617Z" level=info msg="StopPodSandbox for \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\"" Aug 13 07:08:54.975255 containerd[1586]: time="2025-08-13T07:08:54.975152321Z" level=info msg="Ensure that sandbox c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334 in task-service has been cleanup successfully" Aug 13 07:08:54.980806 kubelet[2669]: I0813 07:08:54.980747 2669 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:08:54.988179 containerd[1586]: time="2025-08-13T07:08:54.986988790Z" level=info msg="StopPodSandbox for \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\"" Aug 13 07:08:54.988179 containerd[1586]: time="2025-08-13T07:08:54.988068155Z" level=info msg="Ensure that sandbox 28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605 in task-service has been cleanup successfully" Aug 13 07:08:55.001705 kubelet[2669]: I0813 07:08:55.001599 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:08:55.012758 containerd[1586]: time="2025-08-13T07:08:55.012383299Z" level=info msg="StopPodSandbox for \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\"" Aug 13 07:08:55.021358 containerd[1586]: time="2025-08-13T07:08:55.021084598Z" level=info msg="Ensure that sandbox 5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1 in task-service has been cleanup successfully" Aug 13 07:08:55.036048 kubelet[2669]: I0813 07:08:55.035677 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:08:55.041359 containerd[1586]: time="2025-08-13T07:08:55.041058858Z" level=info msg="StopPodSandbox for \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\"" Aug 13 07:08:55.041359 containerd[1586]: time="2025-08-13T07:08:55.041247096Z" level=info msg="Ensure that sandbox 642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04 in task-service has been cleanup successfully" Aug 13 07:08:55.052100 kubelet[2669]: I0813 07:08:55.050045 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:08:55.052812 containerd[1586]: time="2025-08-13T07:08:55.052535747Z" level=info msg="StopPodSandbox for \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\"" Aug 13 07:08:55.055198 containerd[1586]: time="2025-08-13T07:08:55.054998564Z" level=info msg="Ensure that sandbox 451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c in task-service has been cleanup successfully" Aug 13 07:08:55.057210 kubelet[2669]: I0813 07:08:55.057185 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:08:55.062958 containerd[1586]: time="2025-08-13T07:08:55.062908804Z" level=info msg="StopPodSandbox for \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\"" Aug 13 07:08:55.063124 containerd[1586]: time="2025-08-13T07:08:55.063097659Z" level=info msg="Ensure that sandbox c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba in task-service has been cleanup successfully" Aug 13 07:08:55.159406 containerd[1586]: time="2025-08-13T07:08:55.159045047Z" level=error msg="StopPodSandbox for \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\" failed" error="failed to destroy network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Aug 13 07:08:55.160086 kubelet[2669]: E0813 07:08:55.160043 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:08:55.160343 kubelet[2669]: E0813 07:08:55.160230 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde"} Aug 13 07:08:55.160343 kubelet[2669]: E0813 07:08:55.160309 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30df6160-80aa-4f0d-92aa-0f0db6a04acd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.160865 kubelet[2669]: E0813 07:08:55.160332 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30df6160-80aa-4f0d-92aa-0f0db6a04acd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-v27pf" podUID="30df6160-80aa-4f0d-92aa-0f0db6a04acd" Aug 13 07:08:55.168863 containerd[1586]: time="2025-08-13T07:08:55.168809884Z" level=error msg="StopPodSandbox for \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\" failed" error="failed to destroy network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:55.169395 kubelet[2669]: E0813 07:08:55.169340 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:08:55.169515 kubelet[2669]: E0813 07:08:55.169414 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570"} Aug 13 07:08:55.169515 kubelet[2669]: E0813 07:08:55.169452 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c568768c-f9a9-47ff-bd2e-11cbdcfd7596\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.169515 kubelet[2669]: E0813 07:08:55.169474 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c568768c-f9a9-47ff-bd2e-11cbdcfd7596\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5df6b56ccd-xwrlr" podUID="c568768c-f9a9-47ff-bd2e-11cbdcfd7596" Aug 13 07:08:55.185137 containerd[1586]: time="2025-08-13T07:08:55.185079909Z" level=error msg="StopPodSandbox for \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\" failed" error="failed to destroy network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:55.185789 kubelet[2669]: E0813 07:08:55.185733 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:08:55.185891 kubelet[2669]: E0813 07:08:55.185807 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b"} Aug 13 07:08:55.185891 kubelet[2669]: E0813 07:08:55.185848 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe78265f-f9af-4623-a337-884c31c36ef2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.186022 kubelet[2669]: E0813 07:08:55.185887 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe78265f-f9af-4623-a337-884c31c36ef2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ctkfn" podUID="fe78265f-f9af-4623-a337-884c31c36ef2" Aug 13 07:08:55.187386 containerd[1586]: time="2025-08-13T07:08:55.187312249Z" level=error msg="StopPodSandbox for \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\" 
failed" error="failed to destroy network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:55.187990 kubelet[2669]: E0813 07:08:55.187830 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:08:55.187990 kubelet[2669]: E0813 07:08:55.187890 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605"} Aug 13 07:08:55.187990 kubelet[2669]: E0813 07:08:55.187927 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.187990 kubelet[2669]: E0813 07:08:55.187949 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b967454b-hf5vt" podUID="64736ca2-7b5a-4fe7-b839-c4d57531ab36" Aug 13 07:08:55.201129 containerd[1586]: time="2025-08-13T07:08:55.200929622Z" level=error msg="StopPodSandbox for \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\" failed" error="failed to destroy network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:55.201661 kubelet[2669]: E0813 07:08:55.201257 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:08:55.201661 kubelet[2669]: E0813 07:08:55.201328 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334"} Aug 13 07:08:55.201661 kubelet[2669]: E0813 
07:08:55.201378 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0685533-643f-4d9d-85a8-1d45cf68c77e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.201661 kubelet[2669]: E0813 07:08:55.201414 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0685533-643f-4d9d-85a8-1d45cf68c77e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5498b48d-ns55v" podUID="f0685533-643f-4d9d-85a8-1d45cf68c77e" Aug 13 07:08:55.230762 containerd[1586]: time="2025-08-13T07:08:55.229540308Z" level=error msg="StopPodSandbox for \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\" failed" error="failed to destroy network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:55.232664 kubelet[2669]: E0813 07:08:55.229804 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:08:55.232664 kubelet[2669]: E0813 07:08:55.229862 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1"} Aug 13 07:08:55.232664 kubelet[2669]: E0813 07:08:55.229896 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72e07df5-008f-4b2d-94ac-56f5a048d8f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.232664 kubelet[2669]: E0813 07:08:55.229918 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72e07df5-008f-4b2d-94ac-56f5a048d8f4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-5njd8" podUID="72e07df5-008f-4b2d-94ac-56f5a048d8f4" Aug 13 07:08:55.235835 containerd[1586]: time="2025-08-13T07:08:55.235364756Z" level=error msg="StopPodSandbox for \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\" failed" error="failed to destroy network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:55.235835 containerd[1586]: time="2025-08-13T07:08:55.235743078Z" level=error msg="StopPodSandbox for \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\" failed" error="failed to destroy network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:55.235959 kubelet[2669]: E0813 07:08:55.235677 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:08:55.235959 kubelet[2669]: E0813 07:08:55.235734 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c"} Aug 13 07:08:55.235959 kubelet[2669]: E0813 07:08:55.235767 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99077a63-9db7-4cec-a6a2-af9cb28b57de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.235959 kubelet[2669]: E0813 07:08:55.235805 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99077a63-9db7-4cec-a6a2-af9cb28b57de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5498b48d-m2ftn" podUID="99077a63-9db7-4cec-a6a2-af9cb28b57de" Aug 13 07:08:55.237039 kubelet[2669]: E0813 07:08:55.236672 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:08:55.237039 kubelet[2669]: E0813 07:08:55.236734 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04"} Aug 13 07:08:55.237039 kubelet[2669]: E0813 07:08:55.236763 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd634c8d-a482-4f95-9b3b-58b3c5eafd08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.237039 kubelet[2669]: E0813 07:08:55.236828 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd634c8d-a482-4f95-9b3b-58b3c5eafd08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2gcv6" podUID="bd634c8d-a482-4f95-9b3b-58b3c5eafd08" Aug 13 07:08:55.237308 containerd[1586]: time="2025-08-13T07:08:55.236388564Z" level=error msg="StopPodSandbox for \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\" failed" error="failed to destroy network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:08:55.237580 kubelet[2669]: E0813 07:08:55.237461 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:08:55.237580 kubelet[2669]: E0813 07:08:55.237505 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba"} Aug 13 07:08:55.237580 kubelet[2669]: E0813 07:08:55.237533 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20826453-f382-41f8-a572-6376d276da48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:08:55.240063 kubelet[2669]: E0813 07:08:55.237551 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20826453-f382-41f8-a572-6376d276da48\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c4567fdc-7fml2" podUID="20826453-f382-41f8-a572-6376d276da48" Aug 13 07:08:59.865912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544678513.mount: Deactivated successfully. Aug 13 07:09:00.004144 containerd[1586]: time="2025-08-13T07:09:00.001481322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:00.007107 containerd[1586]: time="2025-08-13T07:08:59.954960167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:09:00.018687 containerd[1586]: time="2025-08-13T07:09:00.018451147Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:00.021286 containerd[1586]: time="2025-08-13T07:09:00.021233379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:00.022826 containerd[1586]: time="2025-08-13T07:09:00.022471713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.045875193s" Aug 13 07:09:00.022826 containerd[1586]: time="2025-08-13T07:09:00.022533024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:09:00.060311 containerd[1586]: time="2025-08-13T07:09:00.060231429Z" level=info msg="CreateContainer within sandbox \"dd6533cba8c0984cab314b6c899e90625670f7c07ac14532bbf6008bd174a5e5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:09:00.188372 containerd[1586]: time="2025-08-13T07:09:00.186147666Z" level=info msg="CreateContainer within sandbox \"dd6533cba8c0984cab314b6c899e90625670f7c07ac14532bbf6008bd174a5e5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"335e116abb9a31b849c082c1e008bf81f1db3fa635334de7515f9f0dd51016ea\"" Aug 13 07:09:00.208819 containerd[1586]: time="2025-08-13T07:09:00.208400350Z" level=info msg="StartContainer for \"335e116abb9a31b849c082c1e008bf81f1db3fa635334de7515f9f0dd51016ea\"" Aug 13 07:09:00.392872 containerd[1586]: time="2025-08-13T07:09:00.392555124Z" level=info msg="StartContainer for \"335e116abb9a31b849c082c1e008bf81f1db3fa635334de7515f9f0dd51016ea\" returns successfully" Aug 13 07:09:00.519142 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:09:00.520390 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
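[editor's note] The KillPodSandbox failures above all report the same root cause from the Calico CNI plugin: /var/lib/calico/nodename does not exist on the host, so the plugin cannot determine which node it is running on and refuses to tear down the sandbox network; kubelet keeps retrying (hence the repeated errors for csi-node-driver-2gcv6 and the calico-apiserver pod) until the calico-node container that starts just above (07:09:00) populates that file. A minimal sketch of the kind of check the error message describes, using a hypothetical readNodename helper rather than Calico's actual source:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readNodename illustrates the dependency named in the log: the CNI plugin
// needs the node name that the calico/node container writes to
// /var/lib/calico/nodename. While that file is missing, every CNI ADD/DEL
// on the host fails with the error shown above.
func readNodename(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename("/var/lib/calico/nodename")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```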
Aug 13 07:09:00.877538 containerd[1586]: time="2025-08-13T07:09:00.877057279Z" level=info msg="StopPodSandbox for \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\"" Aug 13 07:09:01.202610 kubelet[2669]: I0813 07:09:01.190627 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lpvss" podStartSLOduration=2.562476597 podStartE2EDuration="17.185209343s" podCreationTimestamp="2025-08-13 07:08:44 +0000 UTC" firstStartedPulling="2025-08-13 07:08:45.401260684 +0000 UTC m=+22.804424101" lastFinishedPulling="2025-08-13 07:09:00.023993427 +0000 UTC m=+37.427156847" observedRunningTime="2025-08-13 07:09:01.18458959 +0000 UTC m=+38.587753029" watchObservedRunningTime="2025-08-13 07:09:01.185209343 +0000 UTC m=+38.588372785" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.027 [INFO][3918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.030 [INFO][3918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" iface="eth0" netns="/var/run/netns/cni-984f34d4-5950-df2a-c9b9-80ab395f9ddd" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.031 [INFO][3918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" iface="eth0" netns="/var/run/netns/cni-984f34d4-5950-df2a-c9b9-80ab395f9ddd" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.033 [INFO][3918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" iface="eth0" netns="/var/run/netns/cni-984f34d4-5950-df2a-c9b9-80ab395f9ddd" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.033 [INFO][3918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.033 [INFO][3918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.292 [INFO][3926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.295 [INFO][3926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.296 [INFO][3926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.314 [WARNING][3926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.315 [INFO][3926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.317 [INFO][3926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:01.324978 containerd[1586]: 2025-08-13 07:09:01.321 [INFO][3918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:01.328936 containerd[1586]: time="2025-08-13T07:09:01.327051633Z" level=info msg="TearDown network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\" successfully" Aug 13 07:09:01.328936 containerd[1586]: time="2025-08-13T07:09:01.327109828Z" level=info msg="StopPodSandbox for \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\" returns successfully" Aug 13 07:09:01.331363 systemd[1]: run-netns-cni\x2d984f34d4\x2d5950\x2ddf2a\x2dc9b9\x2d80ab395f9ddd.mount: Deactivated successfully. Aug 13 07:09:01.400850 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:01.398652 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:01.402429 kubelet[2669]: I0813 07:09:01.400016 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64736ca2-7b5a-4fe7-b839-c4d57531ab36-whisker-ca-bundle\") pod \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\" (UID: \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\") " Aug 13 07:09:01.402429 kubelet[2669]: I0813 07:09:01.400125 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csv6k\" (UniqueName: \"kubernetes.io/projected/64736ca2-7b5a-4fe7-b839-c4d57531ab36-kube-api-access-csv6k\") pod \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\" (UID: \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\") " Aug 13 07:09:01.402429 kubelet[2669]: I0813 07:09:01.400172 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/64736ca2-7b5a-4fe7-b839-c4d57531ab36-whisker-backend-key-pair\") pod \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\" (UID: \"64736ca2-7b5a-4fe7-b839-c4d57531ab36\") " Aug 13 07:09:01.398756 systemd-resolved[1480]: Flushed all caches. Aug 13 07:09:01.409841 kubelet[2669]: I0813 07:09:01.407913 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64736ca2-7b5a-4fe7-b839-c4d57531ab36-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "64736ca2-7b5a-4fe7-b839-c4d57531ab36" (UID: "64736ca2-7b5a-4fe7-b839-c4d57531ab36"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:09:01.415511 kubelet[2669]: I0813 07:09:01.412995 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64736ca2-7b5a-4fe7-b839-c4d57531ab36-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "64736ca2-7b5a-4fe7-b839-c4d57531ab36" (UID: "64736ca2-7b5a-4fe7-b839-c4d57531ab36"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:09:01.418175 kubelet[2669]: I0813 07:09:01.418092 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64736ca2-7b5a-4fe7-b839-c4d57531ab36-kube-api-access-csv6k" (OuterVolumeSpecName: "kube-api-access-csv6k") pod "64736ca2-7b5a-4fe7-b839-c4d57531ab36" (UID: "64736ca2-7b5a-4fe7-b839-c4d57531ab36"). InnerVolumeSpecName "kube-api-access-csv6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:09:01.420003 systemd[1]: var-lib-kubelet-pods-64736ca2\x2d7b5a\x2d4fe7\x2db839\x2dc4d57531ab36-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:09:01.426455 systemd[1]: var-lib-kubelet-pods-64736ca2\x2d7b5a\x2d4fe7\x2db839\x2dc4d57531ab36-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcsv6k.mount: Deactivated successfully. Aug 13 07:09:01.500908 kubelet[2669]: I0813 07:09:01.500630 2669 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64736ca2-7b5a-4fe7-b839-c4d57531ab36-whisker-ca-bundle\") on node \"ci-4081.3.5-e-55e36c071a\" DevicePath \"\"" Aug 13 07:09:01.500908 kubelet[2669]: I0813 07:09:01.500686 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csv6k\" (UniqueName: \"kubernetes.io/projected/64736ca2-7b5a-4fe7-b839-c4d57531ab36-kube-api-access-csv6k\") on node \"ci-4081.3.5-e-55e36c071a\" DevicePath \"\"" Aug 13 07:09:01.500908 kubelet[2669]: I0813 07:09:01.500702 2669 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/64736ca2-7b5a-4fe7-b839-c4d57531ab36-whisker-backend-key-pair\") on node \"ci-4081.3.5-e-55e36c071a\" DevicePath \"\"" Aug 13 07:09:02.101940 kubelet[2669]: I0813 07:09:02.100866 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:02.317152 kubelet[2669]: I0813 07:09:02.317100 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/925a508c-52c3-4ce6-ab07-b0c7dede0a8b-whisker-backend-key-pair\") pod \"whisker-57898f67b6-cpcvk\" (UID: \"925a508c-52c3-4ce6-ab07-b0c7dede0a8b\") " pod="calico-system/whisker-57898f67b6-cpcvk" Aug 13 07:09:02.320849 kubelet[2669]: I0813 07:09:02.319927 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925a508c-52c3-4ce6-ab07-b0c7dede0a8b-whisker-ca-bundle\") pod \"whisker-57898f67b6-cpcvk\" (UID: \"925a508c-52c3-4ce6-ab07-b0c7dede0a8b\") " pod="calico-system/whisker-57898f67b6-cpcvk" Aug 13 07:09:02.320849 kubelet[2669]: I0813 07:09:02.319995 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtsk5\" (UniqueName: \"kubernetes.io/projected/925a508c-52c3-4ce6-ab07-b0c7dede0a8b-kube-api-access-qtsk5\") pod 
\"whisker-57898f67b6-cpcvk\" (UID: \"925a508c-52c3-4ce6-ab07-b0c7dede0a8b\") " pod="calico-system/whisker-57898f67b6-cpcvk" Aug 13 07:09:02.581327 containerd[1586]: time="2025-08-13T07:09:02.580488786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57898f67b6-cpcvk,Uid:925a508c-52c3-4ce6-ab07-b0c7dede0a8b,Namespace:calico-system,Attempt:0,}" Aug 13 07:09:02.762356 kubelet[2669]: I0813 07:09:02.760546 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64736ca2-7b5a-4fe7-b839-c4d57531ab36" path="/var/lib/kubelet/pods/64736ca2-7b5a-4fe7-b839-c4d57531ab36/volumes" Aug 13 07:09:03.066998 systemd-networkd[1223]: cali38022d8fcc1: Link UP Aug 13 07:09:03.067443 systemd-networkd[1223]: cali38022d8fcc1: Gained carrier Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.750 [INFO][4035] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.794 [INFO][4035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0 whisker-57898f67b6- calico-system 925a508c-52c3-4ce6-ab07-b0c7dede0a8b 925 0 2025-08-13 07:09:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57898f67b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-e-55e36c071a whisker-57898f67b6-cpcvk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali38022d8fcc1 [] [] }} ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Namespace="calico-system" Pod="whisker-57898f67b6-cpcvk" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.794 [INFO][4035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Namespace="calico-system" Pod="whisker-57898f67b6-cpcvk" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.875 [INFO][4048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" HandleID="k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.877 [INFO][4048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" HandleID="k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-e-55e36c071a", "pod":"whisker-57898f67b6-cpcvk", "timestamp":"2025-08-13 07:09:02.875700765 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.877 [INFO][4048] ipam/ipam_plugin.go 353: About 
to acquire host-wide IPAM lock. Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.877 [INFO][4048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.877 [INFO][4048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.899 [INFO][4048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.924 [INFO][4048] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.949 [INFO][4048] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.957 [INFO][4048] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.965 [INFO][4048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.965 [INFO][4048] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.970 [INFO][4048] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44 Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.984 [INFO][4048] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.995 [INFO][4048] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.65/26] block=192.168.61.64/26 handle="k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.996 [INFO][4048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.65/26] handle="k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.997 [INFO][4048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
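[editor's note] The ipam_plugin lines above show the shape of a Calico address assignment once the node is healthy: acquire the host-wide IPAM lock, confirm the node's affinity to block 192.168.61.64/26, claim one free address from it (192.168.61.65 here), write the block back, and release the lock. As an illustration only, not Calico's implementation, the claim-one-address-from-a-block step can be sketched with the Go standard library; nextFree and the used map below are assumptions made up for the example:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the block that is not already used.
// It is a toy stand-in for the "Attempting to assign 1 addresses from block"
// step in the log; real Calico tracks allocations in its datastore.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.61.64/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.61.64"): true, // network address, skipped
	}
	if ip, ok := nextFree(block, used); ok {
		fmt.Println("assigned", ip) // 192.168.61.65, matching the log above
	}
}
```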
Aug 13 07:09:03.113413 containerd[1586]: 2025-08-13 07:09:02.997 [INFO][4048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.65/26] IPv6=[] ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" HandleID="k8s-pod-network.23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" Aug 13 07:09:03.120972 containerd[1586]: 2025-08-13 07:09:03.019 [INFO][4035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Namespace="calico-system" Pod="whisker-57898f67b6-cpcvk" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0", GenerateName:"whisker-57898f67b6-", Namespace:"calico-system", SelfLink:"", UID:"925a508c-52c3-4ce6-ab07-b0c7dede0a8b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57898f67b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"whisker-57898f67b6-cpcvk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali38022d8fcc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:03.120972 containerd[1586]: 2025-08-13 07:09:03.021 [INFO][4035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.65/32] ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Namespace="calico-system" Pod="whisker-57898f67b6-cpcvk" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" Aug 13 07:09:03.120972 containerd[1586]: 2025-08-13 07:09:03.021 [INFO][4035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38022d8fcc1 ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Namespace="calico-system" Pod="whisker-57898f67b6-cpcvk" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" Aug 13 07:09:03.120972 containerd[1586]: 2025-08-13 07:09:03.073 [INFO][4035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Namespace="calico-system" Pod="whisker-57898f67b6-cpcvk" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" Aug 13 07:09:03.120972 containerd[1586]: 2025-08-13 07:09:03.077 [INFO][4035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Namespace="calico-system" 
Pod="whisker-57898f67b6-cpcvk" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0", GenerateName:"whisker-57898f67b6-", Namespace:"calico-system", SelfLink:"", UID:"925a508c-52c3-4ce6-ab07-b0c7dede0a8b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 9, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57898f67b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44", Pod:"whisker-57898f67b6-cpcvk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali38022d8fcc1", MAC:"56:8f:1e:e9:13:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:03.120972 containerd[1586]: 2025-08-13 07:09:03.102 [INFO][4035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44" Namespace="calico-system" Pod="whisker-57898f67b6-cpcvk" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--57898f67b6--cpcvk-eth0" Aug 13 07:09:03.218079 containerd[1586]: time="2025-08-13T07:09:03.217835711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:03.218079 containerd[1586]: time="2025-08-13T07:09:03.217951780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:03.218079 containerd[1586]: time="2025-08-13T07:09:03.217975113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:03.219022 containerd[1586]: time="2025-08-13T07:09:03.218478667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:03.305188 containerd[1586]: time="2025-08-13T07:09:03.305148118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57898f67b6-cpcvk,Uid:925a508c-52c3-4ce6-ab07-b0c7dede0a8b,Namespace:calico-system,Attempt:0,} returns sandbox id \"23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44\"" Aug 13 07:09:03.322314 containerd[1586]: time="2025-08-13T07:09:03.322188195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:09:03.446947 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:03.446404 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:03.446438 systemd-resolved[1480]: Flushed all caches. 
Aug 13 07:09:04.790512 systemd-networkd[1223]: cali38022d8fcc1: Gained IPv6LL Aug 13 07:09:04.819936 containerd[1586]: time="2025-08-13T07:09:04.818873799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:04.819936 containerd[1586]: time="2025-08-13T07:09:04.819656839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:09:04.819936 containerd[1586]: time="2025-08-13T07:09:04.819811068Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:04.822633 containerd[1586]: time="2025-08-13T07:09:04.822585798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:04.825973 containerd[1586]: time="2025-08-13T07:09:04.825813727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.503325868s" Aug 13 07:09:04.825973 containerd[1586]: time="2025-08-13T07:09:04.825871479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:09:04.829360 containerd[1586]: time="2025-08-13T07:09:04.829126529Z" level=info msg="CreateContainer within sandbox \"23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:09:04.845915 containerd[1586]: time="2025-08-13T07:09:04.845872163Z" level=info msg="CreateContainer within sandbox \"23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"85f7fc9a5f31df69351b895bc2c5e8daec89e8b7a8e3f758c69e47fcf767d4a8\"" Aug 13 07:09:04.847718 containerd[1586]: time="2025-08-13T07:09:04.847677117Z" level=info msg="StartContainer for \"85f7fc9a5f31df69351b895bc2c5e8daec89e8b7a8e3f758c69e47fcf767d4a8\"" Aug 13 07:09:04.948883 containerd[1586]: time="2025-08-13T07:09:04.948552444Z" level=info msg="StartContainer for \"85f7fc9a5f31df69351b895bc2c5e8daec89e8b7a8e3f758c69e47fcf767d4a8\" returns successfully" Aug 13 07:09:04.954499 containerd[1586]: time="2025-08-13T07:09:04.954379217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:09:05.735992 containerd[1586]: time="2025-08-13T07:09:05.735499620Z" level=info msg="StopPodSandbox for \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\"" Aug 13 07:09:05.735992 containerd[1586]: time="2025-08-13T07:09:05.735537415Z" level=info msg="StopPodSandbox for \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\"" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.831 [INFO][4212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.831 [INFO][4212] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" iface="eth0" netns="/var/run/netns/cni-1459efb1-a358-8f4b-c7d5-f5a039940585" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.835 [INFO][4212] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" iface="eth0" netns="/var/run/netns/cni-1459efb1-a358-8f4b-c7d5-f5a039940585" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.836 [INFO][4212] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" iface="eth0" netns="/var/run/netns/cni-1459efb1-a358-8f4b-c7d5-f5a039940585" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.836 [INFO][4212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.836 [INFO][4212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.884 [INFO][4228] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.885 [INFO][4228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.885 [INFO][4228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.896 [WARNING][4228] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.896 [INFO][4228] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.899 [INFO][4228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:05.911696 containerd[1586]: 2025-08-13 07:09:05.905 [INFO][4212] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:05.916128 containerd[1586]: time="2025-08-13T07:09:05.913089817Z" level=info msg="TearDown network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\" successfully" Aug 13 07:09:05.916128 containerd[1586]: time="2025-08-13T07:09:05.913144030Z" level=info msg="StopPodSandbox for \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\" returns successfully" Aug 13 07:09:05.918691 systemd[1]: run-netns-cni\x2d1459efb1\x2da358\x2d8f4b\x2dc7d5\x2df5a039940585.mount: Deactivated successfully. Aug 13 07:09:05.921210 containerd[1586]: time="2025-08-13T07:09:05.920308568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df6b56ccd-xwrlr,Uid:c568768c-f9a9-47ff-bd2e-11cbdcfd7596,Namespace:calico-system,Attempt:1,}" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.830 [INFO][4213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.830 [INFO][4213] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" iface="eth0" netns="/var/run/netns/cni-9dbea11b-5d75-453b-448e-e65b6d8bb696" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.832 [INFO][4213] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" iface="eth0" netns="/var/run/netns/cni-9dbea11b-5d75-453b-448e-e65b6d8bb696" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.834 [INFO][4213] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" iface="eth0" netns="/var/run/netns/cni-9dbea11b-5d75-453b-448e-e65b6d8bb696" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.834 [INFO][4213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.834 [INFO][4213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.889 [INFO][4226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.889 [INFO][4226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.899 [INFO][4226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.911 [WARNING][4226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.911 [INFO][4226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.916 [INFO][4226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:05.926811 containerd[1586]: 2025-08-13 07:09:05.923 [INFO][4213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:05.932433 containerd[1586]: time="2025-08-13T07:09:05.928823689Z" level=info msg="TearDown network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\" successfully" Aug 13 07:09:05.932433 containerd[1586]: time="2025-08-13T07:09:05.928866939Z" level=info msg="StopPodSandbox for \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\" returns successfully" Aug 13 07:09:05.932554 systemd[1]: run-netns-cni\x2d9dbea11b\x2d5d75\x2d453b\x2d448e\x2de65b6d8bb696.mount: Deactivated successfully. Aug 13 07:09:05.935537 containerd[1586]: time="2025-08-13T07:09:05.932803317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5498b48d-m2ftn,Uid:99077a63-9db7-4cec-a6a2-af9cb28b57de,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:09:06.169480 systemd-networkd[1223]: cali255363b998c: Link UP Aug 13 07:09:06.172443 systemd-networkd[1223]: cali255363b998c: Gained carrier Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.006 [INFO][4241] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.025 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0 calico-kube-controllers-5df6b56ccd- calico-system c568768c-f9a9-47ff-bd2e-11cbdcfd7596 942 0 2025-08-13 07:08:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5df6b56ccd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-e-55e36c071a calico-kube-controllers-5df6b56ccd-xwrlr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali255363b998c [] [] }} ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Namespace="calico-system" Pod="calico-kube-controllers-5df6b56ccd-xwrlr" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.025 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Namespace="calico-system" Pod="calico-kube-controllers-5df6b56ccd-xwrlr" 
WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.085 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" HandleID="k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.085 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" HandleID="k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-e-55e36c071a", "pod":"calico-kube-controllers-5df6b56ccd-xwrlr", "timestamp":"2025-08-13 07:09:06.085304343 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.085 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.085 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.085 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.097 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.106 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.114 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.117 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.121 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.121 [INFO][4265] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.124 [INFO][4265] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8 Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.132 [INFO][4265] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" 
host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.142 [INFO][4265] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.66/26] block=192.168.61.64/26 handle="k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.142 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.66/26] handle="k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.142 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:06.195432 containerd[1586]: 2025-08-13 07:09:06.142 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.66/26] IPv6=[] ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" HandleID="k8s-pod-network.c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:06.200457 containerd[1586]: 2025-08-13 07:09:06.155 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Namespace="calico-system" Pod="calico-kube-controllers-5df6b56ccd-xwrlr" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0", GenerateName:"calico-kube-controllers-5df6b56ccd-", Namespace:"calico-system", SelfLink:"", UID:"c568768c-f9a9-47ff-bd2e-11cbdcfd7596", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5df6b56ccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"calico-kube-controllers-5df6b56ccd-xwrlr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali255363b998c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:06.200457 containerd[1586]: 2025-08-13 07:09:06.156 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.66/32] ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Namespace="calico-system" Pod="calico-kube-controllers-5df6b56ccd-xwrlr" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:06.200457 containerd[1586]: 2025-08-13 07:09:06.156 [INFO][4241] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali255363b998c ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Namespace="calico-system" Pod="calico-kube-controllers-5df6b56ccd-xwrlr" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:06.200457 containerd[1586]: 2025-08-13 07:09:06.173 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Namespace="calico-system" Pod="calico-kube-controllers-5df6b56ccd-xwrlr" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:06.200457 containerd[1586]: 2025-08-13 07:09:06.173 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Namespace="calico-system" Pod="calico-kube-controllers-5df6b56ccd-xwrlr" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0", GenerateName:"calico-kube-controllers-5df6b56ccd-", Namespace:"calico-system", SelfLink:"", UID:"c568768c-f9a9-47ff-bd2e-11cbdcfd7596", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5df6b56ccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8", Pod:"calico-kube-controllers-5df6b56ccd-xwrlr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali255363b998c", MAC:"5e:e3:de:b2:b1:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:06.200457 containerd[1586]: 2025-08-13 07:09:06.192 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8" Namespace="calico-system" Pod="calico-kube-controllers-5df6b56ccd-xwrlr" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:06.274256 containerd[1586]: time="2025-08-13T07:09:06.272417562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:06.274256 containerd[1586]: time="2025-08-13T07:09:06.272484401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:06.274256 containerd[1586]: time="2025-08-13T07:09:06.272499013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:06.274256 containerd[1586]: time="2025-08-13T07:09:06.272609059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:06.280883 systemd-networkd[1223]: calibb0d6015b61: Link UP Aug 13 07:09:06.282910 systemd-networkd[1223]: calibb0d6015b61: Gained carrier Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.019 [INFO][4246] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.039 [INFO][4246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0 calico-apiserver-5b5498b48d- calico-apiserver 99077a63-9db7-4cec-a6a2-af9cb28b57de 943 0 2025-08-13 07:08:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b5498b48d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-e-55e36c071a calico-apiserver-5b5498b48d-m2ftn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibb0d6015b61 [] [] }} ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-m2ftn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.039 [INFO][4246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-m2ftn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.103 [INFO][4271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" HandleID="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.104 [INFO][4271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" HandleID="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-e-55e36c071a", "pod":"calico-apiserver-5b5498b48d-m2ftn", "timestamp":"2025-08-13 07:09:06.103932987 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.104 
[INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.143 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.143 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.200 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.212 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.220 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.223 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.227 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.227 [INFO][4271] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.231 [INFO][4271] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6 Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.243 [INFO][4271] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.258 [INFO][4271] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.67/26] block=192.168.61.64/26 handle="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.259 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.67/26] handle="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.259 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
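[editor's note] The two StopPodSandbox teardowns earlier in this span release addresses by handle ID ("k8s-pod-network.<containerID>") and both hit the "Asked to release address but it doesn't exist. Ignoring" warning, which the plugin treats as non-fatal; the RunPodSandbox flows that follow then claim 192.168.61.66 and 192.168.61.67 from the same /26 block. A toy sketch of handle-based release, with abbreviated, made-up handle IDs and a plain map standing in for the datastore:

```go
package main

import (
	"fmt"
	"net/netip"
)

// allocations maps an IPAM handle (e.g. "k8s-pod-network.<containerID>") to
// the addresses claimed under it, so a later CNI DEL can release by handle.
type allocations map[string][]netip.Addr

// releaseByHandle mirrors the behaviour visible in the log: releasing a
// handle with no recorded addresses is only a warning, not an error.
func (a allocations) releaseByHandle(handle string) {
	addrs, ok := a[handle]
	if !ok {
		fmt.Printf("WARNING: asked to release %q but it doesn't exist, ignoring\n", handle)
		return
	}
	delete(a, handle)
	fmt.Printf("released %v for %q\n", addrs, handle)
}

func main() {
	a := allocations{
		"k8s-pod-network.23527dd6": {netip.MustParseAddr("192.168.61.65")},
	}
	a.releaseByHandle("k8s-pod-network.becb155c") // unknown handle: warn and ignore
	a.releaseByHandle("k8s-pod-network.23527dd6") // known handle: release its address
}
```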
Aug 13 07:09:06.321212 containerd[1586]: 2025-08-13 07:09:06.259 [INFO][4271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.67/26] IPv6=[] ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" HandleID="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:06.321905 containerd[1586]: 2025-08-13 07:09:06.265 [INFO][4246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-m2ftn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0", GenerateName:"calico-apiserver-5b5498b48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"99077a63-9db7-4cec-a6a2-af9cb28b57de", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5498b48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"calico-apiserver-5b5498b48d-m2ftn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb0d6015b61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:06.321905 containerd[1586]: 2025-08-13 07:09:06.265 [INFO][4246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.67/32] ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-m2ftn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:06.321905 containerd[1586]: 2025-08-13 07:09:06.265 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb0d6015b61 ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-m2ftn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:06.321905 containerd[1586]: 2025-08-13 07:09:06.284 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-m2ftn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:06.321905 containerd[1586]: 2025-08-13 07:09:06.285 [INFO][4246] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-m2ftn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0", GenerateName:"calico-apiserver-5b5498b48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"99077a63-9db7-4cec-a6a2-af9cb28b57de", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5498b48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6", Pod:"calico-apiserver-5b5498b48d-m2ftn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb0d6015b61", MAC:"c2:09:2b:cb:6d:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:06.321905 containerd[1586]: 2025-08-13 07:09:06.307 [INFO][4246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-m2ftn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:06.456192 containerd[1586]: time="2025-08-13T07:09:06.455817107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:06.456192 containerd[1586]: time="2025-08-13T07:09:06.455917729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:06.456925 containerd[1586]: time="2025-08-13T07:09:06.455941386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:06.462650 containerd[1586]: time="2025-08-13T07:09:06.458955424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:06.510959 containerd[1586]: time="2025-08-13T07:09:06.510911810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df6b56ccd-xwrlr,Uid:c568768c-f9a9-47ff-bd2e-11cbdcfd7596,Namespace:calico-system,Attempt:1,} returns sandbox id \"c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8\"" Aug 13 07:09:06.611618 containerd[1586]: time="2025-08-13T07:09:06.611574564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5498b48d-m2ftn,Uid:99077a63-9db7-4cec-a6a2-af9cb28b57de,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6\"" Aug 13 07:09:06.737221 containerd[1586]: time="2025-08-13T07:09:06.737001937Z" level=info msg="StopPodSandbox for \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\"" Aug 13 07:09:06.738112 containerd[1586]: time="2025-08-13T07:09:06.737924451Z" level=info msg="StopPodSandbox for \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\"" Aug 13 07:09:06.743648 containerd[1586]: time="2025-08-13T07:09:06.737048197Z" level=info msg="StopPodSandbox for \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\"" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.885 [INFO][4430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.892 [INFO][4430] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" iface="eth0" netns="/var/run/netns/cni-ece8a031-8302-d884-0282-ad896f624970" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.893 [INFO][4430] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" iface="eth0" netns="/var/run/netns/cni-ece8a031-8302-d884-0282-ad896f624970" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.897 [INFO][4430] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" iface="eth0" netns="/var/run/netns/cni-ece8a031-8302-d884-0282-ad896f624970" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.898 [INFO][4430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.899 [INFO][4430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.960 [INFO][4459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.960 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.960 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.969 [WARNING][4459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.969 [INFO][4459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.972 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:06.980848 containerd[1586]: 2025-08-13 07:09:06.976 [INFO][4430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:06.985094 containerd[1586]: time="2025-08-13T07:09:06.984929179Z" level=info msg="TearDown network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\" successfully" Aug 13 07:09:06.985094 containerd[1586]: time="2025-08-13T07:09:06.984980755Z" level=info msg="StopPodSandbox for \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\" returns successfully" Aug 13 07:09:06.987711 systemd[1]: run-netns-cni\x2dece8a031\x2d8302\x2dd884\x2d0282\x2dad896f624970.mount: Deactivated successfully. Aug 13 07:09:06.991101 containerd[1586]: time="2025-08-13T07:09:06.991066982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5498b48d-ns55v,Uid:f0685533-643f-4d9d-85a8-1d45cf68c77e,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.889 [INFO][4429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.889 [INFO][4429] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" iface="eth0" netns="/var/run/netns/cni-60f6c7e0-d948-8406-b82c-3f2380323808" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.890 [INFO][4429] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" iface="eth0" netns="/var/run/netns/cni-60f6c7e0-d948-8406-b82c-3f2380323808" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.893 [INFO][4429] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" iface="eth0" netns="/var/run/netns/cni-60f6c7e0-d948-8406-b82c-3f2380323808" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.893 [INFO][4429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.893 [INFO][4429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.960 [INFO][4454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.960 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.974 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.994 [WARNING][4454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.994 [INFO][4454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.997 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:07.007454 containerd[1586]: 2025-08-13 07:09:06.999 [INFO][4429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:07.011298 containerd[1586]: time="2025-08-13T07:09:07.010858448Z" level=info msg="TearDown network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\" successfully" Aug 13 07:09:07.011298 containerd[1586]: time="2025-08-13T07:09:07.010890674Z" level=info msg="StopPodSandbox for \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\" returns successfully" Aug 13 07:09:07.013724 systemd[1]: run-netns-cni\x2d60f6c7e0\x2dd948\x2d8406\x2db82c\x2d3f2380323808.mount: Deactivated successfully. Aug 13 07:09:07.017126 containerd[1586]: time="2025-08-13T07:09:07.017094735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c4567fdc-7fml2,Uid:20826453-f382-41f8-a572-6376d276da48,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.850 [INFO][4423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.850 [INFO][4423] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" iface="eth0" netns="/var/run/netns/cni-37dd7541-93fb-a876-55a6-b59282ae4f55" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.850 [INFO][4423] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" iface="eth0" netns="/var/run/netns/cni-37dd7541-93fb-a876-55a6-b59282ae4f55" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.858 [INFO][4423] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" iface="eth0" netns="/var/run/netns/cni-37dd7541-93fb-a876-55a6-b59282ae4f55" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.858 [INFO][4423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.858 [INFO][4423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.995 [INFO][4448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.995 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:06.997 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:07.018 [WARNING][4448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:07.018 [INFO][4448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:07.022 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:07.038455 containerd[1586]: 2025-08-13 07:09:07.030 [INFO][4423] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:07.039697 containerd[1586]: time="2025-08-13T07:09:07.039589991Z" level=info msg="TearDown network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\" successfully" Aug 13 07:09:07.039697 containerd[1586]: time="2025-08-13T07:09:07.039622687Z" level=info msg="StopPodSandbox for \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\" returns successfully" Aug 13 07:09:07.041633 containerd[1586]: time="2025-08-13T07:09:07.041529743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2gcv6,Uid:bd634c8d-a482-4f95-9b3b-58b3c5eafd08,Namespace:calico-system,Attempt:1,}" Aug 13 07:09:07.385102 systemd-networkd[1223]: calie9d6eb3a1d6: Link UP Aug 13 07:09:07.387703 systemd-networkd[1223]: calie9d6eb3a1d6: Gained carrier Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.090 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.128 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0 calico-apiserver-5b5498b48d- calico-apiserver f0685533-643f-4d9d-85a8-1d45cf68c77e 958 0 2025-08-13 07:08:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b5498b48d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-e-55e36c071a calico-apiserver-5b5498b48d-ns55v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie9d6eb3a1d6 [] [] }} ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-ns55v" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.128 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-ns55v" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.282 [INFO][4501] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" HandleID="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.283 [INFO][4501] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" HandleID="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000379b10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-e-55e36c071a", "pod":"calico-apiserver-5b5498b48d-ns55v", "timestamp":"2025-08-13 07:09:07.281556288 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.283 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.283 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.283 [INFO][4501] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.304 [INFO][4501] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.310 [INFO][4501] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.319 [INFO][4501] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.322 [INFO][4501] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.327 [INFO][4501] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.328 [INFO][4501] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.332 [INFO][4501] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43 Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.340 [INFO][4501] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.357 [INFO][4501] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.68/26] block=192.168.61.64/26 handle="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.357 [INFO][4501] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.68/26] handle="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.357 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:09:07.422877 containerd[1586]: 2025-08-13 07:09:07.357 [INFO][4501] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.68/26] IPv6=[] ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" HandleID="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:07.428581 containerd[1586]: 2025-08-13 07:09:07.370 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-ns55v" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0", GenerateName:"calico-apiserver-5b5498b48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0685533-643f-4d9d-85a8-1d45cf68c77e", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5498b48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"calico-apiserver-5b5498b48d-ns55v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9d6eb3a1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:07.428581 containerd[1586]: 2025-08-13 07:09:07.372 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.68/32] ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-ns55v" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:07.428581 containerd[1586]: 2025-08-13 07:09:07.372 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9d6eb3a1d6 ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-ns55v" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:07.428581 containerd[1586]: 2025-08-13 07:09:07.390 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-ns55v" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:07.428581 containerd[1586]: 2025-08-13 07:09:07.393 [INFO][4469] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-ns55v" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0", GenerateName:"calico-apiserver-5b5498b48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0685533-643f-4d9d-85a8-1d45cf68c77e", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5498b48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43", Pod:"calico-apiserver-5b5498b48d-ns55v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9d6eb3a1d6", MAC:"c2:5a:9c:9f:2c:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:07.428581 containerd[1586]: 2025-08-13 07:09:07.414 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Namespace="calico-apiserver" Pod="calico-apiserver-5b5498b48d-ns55v" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:07.514268 containerd[1586]: time="2025-08-13T07:09:07.513178471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:07.514268 containerd[1586]: time="2025-08-13T07:09:07.513379764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:07.514268 containerd[1586]: time="2025-08-13T07:09:07.513399191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:07.514268 containerd[1586]: time="2025-08-13T07:09:07.513812839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:07.538046 systemd-networkd[1223]: cali9f73c591efe: Link UP Aug 13 07:09:07.541647 systemd-networkd[1223]: cali9f73c591efe: Gained carrier Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.162 [INFO][4479] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.193 [INFO][4479] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0 calico-apiserver-7c4567fdc- calico-apiserver 20826453-f382-41f8-a572-6376d276da48 959 0 2025-08-13 07:08:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c4567fdc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-e-55e36c071a calico-apiserver-7c4567fdc-7fml2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f73c591efe [] [] }} ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-7fml2" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.193 [INFO][4479] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-7fml2" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.294 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" HandleID="k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.294 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" HandleID="k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-e-55e36c071a", "pod":"calico-apiserver-7c4567fdc-7fml2", "timestamp":"2025-08-13 07:09:07.294223462 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.295 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.358 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.358 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.412 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.427 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.450 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.456 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.462 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.463 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.467 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.478 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.493 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.69/26] block=192.168.61.64/26 handle="k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.495 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.69/26] handle="k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.495 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:09:07.592442 containerd[1586]: 2025-08-13 07:09:07.495 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.69/26] IPv6=[] ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" HandleID="k8s-pod-network.ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.599674 containerd[1586]: 2025-08-13 07:09:07.528 [INFO][4479] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-7fml2" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0", GenerateName:"calico-apiserver-7c4567fdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"20826453-f382-41f8-a572-6376d276da48", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c4567fdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"calico-apiserver-7c4567fdc-7fml2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f73c591efe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:07.599674 containerd[1586]: 2025-08-13 07:09:07.529 [INFO][4479] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.69/32] ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-7fml2" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.599674 containerd[1586]: 2025-08-13 07:09:07.530 [INFO][4479] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f73c591efe ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-7fml2" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.599674 containerd[1586]: 2025-08-13 07:09:07.540 [INFO][4479] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-7fml2" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.599674 containerd[1586]: 2025-08-13 07:09:07.545 [INFO][4479] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-7fml2" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0", GenerateName:"calico-apiserver-7c4567fdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"20826453-f382-41f8-a572-6376d276da48", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c4567fdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba", Pod:"calico-apiserver-7c4567fdc-7fml2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f73c591efe", MAC:"e6:4d:d2:37:95:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:07.599674 containerd[1586]: 2025-08-13 07:09:07.580 [INFO][4479] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba" Namespace="calico-apiserver" Pod="calico-apiserver-7c4567fdc-7fml2" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:07.738932 containerd[1586]: time="2025-08-13T07:09:07.737487253Z" level=info msg="StopPodSandbox for \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\"" Aug 13 07:09:07.796252 containerd[1586]: time="2025-08-13T07:09:07.791615116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:07.796252 containerd[1586]: time="2025-08-13T07:09:07.791706431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:07.796252 containerd[1586]: time="2025-08-13T07:09:07.791729301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:07.796252 containerd[1586]: time="2025-08-13T07:09:07.791904480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:07.802938 systemd-networkd[1223]: cali0b2acf9bc98: Link UP Aug 13 07:09:07.810186 systemd-networkd[1223]: cali0b2acf9bc98: Gained carrier Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.172 [INFO][4487] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.201 [INFO][4487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0 csi-node-driver- calico-system bd634c8d-a482-4f95-9b3b-58b3c5eafd08 957 0 2025-08-13 07:08:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-e-55e36c071a csi-node-driver-2gcv6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0b2acf9bc98 [] [] }} ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Namespace="calico-system" Pod="csi-node-driver-2gcv6" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.201 [INFO][4487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Namespace="calico-system" Pod="csi-node-driver-2gcv6" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.319 [INFO][4515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" HandleID="k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.319 [INFO][4515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" HandleID="k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-e-55e36c071a", "pod":"csi-node-driver-2gcv6", "timestamp":"2025-08-13 07:09:07.319473432 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.319 [INFO][4515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.501 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.502 [INFO][4515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.552 [INFO][4515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.582 [INFO][4515] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.612 [INFO][4515] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.641 [INFO][4515] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.659 [INFO][4515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.659 [INFO][4515] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.691 [INFO][4515] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.731 [INFO][4515] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.754 [INFO][4515] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.70/26] block=192.168.61.64/26 handle="k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.754 [INFO][4515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.70/26] handle="k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.754 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:09:07.869818 containerd[1586]: 2025-08-13 07:09:07.754 [INFO][4515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.70/26] IPv6=[] ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" HandleID="k8s-pod-network.b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.871571 containerd[1586]: 2025-08-13 07:09:07.772 [INFO][4487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Namespace="calico-system" Pod="csi-node-driver-2gcv6" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd634c8d-a482-4f95-9b3b-58b3c5eafd08", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"csi-node-driver-2gcv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b2acf9bc98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:07.871571 containerd[1586]: 2025-08-13 07:09:07.772 [INFO][4487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.70/32] ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Namespace="calico-system" Pod="csi-node-driver-2gcv6" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.871571 containerd[1586]: 2025-08-13 07:09:07.772 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b2acf9bc98 ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Namespace="calico-system" Pod="csi-node-driver-2gcv6" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.871571 containerd[1586]: 2025-08-13 07:09:07.812 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Namespace="calico-system" Pod="csi-node-driver-2gcv6" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.871571 containerd[1586]: 2025-08-13 07:09:07.814 [INFO][4487] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Namespace="calico-system" Pod="csi-node-driver-2gcv6" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd634c8d-a482-4f95-9b3b-58b3c5eafd08", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb", Pod:"csi-node-driver-2gcv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b2acf9bc98", MAC:"7e:9b:d5:fd:ab:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:07.871571 containerd[1586]: 2025-08-13 07:09:07.845 [INFO][4487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb" Namespace="calico-system" Pod="csi-node-driver-2gcv6" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:07.940600 systemd[1]: run-netns-cni\x2d37dd7541\x2d93fb\x2da876\x2d55a6\x2db59282ae4f55.mount: Deactivated successfully. Aug 13 07:09:07.958279 containerd[1586]: time="2025-08-13T07:09:07.942818387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:07.958279 containerd[1586]: time="2025-08-13T07:09:07.942898943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:07.958279 containerd[1586]: time="2025-08-13T07:09:07.942916082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:07.958279 containerd[1586]: time="2025-08-13T07:09:07.944743576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:08.044472 containerd[1586]: time="2025-08-13T07:09:08.044295541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5498b48d-ns55v,Uid:f0685533-643f-4d9d-85a8-1d45cf68c77e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43\"" Aug 13 07:09:08.119200 systemd-networkd[1223]: cali255363b998c: Gained IPv6LL Aug 13 07:09:08.195593 containerd[1586]: time="2025-08-13T07:09:08.195517877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2gcv6,Uid:bd634c8d-a482-4f95-9b3b-58b3c5eafd08,Namespace:calico-system,Attempt:1,} returns sandbox id \"b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb\"" Aug 13 07:09:08.226650 containerd[1586]: time="2025-08-13T07:09:08.226593302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c4567fdc-7fml2,Uid:20826453-f382-41f8-a572-6376d276da48,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba\"" Aug 13 07:09:08.245939 systemd-networkd[1223]: calibb0d6015b61: Gained IPv6LL Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.142 [INFO][4617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.145 [INFO][4617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" iface="eth0" netns="/var/run/netns/cni-49f7a7b7-ce09-2ec0-20ce-c97399aa3db3" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.147 [INFO][4617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" iface="eth0" netns="/var/run/netns/cni-49f7a7b7-ce09-2ec0-20ce-c97399aa3db3" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.148 [INFO][4617] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" iface="eth0" netns="/var/run/netns/cni-49f7a7b7-ce09-2ec0-20ce-c97399aa3db3" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.148 [INFO][4617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.149 [INFO][4617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.261 [INFO][4688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.262 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.262 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
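The ipam_plugin entries above bracket every assignment and release with "About to acquire host-wide IPAM lock" / "Acquired" / "Released". A minimal sketch of that bracketing, assuming a simple in-memory handle map rather than Calico's datastore-backed IPAM (names and the shortened IDs below are illustrative only); it also shows why a release for a handle that was never recorded can be logged as a warning and ignored, as the WARNING entry that follows does:

```go
// Illustrative only: a mutex-guarded allocator mimicking the
// "host-wide IPAM lock" bracketing seen in the ipam_plugin entries.
// Calico's real IPAM serializes through its datastore; the types and
// shortened handle IDs here are assumptions for the sketch.
package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu      sync.Mutex        // the "host-wide IPAM lock"
	handles map[string]string // handleID -> assigned address
}

func (h *hostIPAM) Assign(handleID, ip string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.handles[handleID] = ip
}

// Release tolerates unknown handles, mirroring the
// "Asked to release address but it doesn't exist. Ignoring" warning.
func (h *hostIPAM) Release(handleID string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	if _, ok := h.handles[handleID]; !ok {
		fmt.Printf("WARNING: handle %s not found, ignoring\n", handleID)
		return
	}
	delete(h.handles, handleID)
}

func main() {
	ipam := &hostIPAM{handles: map[string]string{}}
	ipam.Assign("k8s-pod-network.b53b21dc3027", "192.168.61.70") // IDs shortened
	ipam.Release("k8s-pod-network.5cc5813dad35")                 // never assigned here -> warning
	ipam.Release("k8s-pod-network.b53b21dc3027")
}
```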
Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.274 [WARNING][4688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.274 [INFO][4688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.277 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:08.297824 containerd[1586]: 2025-08-13 07:09:08.287 [INFO][4617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:08.297824 containerd[1586]: time="2025-08-13T07:09:08.296647649Z" level=info msg="TearDown network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\" successfully" Aug 13 07:09:08.297824 containerd[1586]: time="2025-08-13T07:09:08.296757529Z" level=info msg="StopPodSandbox for \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\" returns successfully" Aug 13 07:09:08.301531 containerd[1586]: time="2025-08-13T07:09:08.300461751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-v27pf,Uid:30df6160-80aa-4f0d-92aa-0f0db6a04acd,Namespace:calico-system,Attempt:1,}" Aug 13 07:09:08.542319 systemd-networkd[1223]: caliac31d133648: Link UP Aug 13 07:09:08.547488 systemd-networkd[1223]: caliac31d133648: Gained carrier Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.376 [INFO][4707] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.403 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0 goldmane-58fd7646b9- calico-system 30df6160-80aa-4f0d-92aa-0f0db6a04acd 973 0 2025-08-13 07:08:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-e-55e36c071a goldmane-58fd7646b9-v27pf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliac31d133648 [] [] }} ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Namespace="calico-system" Pod="goldmane-58fd7646b9-v27pf" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.403 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Namespace="calico-system" Pod="goldmane-58fd7646b9-v27pf" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.464 [INFO][4721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" HandleID="k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.466 [INFO][4721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" HandleID="k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003da7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-e-55e36c071a", "pod":"goldmane-58fd7646b9-v27pf", "timestamp":"2025-08-13 07:09:08.46433372 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.466 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.466 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.466 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.478 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.488 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.500 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.506 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.512 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.512 [INFO][4721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.516 [INFO][4721] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.523 [INFO][4721] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.535 [INFO][4721] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.71/26] block=192.168.61.64/26 handle="k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" 
host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.535 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.71/26] handle="k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.535 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:08.573099 containerd[1586]: 2025-08-13 07:09:08.535 [INFO][4721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.71/26] IPv6=[] ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" HandleID="k8s-pod-network.035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.575202 containerd[1586]: 2025-08-13 07:09:08.539 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Namespace="calico-system" Pod="goldmane-58fd7646b9-v27pf" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"30df6160-80aa-4f0d-92aa-0f0db6a04acd", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"goldmane-58fd7646b9-v27pf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac31d133648", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:08.575202 containerd[1586]: 2025-08-13 07:09:08.539 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.71/32] ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Namespace="calico-system" Pod="goldmane-58fd7646b9-v27pf" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.575202 containerd[1586]: 2025-08-13 07:09:08.539 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac31d133648 ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Namespace="calico-system" Pod="goldmane-58fd7646b9-v27pf" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.575202 containerd[1586]: 2025-08-13 07:09:08.547 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Namespace="calico-system" Pod="goldmane-58fd7646b9-v27pf" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.575202 containerd[1586]: 2025-08-13 07:09:08.548 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Namespace="calico-system" Pod="goldmane-58fd7646b9-v27pf" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"30df6160-80aa-4f0d-92aa-0f0db6a04acd", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e", Pod:"goldmane-58fd7646b9-v27pf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac31d133648", MAC:"5e:35:da:c6:a0:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:08.575202 containerd[1586]: 2025-08-13 07:09:08.568 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e" Namespace="calico-system" Pod="goldmane-58fd7646b9-v27pf" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:08.617694 containerd[1586]: time="2025-08-13T07:09:08.615817770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:08.617694 containerd[1586]: time="2025-08-13T07:09:08.615912354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:08.617694 containerd[1586]: time="2025-08-13T07:09:08.615929404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:08.617694 containerd[1586]: time="2025-08-13T07:09:08.616087248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:08.701083 containerd[1586]: time="2025-08-13T07:09:08.701017870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-v27pf,Uid:30df6160-80aa-4f0d-92aa-0f0db6a04acd,Namespace:calico-system,Attempt:1,} returns sandbox id \"035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e\"" Aug 13 07:09:08.705859 containerd[1586]: time="2025-08-13T07:09:08.705728262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:08.707211 containerd[1586]: time="2025-08-13T07:09:08.707151600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:09:08.708659 containerd[1586]: time="2025-08-13T07:09:08.708480872Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:08.714228 containerd[1586]: time="2025-08-13T07:09:08.714159576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:08.716459 containerd[1586]: time="2025-08-13T07:09:08.715524931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.761093871s" Aug 13 07:09:08.716459 containerd[1586]: time="2025-08-13T07:09:08.716023650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:09:08.718229 containerd[1586]: time="2025-08-13T07:09:08.717879124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:09:08.726356 containerd[1586]: time="2025-08-13T07:09:08.726184958Z" level=info msg="CreateContainer within sandbox \"23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:09:08.737038 containerd[1586]: time="2025-08-13T07:09:08.736905759Z" level=info msg="CreateContainer within sandbox \"23527dd639baa61fa4e2eb59dcf0bea785e81a927f3db0d360927242cef82e44\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2feb08bef6d413e46ba801d7824fb5356cbc4735afb448d241c8f19e4a02d025\"" Aug 13 07:09:08.739395 containerd[1586]: time="2025-08-13T07:09:08.737972526Z" level=info msg="StartContainer for \"2feb08bef6d413e46ba801d7824fb5356cbc4735afb448d241c8f19e4a02d025\"" Aug 13 07:09:08.753630 kubelet[2669]: I0813 07:09:08.752159 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:08.931415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548373745.mount: Deactivated successfully. Aug 13 07:09:08.933091 systemd[1]: run-netns-cni\x2d49f7a7b7\x2dce09\x2d2ec0\x2d20ce\x2dc97399aa3db3.mount: Deactivated successfully. 
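The systemd mount units above, "var-lib-containerd-tmpmounts-containerd\x2dmount1548373745.mount" and "run-netns-cni\x2d49f7a7b7\x2dce09\x2d2ec0\x2d20ce\x2dc97399aa3db3.mount", use systemd's unit-name escaping: a literal '-' inside a path component becomes "\x2d" so that plain '-' can stand for the '/' separators. A small decoder for just the \xNN escapes seen in these lines (the full rules are whatever systemd-escape(1) implements; this is only a sketch):

```go
// Decode the \xNN escapes systemd uses in unit names such as
// "run-netns-cni\x2d49f7a7b7\x2d...mount". Only the hex escapes visible
// in the log are handled here.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnit(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2d49f7a7b7\x2dce09\x2d2ec0\x2d20ce\x2dc97399aa3db3.mount`
	fmt.Println(unescapeUnit(unit))
	// run-netns-cni-49f7a7b7-ce09-2ec0-20ce-c97399aa3db3.mount
	// i.e. the mount backing the netns path logged earlier:
	// /var/run/netns/cni-49f7a7b7-ce09-2ec0-20ce-c97399aa3db3
}
```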
Aug 13 07:09:08.953901 systemd-networkd[1223]: cali9f73c591efe: Gained IPv6LL Aug 13 07:09:08.965055 containerd[1586]: time="2025-08-13T07:09:08.964968411Z" level=info msg="StartContainer for \"2feb08bef6d413e46ba801d7824fb5356cbc4735afb448d241c8f19e4a02d025\" returns successfully" Aug 13 07:09:09.014042 systemd-networkd[1223]: calie9d6eb3a1d6: Gained IPv6LL Aug 13 07:09:09.846000 systemd-networkd[1223]: caliac31d133648: Gained IPv6LL Aug 13 07:09:09.846421 systemd-networkd[1223]: cali0b2acf9bc98: Gained IPv6LL Aug 13 07:09:10.737620 containerd[1586]: time="2025-08-13T07:09:10.737281771Z" level=info msg="StopPodSandbox for \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\"" Aug 13 07:09:10.738755 containerd[1586]: time="2025-08-13T07:09:10.738466331Z" level=info msg="StopPodSandbox for \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\"" Aug 13 07:09:10.842803 kubelet[2669]: I0813 07:09:10.842478 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57898f67b6-cpcvk" podStartSLOduration=3.433272583 podStartE2EDuration="8.842449769s" podCreationTimestamp="2025-08-13 07:09:02 +0000 UTC" firstStartedPulling="2025-08-13 07:09:03.308419793 +0000 UTC m=+40.711583222" lastFinishedPulling="2025-08-13 07:09:08.717596991 +0000 UTC m=+46.120760408" observedRunningTime="2025-08-13 07:09:09.214947197 +0000 UTC m=+46.618110634" watchObservedRunningTime="2025-08-13 07:09:10.842449769 +0000 UTC m=+48.245613206" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.841 [INFO][4928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.845 [INFO][4928] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" iface="eth0" netns="/var/run/netns/cni-98a825f2-876d-7fbe-5e74-390fb8bc6dee" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.846 [INFO][4928] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" iface="eth0" netns="/var/run/netns/cni-98a825f2-876d-7fbe-5e74-390fb8bc6dee" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.848 [INFO][4928] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" iface="eth0" netns="/var/run/netns/cni-98a825f2-876d-7fbe-5e74-390fb8bc6dee" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.848 [INFO][4928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.848 [INFO][4928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.926 [INFO][4939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.926 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.927 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.936 [WARNING][4939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.936 [INFO][4939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.940 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:10.956319 containerd[1586]: 2025-08-13 07:09:10.950 [INFO][4928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:10.959845 containerd[1586]: time="2025-08-13T07:09:10.957688926Z" level=info msg="TearDown network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\" successfully" Aug 13 07:09:10.959845 containerd[1586]: time="2025-08-13T07:09:10.957879624Z" level=info msg="StopPodSandbox for \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\" returns successfully" Aug 13 07:09:10.961963 systemd[1]: run-netns-cni\x2d98a825f2\x2d876d\x2d7fbe\x2d5e74\x2d390fb8bc6dee.mount: Deactivated successfully. 
Aug 13 07:09:10.973829 kubelet[2669]: E0813 07:09:10.972868 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:10.973967 containerd[1586]: time="2025-08-13T07:09:10.973425933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ctkfn,Uid:fe78265f-f9af-4623-a337-884c31c36ef2,Namespace:kube-system,Attempt:1,}" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.837 [INFO][4924] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.840 [INFO][4924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" iface="eth0" netns="/var/run/netns/cni-140a54c9-6cbc-5589-5efa-d5a49696ba49" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.840 [INFO][4924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" iface="eth0" netns="/var/run/netns/cni-140a54c9-6cbc-5589-5efa-d5a49696ba49" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.840 [INFO][4924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" iface="eth0" netns="/var/run/netns/cni-140a54c9-6cbc-5589-5efa-d5a49696ba49" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.840 [INFO][4924] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.840 [INFO][4924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.934 [INFO][4937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.935 [INFO][4937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.940 [INFO][4937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.960 [WARNING][4937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.960 [INFO][4937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.963 [INFO][4937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:10.978500 containerd[1586]: 2025-08-13 07:09:10.975 [INFO][4924] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:10.986459 containerd[1586]: time="2025-08-13T07:09:10.979139544Z" level=info msg="TearDown network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\" successfully" Aug 13 07:09:10.986459 containerd[1586]: time="2025-08-13T07:09:10.979175600Z" level=info msg="StopPodSandbox for \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\" returns successfully" Aug 13 07:09:10.986459 containerd[1586]: time="2025-08-13T07:09:10.982513011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5njd8,Uid:72e07df5-008f-4b2d-94ac-56f5a048d8f4,Namespace:kube-system,Attempt:1,}" Aug 13 07:09:10.986547 kubelet[2669]: E0813 07:09:10.979544 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:10.986758 systemd[1]: run-netns-cni\x2d140a54c9\x2d6cbc\x2d5589\x2d5efa\x2dd5a49696ba49.mount: Deactivated successfully. 
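The kubelet dns.go "Nameserver limits exceeded" errors above come from the resolver limit of three nameservers per resolv.conf: extra entries are dropped and the applied line is logged (here "67.207.67.2 67.207.67.3 67.207.67.2", which still carries a duplicate). A minimal sketch of that cap, assuming the limit of 3; the fourth input address below is hypothetical, since only the applied line appears in the log:

```go
// Sketch of the cap behind kubelet's "Nameserver limits exceeded" message:
// the resolver supports at most 3 nameservers, so extras are dropped and a
// warning is logged. Illustrative only, not kubelet's actual dns.go code.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3

func applyNameserverLimit(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	applied := servers[:maxNameservers]
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
		strings.Join(applied, " "))
	return applied
}

func main() {
	// Hypothetical host resolv.conf contents; only the applied line
	// "67.207.67.2 67.207.67.3 67.207.67.2" is visible in the log.
	applyNameserverLimit([]string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.4"})
}
```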
Aug 13 07:09:11.307568 systemd-networkd[1223]: cali4108fb091d7: Link UP Aug 13 07:09:11.307883 systemd-networkd[1223]: cali4108fb091d7: Gained carrier Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.088 [INFO][4961] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.116 [INFO][4961] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0 coredns-7c65d6cfc9- kube-system 72e07df5-008f-4b2d-94ac-56f5a048d8f4 997 0 2025-08-13 07:08:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-e-55e36c071a coredns-7c65d6cfc9-5njd8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4108fb091d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5njd8" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.116 [INFO][4961] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5njd8" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.207 [INFO][4979] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" HandleID="k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.208 [INFO][4979] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" HandleID="k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5da0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-e-55e36c071a", "pod":"coredns-7c65d6cfc9-5njd8", "timestamp":"2025-08-13 07:09:11.207394008 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.208 [INFO][4979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.208 [INFO][4979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.208 [INFO][4979] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.226 [INFO][4979] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.244 [INFO][4979] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.254 [INFO][4979] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.257 [INFO][4979] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.261 [INFO][4979] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.261 [INFO][4979] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.266 [INFO][4979] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.274 [INFO][4979] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.285 [INFO][4979] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.72/26] block=192.168.61.64/26 handle="k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.285 [INFO][4979] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.72/26] handle="k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.285 [INFO][4979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
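The IPAM entries above and earlier claim consecutive addresses (.70, .71, .72, .73) from the node's affine block 192.168.61.64/26. The arithmetic behind those claims is simple: a /26 holds 64 addresses and ordinal n maps to the masked base plus n. The sketch below shows only that ordinal math; Calico's real allocator additionally tracks free and reserved ordinals inside the block:

```go
// Ordinal arithmetic for the affine block 192.168.61.64/26 seen in the
// ipam entries. Illustrative only; not Calico's allocator.
package main

import (
	"fmt"
	"net/netip"
)

func nthInBlock(block netip.Prefix, n int) netip.Addr {
	addr := block.Masked().Addr()
	for i := 0; i < n; i++ {
		addr = addr.Next()
	}
	return addr
}

func main() {
	block := netip.MustParsePrefix("192.168.61.64/26")
	size := 1 << (32 - block.Bits()) // 64 addresses in a /26
	fmt.Println("block size:", size)
	fmt.Println("ordinal 8 :", nthInBlock(block, 8)) // 192.168.61.72, the coredns-7c65d6cfc9-5njd8 address
}
```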
Aug 13 07:09:11.324986 containerd[1586]: 2025-08-13 07:09:11.285 [INFO][4979] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.72/26] IPv6=[] ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" HandleID="k8s-pod-network.0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:11.326045 containerd[1586]: 2025-08-13 07:09:11.290 [INFO][4961] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5njd8" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"72e07df5-008f-4b2d-94ac-56f5a048d8f4", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"coredns-7c65d6cfc9-5njd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4108fb091d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:11.326045 containerd[1586]: 2025-08-13 07:09:11.290 [INFO][4961] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.72/32] ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5njd8" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:11.326045 containerd[1586]: 2025-08-13 07:09:11.290 [INFO][4961] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4108fb091d7 ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5njd8" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:11.326045 containerd[1586]: 2025-08-13 07:09:11.303 [INFO][4961] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-5njd8" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:11.326045 containerd[1586]: 2025-08-13 07:09:11.304 [INFO][4961] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5njd8" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"72e07df5-008f-4b2d-94ac-56f5a048d8f4", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f", Pod:"coredns-7c65d6cfc9-5njd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4108fb091d7", MAC:"4e:26:7e:42:af:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:11.326045 containerd[1586]: 2025-08-13 07:09:11.319 [INFO][4961] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5njd8" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:11.422247 systemd-networkd[1223]: cali6d7ee94f077: Link UP Aug 13 07:09:11.422571 systemd-networkd[1223]: cali6d7ee94f077: Gained carrier Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.121 [INFO][4950] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.155 [INFO][4950] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0 coredns-7c65d6cfc9- kube-system fe78265f-f9af-4623-a337-884c31c36ef2 996 0 2025-08-13 07:08:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} 
{k8s ci-4081.3.5-e-55e36c071a coredns-7c65d6cfc9-ctkfn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6d7ee94f077 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ctkfn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.155 [INFO][4950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ctkfn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.269 [INFO][4985] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" HandleID="k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.269 [INFO][4985] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" HandleID="k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001f38c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-e-55e36c071a", "pod":"coredns-7c65d6cfc9-ctkfn", "timestamp":"2025-08-13 07:09:11.269435997 +0000 UTC"}, Hostname:"ci-4081.3.5-e-55e36c071a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.269 [INFO][4985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.285 [INFO][4985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
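The coredns WorkloadEndpoint dumps above appear to be Go's %#v-style rendering, which prints unsigned integers in hex, so the container ports show up as Port:0x35 and Port:0x23c1. Those are the usual DNS and CoreDNS metrics ports; a two-line check:

```go
// 0x35 is 53 (dns / dns-tcp) and 0x23c1 is 9153 (the coredns metrics port),
// matching the named ports in the endpoint dumps above.
package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153
}
```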
Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.285 [INFO][4985] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-e-55e36c071a' Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.327 [INFO][4985] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.348 [INFO][4985] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.357 [INFO][4985] ipam/ipam.go 511: Trying affinity for 192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.360 [INFO][4985] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.364 [INFO][4985] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.64/26 host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.364 [INFO][4985] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.61.64/26 handle="k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.367 [INFO][4985] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7 Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.384 [INFO][4985] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.61.64/26 handle="k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.398 [INFO][4985] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.61.73/26] block=192.168.61.64/26 handle="k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.398 [INFO][4985] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.73/26] handle="k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" host="ci-4081.3.5-e-55e36c071a" Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.399 [INFO][4985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:09:11.467379 containerd[1586]: 2025-08-13 07:09:11.399 [INFO][4985] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.61.73/26] IPv6=[] ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" HandleID="k8s-pod-network.6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:11.468109 containerd[1586]: 2025-08-13 07:09:11.414 [INFO][4950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ctkfn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe78265f-f9af-4623-a337-884c31c36ef2", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"", Pod:"coredns-7c65d6cfc9-ctkfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d7ee94f077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:11.468109 containerd[1586]: 2025-08-13 07:09:11.415 [INFO][4950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.73/32] ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ctkfn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:11.468109 containerd[1586]: 2025-08-13 07:09:11.415 [INFO][4950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d7ee94f077 ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ctkfn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:11.468109 containerd[1586]: 2025-08-13 07:09:11.423 [INFO][4950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-ctkfn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:11.468109 containerd[1586]: 2025-08-13 07:09:11.425 [INFO][4950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ctkfn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe78265f-f9af-4623-a337-884c31c36ef2", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7", Pod:"coredns-7c65d6cfc9-ctkfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d7ee94f077", MAC:"76:03:08:3b:42:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:11.468109 containerd[1586]: 2025-08-13 07:09:11.454 [INFO][4950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ctkfn" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:11.497819 containerd[1586]: time="2025-08-13T07:09:11.494345392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:11.497819 containerd[1586]: time="2025-08-13T07:09:11.495676803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:11.497819 containerd[1586]: time="2025-08-13T07:09:11.495717096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:11.497819 containerd[1586]: time="2025-08-13T07:09:11.496447663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:11.558971 containerd[1586]: time="2025-08-13T07:09:11.558241879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:09:11.558971 containerd[1586]: time="2025-08-13T07:09:11.558313249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:09:11.558971 containerd[1586]: time="2025-08-13T07:09:11.558343331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:11.561030 containerd[1586]: time="2025-08-13T07:09:11.558615098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:09:11.678985 containerd[1586]: time="2025-08-13T07:09:11.678864491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5njd8,Uid:72e07df5-008f-4b2d-94ac-56f5a048d8f4,Namespace:kube-system,Attempt:1,} returns sandbox id \"0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f\"" Aug 13 07:09:11.681420 kubelet[2669]: E0813 07:09:11.681146 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:11.692342 containerd[1586]: time="2025-08-13T07:09:11.691859467Z" level=info msg="CreateContainer within sandbox \"0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:09:11.714658 containerd[1586]: time="2025-08-13T07:09:11.714613099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ctkfn,Uid:fe78265f-f9af-4623-a337-884c31c36ef2,Namespace:kube-system,Attempt:1,} returns sandbox id \"6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7\"" Aug 13 07:09:11.716122 kubelet[2669]: E0813 07:09:11.716086 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:11.727456 containerd[1586]: time="2025-08-13T07:09:11.726910517Z" level=info msg="CreateContainer within sandbox \"6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:09:11.752622 containerd[1586]: time="2025-08-13T07:09:11.750861515Z" level=info msg="CreateContainer within sandbox \"0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed8149edd7214bb65e7847787bcd9e2535de9b7c0ffaa927a427748651e33c24\"" Aug 13 07:09:11.757883 containerd[1586]: time="2025-08-13T07:09:11.755998159Z" level=info msg="StartContainer for \"ed8149edd7214bb65e7847787bcd9e2535de9b7c0ffaa927a427748651e33c24\"" Aug 13 07:09:11.776876 containerd[1586]: time="2025-08-13T07:09:11.776818364Z" level=info msg="CreateContainer within sandbox \"6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"102616def86e44a76de4b27e5039353b7155483b51a90736c929b447d37a7005\"" Aug 13 07:09:11.777767 containerd[1586]: time="2025-08-13T07:09:11.777738622Z" level=info msg="StartContainer for 
\"102616def86e44a76de4b27e5039353b7155483b51a90736c929b447d37a7005\"" Aug 13 07:09:11.878751 containerd[1586]: time="2025-08-13T07:09:11.878545689Z" level=info msg="StartContainer for \"ed8149edd7214bb65e7847787bcd9e2535de9b7c0ffaa927a427748651e33c24\" returns successfully" Aug 13 07:09:11.902904 containerd[1586]: time="2025-08-13T07:09:11.902571761Z" level=info msg="StartContainer for \"102616def86e44a76de4b27e5039353b7155483b51a90736c929b447d37a7005\" returns successfully" Aug 13 07:09:12.261958 kubelet[2669]: E0813 07:09:12.260449 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:12.273807 kubelet[2669]: E0813 07:09:12.273322 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:12.286507 kubelet[2669]: I0813 07:09:12.286446 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5njd8" podStartSLOduration=43.286425036 podStartE2EDuration="43.286425036s" podCreationTimestamp="2025-08-13 07:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:12.28609827 +0000 UTC m=+49.689261707" watchObservedRunningTime="2025-08-13 07:09:12.286425036 +0000 UTC m=+49.689588473" Aug 13 07:09:12.339534 containerd[1586]: time="2025-08-13T07:09:12.337918396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:12.339534 containerd[1586]: time="2025-08-13T07:09:12.338629414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:09:12.339534 containerd[1586]: time="2025-08-13T07:09:12.339505046Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:12.345045 kubelet[2669]: I0813 07:09:12.343946 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ctkfn" podStartSLOduration=43.343453346 podStartE2EDuration="43.343453346s" podCreationTimestamp="2025-08-13 07:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:09:12.336406588 +0000 UTC m=+49.739570026" watchObservedRunningTime="2025-08-13 07:09:12.343453346 +0000 UTC m=+49.746616785" Aug 13 07:09:12.362519 containerd[1586]: time="2025-08-13T07:09:12.362469717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:12.364223 containerd[1586]: time="2025-08-13T07:09:12.364082053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 
3.64567728s" Aug 13 07:09:12.364223 containerd[1586]: time="2025-08-13T07:09:12.364123021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:09:12.367817 containerd[1586]: time="2025-08-13T07:09:12.366055591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:09:12.401717 containerd[1586]: time="2025-08-13T07:09:12.401343868Z" level=info msg="CreateContainer within sandbox \"c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:09:12.417681 containerd[1586]: time="2025-08-13T07:09:12.416465025Z" level=info msg="CreateContainer within sandbox \"c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"984ad3b13aabce16ca68b7754445d33bf462b5b7b9fbb4daa9dcce2d963b60dd\"" Aug 13 07:09:12.422814 containerd[1586]: time="2025-08-13T07:09:12.422459995Z" level=info msg="StartContainer for \"984ad3b13aabce16ca68b7754445d33bf462b5b7b9fbb4daa9dcce2d963b60dd\"" Aug 13 07:09:12.648739 containerd[1586]: time="2025-08-13T07:09:12.648178803Z" level=info msg="StartContainer for \"984ad3b13aabce16ca68b7754445d33bf462b5b7b9fbb4daa9dcce2d963b60dd\" returns successfully" Aug 13 07:09:12.918021 systemd-networkd[1223]: cali4108fb091d7: Gained IPv6LL Aug 13 07:09:13.109988 systemd-networkd[1223]: cali6d7ee94f077: Gained IPv6LL Aug 13 07:09:13.278229 kubelet[2669]: E0813 07:09:13.276563 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:13.278229 kubelet[2669]: E0813 07:09:13.277909 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:13.354280 kubelet[2669]: I0813 07:09:13.354053 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5df6b56ccd-xwrlr" podStartSLOduration=22.502766399 podStartE2EDuration="28.353852479s" podCreationTimestamp="2025-08-13 07:08:45 +0000 UTC" firstStartedPulling="2025-08-13 07:09:06.514203954 +0000 UTC m=+43.917367371" lastFinishedPulling="2025-08-13 07:09:12.365290022 +0000 UTC m=+49.768453451" observedRunningTime="2025-08-13 07:09:13.300088118 +0000 UTC m=+50.703251557" watchObservedRunningTime="2025-08-13 07:09:13.353852479 +0000 UTC m=+50.757015917" Aug 13 07:09:13.432593 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:13.429920 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:13.429966 systemd-resolved[1480]: Flushed all caches. 
Aug 13 07:09:13.936271 kubelet[2669]: I0813 07:09:13.936091 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:13.945411 kubelet[2669]: E0813 07:09:13.943859 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:14.288228 kubelet[2669]: E0813 07:09:14.286343 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:14.288228 kubelet[2669]: E0813 07:09:14.287327 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:14.289471 kubelet[2669]: E0813 07:09:14.288996 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:14.829815 kernel: bpftool[5335]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:09:15.303887 systemd-networkd[1223]: vxlan.calico: Link UP Aug 13 07:09:15.303897 systemd-networkd[1223]: vxlan.calico: Gained carrier Aug 13 07:09:15.478835 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:15.478891 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:15.478899 systemd-resolved[1480]: Flushed all caches. Aug 13 07:09:15.766861 containerd[1586]: time="2025-08-13T07:09:15.766074919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:15.769911 containerd[1586]: time="2025-08-13T07:09:15.769850064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:09:15.781971 containerd[1586]: time="2025-08-13T07:09:15.781842216Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:15.786340 containerd[1586]: time="2025-08-13T07:09:15.785801168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:15.789351 containerd[1586]: time="2025-08-13T07:09:15.788122763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.422024895s" Aug 13 07:09:15.789351 containerd[1586]: time="2025-08-13T07:09:15.788172788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:09:15.812194 containerd[1586]: time="2025-08-13T07:09:15.811579220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:09:15.813114 containerd[1586]: time="2025-08-13T07:09:15.813082212Z" level=info 
msg="CreateContainer within sandbox \"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:09:15.841629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322589952.mount: Deactivated successfully. Aug 13 07:09:15.854409 containerd[1586]: time="2025-08-13T07:09:15.854273316Z" level=info msg="CreateContainer within sandbox \"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8\"" Aug 13 07:09:15.860302 containerd[1586]: time="2025-08-13T07:09:15.857001923Z" level=info msg="StartContainer for \"50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8\"" Aug 13 07:09:16.018262 containerd[1586]: time="2025-08-13T07:09:16.017713437Z" level=info msg="StartContainer for \"50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8\" returns successfully" Aug 13 07:09:16.231702 containerd[1586]: time="2025-08-13T07:09:16.231576088Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:16.236814 containerd[1586]: time="2025-08-13T07:09:16.236482411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:09:16.242827 containerd[1586]: time="2025-08-13T07:09:16.242768086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 431.133583ms" Aug 13 07:09:16.243128 containerd[1586]: time="2025-08-13T07:09:16.243010234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:09:16.248255 containerd[1586]: time="2025-08-13T07:09:16.246679361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:09:16.254388 containerd[1586]: time="2025-08-13T07:09:16.254135995Z" level=info msg="CreateContainer within sandbox \"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:09:16.312905 containerd[1586]: time="2025-08-13T07:09:16.311287689Z" level=info msg="CreateContainer within sandbox \"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660\"" Aug 13 07:09:16.321170 containerd[1586]: time="2025-08-13T07:09:16.315777310Z" level=info msg="StartContainer for \"49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660\"" Aug 13 07:09:16.381677 kubelet[2669]: I0813 07:09:16.379739 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b5498b48d-m2ftn" podStartSLOduration=26.156135265 podStartE2EDuration="35.341944987s" podCreationTimestamp="2025-08-13 07:08:41 +0000 UTC" firstStartedPulling="2025-08-13 07:09:06.614348333 +0000 UTC m=+44.017511750" lastFinishedPulling="2025-08-13 07:09:15.800158038 +0000 UTC m=+53.203321472" 
observedRunningTime="2025-08-13 07:09:16.339613796 +0000 UTC m=+53.742777230" watchObservedRunningTime="2025-08-13 07:09:16.341944987 +0000 UTC m=+53.745108418" Aug 13 07:09:16.474014 containerd[1586]: time="2025-08-13T07:09:16.473972212Z" level=info msg="StartContainer for \"49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660\" returns successfully" Aug 13 07:09:16.924635 systemd[1]: Started sshd@7-165.232.152.216:22-139.178.89.65:49110.service - OpenSSH per-connection server daemon (139.178.89.65:49110). Aug 13 07:09:16.955718 systemd-networkd[1223]: vxlan.calico: Gained IPv6LL Aug 13 07:09:17.187950 sshd[5503]: Accepted publickey for core from 139.178.89.65 port 49110 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:17.196346 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:17.227288 systemd-logind[1556]: New session 8 of user core. Aug 13 07:09:17.237548 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:09:17.354496 kubelet[2669]: I0813 07:09:17.354452 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:17.418872 kubelet[2669]: I0813 07:09:17.418116 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b5498b48d-ns55v" podStartSLOduration=28.224134447 podStartE2EDuration="36.418093344s" podCreationTimestamp="2025-08-13 07:08:41 +0000 UTC" firstStartedPulling="2025-08-13 07:09:08.050174705 +0000 UTC m=+45.453338135" lastFinishedPulling="2025-08-13 07:09:16.244133612 +0000 UTC m=+53.647297032" observedRunningTime="2025-08-13 07:09:17.418012788 +0000 UTC m=+54.821176225" watchObservedRunningTime="2025-08-13 07:09:17.418093344 +0000 UTC m=+54.821256779" Aug 13 07:09:17.528232 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:17.529123 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:17.529174 systemd-resolved[1480]: Flushed all caches. 
Aug 13 07:09:18.284535 containerd[1586]: time="2025-08-13T07:09:18.283810825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:18.284535 containerd[1586]: time="2025-08-13T07:09:18.284486566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:09:18.288033 containerd[1586]: time="2025-08-13T07:09:18.285233114Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:18.290010 containerd[1586]: time="2025-08-13T07:09:18.288187105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:18.290010 containerd[1586]: time="2025-08-13T07:09:18.288575913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.041861052s" Aug 13 07:09:18.290010 containerd[1586]: time="2025-08-13T07:09:18.288604691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:09:18.292001 containerd[1586]: time="2025-08-13T07:09:18.291953244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:09:18.300326 containerd[1586]: time="2025-08-13T07:09:18.299915727Z" level=info msg="CreateContainer within sandbox \"b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:09:18.384939 containerd[1586]: time="2025-08-13T07:09:18.383878751Z" level=info msg="CreateContainer within sandbox \"b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"066ec0d25ba5ce90d9bd048b8379747459fea4f9bcb1f023f47ff65cdfce6752\"" Aug 13 07:09:18.402536 containerd[1586]: time="2025-08-13T07:09:18.392570597Z" level=info msg="StartContainer for \"066ec0d25ba5ce90d9bd048b8379747459fea4f9bcb1f023f47ff65cdfce6752\"" Aug 13 07:09:18.416827 kubelet[2669]: I0813 07:09:18.416773 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:18.542167 sshd[5503]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:18.564255 systemd[1]: sshd@7-165.232.152.216:22-139.178.89.65:49110.service: Deactivated successfully. Aug 13 07:09:18.586753 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:09:18.587104 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:09:18.596040 systemd-logind[1556]: Removed session 8. 
Aug 13 07:09:18.607824 containerd[1586]: time="2025-08-13T07:09:18.607019941Z" level=info msg="StartContainer for \"066ec0d25ba5ce90d9bd048b8379747459fea4f9bcb1f023f47ff65cdfce6752\" returns successfully" Aug 13 07:09:18.692401 containerd[1586]: time="2025-08-13T07:09:18.692289236Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:18.696822 containerd[1586]: time="2025-08-13T07:09:18.696488122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:09:18.703176 containerd[1586]: time="2025-08-13T07:09:18.703087255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 410.736622ms" Aug 13 07:09:18.703454 containerd[1586]: time="2025-08-13T07:09:18.703428763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:09:18.708961 containerd[1586]: time="2025-08-13T07:09:18.708829107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:09:18.748719 containerd[1586]: time="2025-08-13T07:09:18.748534532Z" level=info msg="CreateContainer within sandbox \"ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:09:18.778010 containerd[1586]: time="2025-08-13T07:09:18.776188646Z" level=info msg="CreateContainer within sandbox \"ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d6fa06f6d76d13fe8e061c2c3da9a4e7c32dbd23a321d0144e02b5435d8e2a49\"" Aug 13 07:09:18.788510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811884749.mount: Deactivated successfully. Aug 13 07:09:18.813895 containerd[1586]: time="2025-08-13T07:09:18.813349866Z" level=info msg="StartContainer for \"d6fa06f6d76d13fe8e061c2c3da9a4e7c32dbd23a321d0144e02b5435d8e2a49\"" Aug 13 07:09:19.000028 containerd[1586]: time="2025-08-13T07:09:18.999974290Z" level=info msg="StartContainer for \"d6fa06f6d76d13fe8e061c2c3da9a4e7c32dbd23a321d0144e02b5435d8e2a49\" returns successfully" Aug 13 07:09:19.443468 kubelet[2669]: I0813 07:09:19.443398 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c4567fdc-7fml2" podStartSLOduration=26.966874154 podStartE2EDuration="37.443365394s" podCreationTimestamp="2025-08-13 07:08:42 +0000 UTC" firstStartedPulling="2025-08-13 07:09:08.231127597 +0000 UTC m=+45.634291020" lastFinishedPulling="2025-08-13 07:09:18.707618825 +0000 UTC m=+56.110782260" observedRunningTime="2025-08-13 07:09:19.441719198 +0000 UTC m=+56.844882635" watchObservedRunningTime="2025-08-13 07:09:19.443365394 +0000 UTC m=+56.846528830" Aug 13 07:09:19.577231 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:19.574812 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:19.574843 systemd-resolved[1480]: Flushed all caches. 
Aug 13 07:09:20.438394 kubelet[2669]: I0813 07:09:20.438334 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:21.594605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3956283363.mount: Deactivated successfully. Aug 13 07:09:22.403670 containerd[1586]: time="2025-08-13T07:09:22.403436938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.406612 containerd[1586]: time="2025-08-13T07:09:22.406530220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:09:22.413942 containerd[1586]: time="2025-08-13T07:09:22.413850756Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.415836 containerd[1586]: time="2025-08-13T07:09:22.415722232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:22.416635 containerd[1586]: time="2025-08-13T07:09:22.416604000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.707097638s" Aug 13 07:09:22.416971 containerd[1586]: time="2025-08-13T07:09:22.416764812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:09:22.505901 containerd[1586]: time="2025-08-13T07:09:22.505685997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:09:22.655860 containerd[1586]: time="2025-08-13T07:09:22.655638659Z" level=info msg="CreateContainer within sandbox \"035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:09:22.960634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784098349.mount: Deactivated successfully. Aug 13 07:09:23.031706 containerd[1586]: time="2025-08-13T07:09:23.031464132Z" level=info msg="CreateContainer within sandbox \"035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1afe514e61f4b930d204016d50d9d404f051e6818ee721d62ee003dea2afd402\"" Aug 13 07:09:23.278370 containerd[1586]: time="2025-08-13T07:09:23.276996761Z" level=info msg="StartContainer for \"1afe514e61f4b930d204016d50d9d404f051e6818ee721d62ee003dea2afd402\"" Aug 13 07:09:23.417331 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:23.414949 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:23.414997 systemd-resolved[1480]: Flushed all caches. 
Aug 13 07:09:23.489042 containerd[1586]: time="2025-08-13T07:09:23.488874951Z" level=info msg="StopPodSandbox for \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\"" Aug 13 07:09:23.563300 systemd[1]: Started sshd@8-165.232.152.216:22-139.178.89.65:53998.service - OpenSSH per-connection server daemon (139.178.89.65:53998). Aug 13 07:09:23.795578 sshd[5644]: Accepted publickey for core from 139.178.89.65 port 53998 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:23.800376 sshd[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:23.813720 systemd-logind[1556]: New session 9 of user core. Aug 13 07:09:23.818337 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:09:23.888031 systemd[1]: run-containerd-runc-k8s.io-1afe514e61f4b930d204016d50d9d404f051e6818ee721d62ee003dea2afd402-runc.dEsjXg.mount: Deactivated successfully. Aug 13 07:09:24.096612 containerd[1586]: time="2025-08-13T07:09:24.096393945Z" level=info msg="StartContainer for \"1afe514e61f4b930d204016d50d9d404f051e6818ee721d62ee003dea2afd402\" returns successfully" Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.318 [WARNING][5651] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe78265f-f9af-4623-a337-884c31c36ef2", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7", Pod:"coredns-7c65d6cfc9-ctkfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d7ee94f077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.325 [INFO][5651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.325 [INFO][5651] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" iface="eth0" netns="" Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.325 [INFO][5651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.325 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.903 [INFO][5690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.914 [INFO][5690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.917 [INFO][5690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.956 [WARNING][5690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.958 [INFO][5690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.961 [INFO][5690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:24.985588 containerd[1586]: 2025-08-13 07:09:24.977 [INFO][5651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:24.985588 containerd[1586]: time="2025-08-13T07:09:24.984955006Z" level=info msg="TearDown network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\" successfully" Aug 13 07:09:24.985588 containerd[1586]: time="2025-08-13T07:09:24.984989263Z" level=info msg="StopPodSandbox for \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\" returns successfully" Aug 13 07:09:25.124050 containerd[1586]: time="2025-08-13T07:09:25.121955787Z" level=info msg="RemovePodSandbox for \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\"" Aug 13 07:09:25.124050 containerd[1586]: time="2025-08-13T07:09:25.122755590Z" level=info msg="Forcibly stopping sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\"" Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.276 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe78265f-f9af-4623-a337-884c31c36ef2", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"6f594653253a9ca73383fa223fe8397be73bd2375e997c2374474f8f029437b7", Pod:"coredns-7c65d6cfc9-ctkfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d7ee94f077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.276 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.276 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" iface="eth0" netns="" Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.276 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.276 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.349 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.349 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.349 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.360 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.360 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" HandleID="k8s-pod-network.11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--ctkfn-eth0" Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.367 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:25.385050 containerd[1586]: 2025-08-13 07:09:25.376 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b" Aug 13 07:09:25.385050 containerd[1586]: time="2025-08-13T07:09:25.385016916Z" level=info msg="TearDown network for sandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\" successfully" Aug 13 07:09:25.412640 containerd[1586]: time="2025-08-13T07:09:25.412303018Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:09:25.448100 containerd[1586]: time="2025-08-13T07:09:25.447559652Z" level=info msg="RemovePodSandbox \"11d018c38f755e23460eea4e46d1881fbf6b10374f643c1c3cc848a083fa7c8b\" returns successfully" Aug 13 07:09:25.464679 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:25.463272 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:25.463335 systemd-resolved[1480]: Flushed all caches. Aug 13 07:09:25.472097 containerd[1586]: time="2025-08-13T07:09:25.472035898Z" level=info msg="StopPodSandbox for \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\"" Aug 13 07:09:25.539525 sshd[5644]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:25.549414 systemd[1]: sshd@8-165.232.152.216:22-139.178.89.65:53998.service: Deactivated successfully. Aug 13 07:09:25.563106 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:09:25.563756 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:09:25.567904 systemd-logind[1556]: Removed session 9. Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.645 [WARNING][5758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0", GenerateName:"calico-kube-controllers-5df6b56ccd-", Namespace:"calico-system", SelfLink:"", UID:"c568768c-f9a9-47ff-bd2e-11cbdcfd7596", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5df6b56ccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8", Pod:"calico-kube-controllers-5df6b56ccd-xwrlr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali255363b998c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.646 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.646 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" iface="eth0" netns="" Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.646 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.646 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.707 [INFO][5768] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.708 [INFO][5768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.708 [INFO][5768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.718 [WARNING][5768] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.718 [INFO][5768] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.721 [INFO][5768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:25.736132 containerd[1586]: 2025-08-13 07:09:25.728 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:25.738369 containerd[1586]: time="2025-08-13T07:09:25.736176230Z" level=info msg="TearDown network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\" successfully" Aug 13 07:09:25.738369 containerd[1586]: time="2025-08-13T07:09:25.736203757Z" level=info msg="StopPodSandbox for \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\" returns successfully" Aug 13 07:09:25.739320 containerd[1586]: time="2025-08-13T07:09:25.739279509Z" level=info msg="RemovePodSandbox for \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\"" Aug 13 07:09:25.739423 containerd[1586]: time="2025-08-13T07:09:25.739337003Z" level=info msg="Forcibly stopping sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\"" Aug 13 07:09:25.775816 containerd[1586]: time="2025-08-13T07:09:25.775495633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.776671 containerd[1586]: time="2025-08-13T07:09:25.776624723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:09:25.776995 containerd[1586]: time="2025-08-13T07:09:25.776973977Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.779925 containerd[1586]: time="2025-08-13T07:09:25.779886541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:09:25.782011 containerd[1586]: time="2025-08-13T07:09:25.781965385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.276236145s" Aug 13 07:09:25.782011 containerd[1586]: time="2025-08-13T07:09:25.782009735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference 
\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:09:25.807270 containerd[1586]: time="2025-08-13T07:09:25.807071105Z" level=info msg="CreateContainer within sandbox \"b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:09:25.818757 containerd[1586]: time="2025-08-13T07:09:25.818618911Z" level=info msg="CreateContainer within sandbox \"b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"01fa6e632d53d2739515e9ea747058b22ffe00ae3198a73d34b24b7c3cadc205\"" Aug 13 07:09:25.820357 containerd[1586]: time="2025-08-13T07:09:25.820133557Z" level=info msg="StartContainer for \"01fa6e632d53d2739515e9ea747058b22ffe00ae3198a73d34b24b7c3cadc205\"" Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.846 [WARNING][5782] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0", GenerateName:"calico-kube-controllers-5df6b56ccd-", Namespace:"calico-system", SelfLink:"", UID:"c568768c-f9a9-47ff-bd2e-11cbdcfd7596", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5df6b56ccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"c284b62b549fc6c3f9acfe7002d07f2d4bec6cc7de5717ffc35e399837200ca8", Pod:"calico-kube-controllers-5df6b56ccd-xwrlr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali255363b998c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.848 [INFO][5782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.849 [INFO][5782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" iface="eth0" netns="" Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.849 [INFO][5782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.849 [INFO][5782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.931 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.932 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.932 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.950 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.950 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" HandleID="k8s-pod-network.becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--kube--controllers--5df6b56ccd--xwrlr-eth0" Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.957 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:25.985073 containerd[1586]: 2025-08-13 07:09:25.979 [INFO][5782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570" Aug 13 07:09:25.988354 containerd[1586]: time="2025-08-13T07:09:25.985144948Z" level=info msg="TearDown network for sandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\" successfully" Aug 13 07:09:25.995769 containerd[1586]: time="2025-08-13T07:09:25.994594169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:09:25.995769 containerd[1586]: time="2025-08-13T07:09:25.994706594Z" level=info msg="RemovePodSandbox \"becb155c65846e6e5bfa40110af1688deadc29a5acf9ac0f0de072d1ba9f2570\" returns successfully" Aug 13 07:09:25.996429 containerd[1586]: time="2025-08-13T07:09:25.996395637Z" level=info msg="StopPodSandbox for \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\"" Aug 13 07:09:26.212664 containerd[1586]: time="2025-08-13T07:09:26.212619654Z" level=info msg="StartContainer for \"01fa6e632d53d2739515e9ea747058b22ffe00ae3198a73d34b24b7c3cadc205\" returns successfully" Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.116 [WARNING][5818] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0", GenerateName:"calico-apiserver-5b5498b48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"99077a63-9db7-4cec-a6a2-af9cb28b57de", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5498b48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6", Pod:"calico-apiserver-5b5498b48d-m2ftn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb0d6015b61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.117 [INFO][5818] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.117 [INFO][5818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" iface="eth0" netns="" Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.117 [INFO][5818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.117 [INFO][5818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.209 [INFO][5849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.210 [INFO][5849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.210 [INFO][5849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.219 [WARNING][5849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.219 [INFO][5849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.221 [INFO][5849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:26.227462 containerd[1586]: 2025-08-13 07:09:26.224 [INFO][5818] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:26.229501 containerd[1586]: time="2025-08-13T07:09:26.227643029Z" level=info msg="TearDown network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\" successfully" Aug 13 07:09:26.229501 containerd[1586]: time="2025-08-13T07:09:26.227670925Z" level=info msg="StopPodSandbox for \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\" returns successfully" Aug 13 07:09:26.229501 containerd[1586]: time="2025-08-13T07:09:26.228576627Z" level=info msg="RemovePodSandbox for \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\"" Aug 13 07:09:26.229501 containerd[1586]: time="2025-08-13T07:09:26.228613908Z" level=info msg="Forcibly stopping sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\"" Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.297 [WARNING][5883] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0", GenerateName:"calico-apiserver-5b5498b48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"99077a63-9db7-4cec-a6a2-af9cb28b57de", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5498b48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6", Pod:"calico-apiserver-5b5498b48d-m2ftn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb0d6015b61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.297 [INFO][5883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.297 [INFO][5883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" iface="eth0" netns="" Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.297 [INFO][5883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.297 [INFO][5883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.329 [INFO][5896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.329 [INFO][5896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.330 [INFO][5896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.341 [WARNING][5896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.341 [INFO][5896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" HandleID="k8s-pod-network.451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.343 [INFO][5896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:26.353571 containerd[1586]: 2025-08-13 07:09:26.348 [INFO][5883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c" Aug 13 07:09:26.353571 containerd[1586]: time="2025-08-13T07:09:26.353371467Z" level=info msg="TearDown network for sandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\" successfully" Aug 13 07:09:26.367846 containerd[1586]: time="2025-08-13T07:09:26.367627020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:09:26.367846 containerd[1586]: time="2025-08-13T07:09:26.367737197Z" level=info msg="RemovePodSandbox \"451b8e31d61aebad949c2b73f4e1ab7260d195789273c492bc37273d23f6d47c\" returns successfully" Aug 13 07:09:26.369359 containerd[1586]: time="2025-08-13T07:09:26.368815661Z" level=info msg="StopPodSandbox for \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\"" Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.430 [WARNING][5910] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd634c8d-a482-4f95-9b3b-58b3c5eafd08", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb", Pod:"csi-node-driver-2gcv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b2acf9bc98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.430 [INFO][5910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.430 [INFO][5910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" iface="eth0" netns="" Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.430 [INFO][5910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.430 [INFO][5910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.469 [INFO][5917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.469 [INFO][5917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.469 [INFO][5917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.479 [WARNING][5917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.479 [INFO][5917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.485 [INFO][5917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:26.492454 containerd[1586]: 2025-08-13 07:09:26.490 [INFO][5910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:26.494177 containerd[1586]: time="2025-08-13T07:09:26.492951303Z" level=info msg="TearDown network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\" successfully" Aug 13 07:09:26.494177 containerd[1586]: time="2025-08-13T07:09:26.492986764Z" level=info msg="StopPodSandbox for \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\" returns successfully" Aug 13 07:09:26.494177 containerd[1586]: time="2025-08-13T07:09:26.493582000Z" level=info msg="RemovePodSandbox for \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\"" Aug 13 07:09:26.494177 containerd[1586]: time="2025-08-13T07:09:26.493627488Z" level=info msg="Forcibly stopping sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\"" Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.551 [WARNING][5931] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd634c8d-a482-4f95-9b3b-58b3c5eafd08", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"b53b21dc3027be03205d46c7aa240ca28f37f7e10db2cd00905d71045243d3fb", Pod:"csi-node-driver-2gcv6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b2acf9bc98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.551 [INFO][5931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.551 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" iface="eth0" netns="" Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.551 [INFO][5931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.551 [INFO][5931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.580 [INFO][5938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.580 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.580 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.588 [WARNING][5938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.589 [INFO][5938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" HandleID="k8s-pod-network.642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Workload="ci--4081.3.5--e--55e36c071a-k8s-csi--node--driver--2gcv6-eth0" Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.592 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:26.597431 containerd[1586]: 2025-08-13 07:09:26.594 [INFO][5931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04" Aug 13 07:09:26.599155 containerd[1586]: time="2025-08-13T07:09:26.597565274Z" level=info msg="TearDown network for sandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\" successfully" Aug 13 07:09:26.603532 containerd[1586]: time="2025-08-13T07:09:26.603436132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:09:26.603532 containerd[1586]: time="2025-08-13T07:09:26.603522488Z" level=info msg="RemovePodSandbox \"642079bee28e1d27048e5c14574cf76a2f8dd16f307528f043634656e21eab04\" returns successfully" Aug 13 07:09:26.604743 containerd[1586]: time="2025-08-13T07:09:26.604285918Z" level=info msg="StopPodSandbox for \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\"" Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.651 [WARNING][5952] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"30df6160-80aa-4f0d-92aa-0f0db6a04acd", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e", Pod:"goldmane-58fd7646b9-v27pf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac31d133648", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.651 [INFO][5952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.651 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" iface="eth0" netns="" Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.651 [INFO][5952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.651 [INFO][5952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.690 [INFO][5959] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.690 [INFO][5959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.690 [INFO][5959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.701 [WARNING][5959] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.701 [INFO][5959] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.704 [INFO][5959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:26.708617 containerd[1586]: 2025-08-13 07:09:26.706 [INFO][5952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:26.709728 containerd[1586]: time="2025-08-13T07:09:26.709219431Z" level=info msg="TearDown network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\" successfully" Aug 13 07:09:26.709728 containerd[1586]: time="2025-08-13T07:09:26.709261764Z" level=info msg="StopPodSandbox for \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\" returns successfully" Aug 13 07:09:26.710018 containerd[1586]: time="2025-08-13T07:09:26.709970157Z" level=info msg="RemovePodSandbox for \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\"" Aug 13 07:09:26.710067 containerd[1586]: time="2025-08-13T07:09:26.710036137Z" level=info msg="Forcibly stopping sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\"" Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.764 [WARNING][5973] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"30df6160-80aa-4f0d-92aa-0f0db6a04acd", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"035825a25cb3cbdc96a57dde4cba8f3884c3f8e978bbbc94f4de15c081fe139e", Pod:"goldmane-58fd7646b9-v27pf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac31d133648", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.764 [INFO][5973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.764 [INFO][5973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" iface="eth0" netns="" Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.764 [INFO][5973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.764 [INFO][5973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.796 [INFO][5981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.797 [INFO][5981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.797 [INFO][5981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.805 [WARNING][5981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.806 [INFO][5981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" HandleID="k8s-pod-network.5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Workload="ci--4081.3.5--e--55e36c071a-k8s-goldmane--58fd7646b9--v27pf-eth0" Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.811 [INFO][5981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:26.821494 containerd[1586]: 2025-08-13 07:09:26.816 [INFO][5973] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde" Aug 13 07:09:26.822524 containerd[1586]: time="2025-08-13T07:09:26.821507320Z" level=info msg="TearDown network for sandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\" successfully" Aug 13 07:09:26.836012 containerd[1586]: time="2025-08-13T07:09:26.835944403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:09:26.836194 containerd[1586]: time="2025-08-13T07:09:26.836060695Z" level=info msg="RemovePodSandbox \"5cc5813dad35133d31dbac91e036f2553767ff812a2987e8cb1ba83650500bde\" returns successfully" Aug 13 07:09:26.837796 containerd[1586]: time="2025-08-13T07:09:26.837735310Z" level=info msg="StopPodSandbox for \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\"" Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:26.984 [WARNING][5995] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0", GenerateName:"calico-apiserver-5b5498b48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0685533-643f-4d9d-85a8-1d45cf68c77e", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5498b48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43", Pod:"calico-apiserver-5b5498b48d-ns55v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9d6eb3a1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:26.988 [INFO][5995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:26.988 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" iface="eth0" netns="" Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:26.988 [INFO][5995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:26.988 [INFO][5995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:27.096 [INFO][6017] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:27.097 [INFO][6017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:27.097 [INFO][6017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:27.109 [WARNING][6017] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:27.109 [INFO][6017] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:27.113 [INFO][6017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:27.134651 containerd[1586]: 2025-08-13 07:09:27.127 [INFO][5995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:27.134651 containerd[1586]: time="2025-08-13T07:09:27.134249153Z" level=info msg="TearDown network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\" successfully" Aug 13 07:09:27.134651 containerd[1586]: time="2025-08-13T07:09:27.134272141Z" level=info msg="StopPodSandbox for \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\" returns successfully" Aug 13 07:09:27.137337 containerd[1586]: time="2025-08-13T07:09:27.136594932Z" level=info msg="RemovePodSandbox for \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\"" Aug 13 07:09:27.137337 containerd[1586]: time="2025-08-13T07:09:27.136627510Z" level=info msg="Forcibly stopping sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\"" Aug 13 07:09:27.172547 kubelet[2669]: I0813 07:09:27.168647 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-v27pf" podStartSLOduration=29.307155072 podStartE2EDuration="43.061202114s" podCreationTimestamp="2025-08-13 07:08:44 +0000 UTC" firstStartedPulling="2025-08-13 07:09:08.702775091 +0000 UTC m=+46.105938506" lastFinishedPulling="2025-08-13 07:09:22.456822119 +0000 UTC m=+59.859985548" observedRunningTime="2025-08-13 07:09:24.918649868 +0000 UTC m=+62.321813324" watchObservedRunningTime="2025-08-13 07:09:27.061202114 +0000 UTC m=+64.464365550" Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.222 [WARNING][6037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0", GenerateName:"calico-apiserver-5b5498b48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0685533-643f-4d9d-85a8-1d45cf68c77e", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5498b48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43", Pod:"calico-apiserver-5b5498b48d-ns55v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie9d6eb3a1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.224 [INFO][6037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.225 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" iface="eth0" netns="" Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.225 [INFO][6037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.225 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.284 [INFO][6045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.284 [INFO][6045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.284 [INFO][6045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.304 [WARNING][6045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.304 [INFO][6045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" HandleID="k8s-pod-network.c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.308 [INFO][6045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:27.314964 containerd[1586]: 2025-08-13 07:09:27.311 [INFO][6037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334" Aug 13 07:09:27.316168 containerd[1586]: time="2025-08-13T07:09:27.315036210Z" level=info msg="TearDown network for sandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\" successfully" Aug 13 07:09:27.326016 containerd[1586]: time="2025-08-13T07:09:27.325948448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:09:27.326220 containerd[1586]: time="2025-08-13T07:09:27.326127651Z" level=info msg="RemovePodSandbox \"c5fc592d5164dbe800c9a0d90113f61da7a6a5fe5f5c8f945d5e978660358334\" returns successfully" Aug 13 07:09:27.328907 containerd[1586]: time="2025-08-13T07:09:27.328851631Z" level=info msg="StopPodSandbox for \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\"" Aug 13 07:09:27.356747 kubelet[2669]: I0813 07:09:27.356679 2669 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:09:27.357138 kubelet[2669]: I0813 07:09:27.357114 2669 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.408 [WARNING][6059] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0", GenerateName:"calico-apiserver-7c4567fdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"20826453-f382-41f8-a572-6376d276da48", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c4567fdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba", Pod:"calico-apiserver-7c4567fdc-7fml2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f73c591efe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.409 [INFO][6059] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.409 [INFO][6059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" iface="eth0" netns="" Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.409 [INFO][6059] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.409 [INFO][6059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.448 [INFO][6067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.448 [INFO][6067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.448 [INFO][6067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.457 [WARNING][6067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.457 [INFO][6067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.459 [INFO][6067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:27.465068 containerd[1586]: 2025-08-13 07:09:27.462 [INFO][6059] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:27.467557 containerd[1586]: time="2025-08-13T07:09:27.465688711Z" level=info msg="TearDown network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\" successfully" Aug 13 07:09:27.467557 containerd[1586]: time="2025-08-13T07:09:27.465729616Z" level=info msg="StopPodSandbox for \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\" returns successfully" Aug 13 07:09:27.467557 containerd[1586]: time="2025-08-13T07:09:27.466538254Z" level=info msg="RemovePodSandbox for \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\"" Aug 13 07:09:27.467557 containerd[1586]: time="2025-08-13T07:09:27.466577089Z" level=info msg="Forcibly stopping sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\"" Aug 13 07:09:27.512509 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:27.510353 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:27.510366 systemd-resolved[1480]: Flushed all caches. Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.517 [WARNING][6082] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0", GenerateName:"calico-apiserver-7c4567fdc-", Namespace:"calico-apiserver", SelfLink:"", UID:"20826453-f382-41f8-a572-6376d276da48", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c4567fdc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"ec215d093adf98c63c7448a18ea82f32f35754187234b99e4f129497b5f2e2ba", Pod:"calico-apiserver-7c4567fdc-7fml2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f73c591efe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.518 [INFO][6082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.518 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" iface="eth0" netns="" Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.518 [INFO][6082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.518 [INFO][6082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.563 [INFO][6089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.563 [INFO][6089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.563 [INFO][6089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.571 [WARNING][6089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.571 [INFO][6089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" HandleID="k8s-pod-network.c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--7c4567fdc--7fml2-eth0" Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.574 [INFO][6089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:27.579923 containerd[1586]: 2025-08-13 07:09:27.577 [INFO][6082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba" Aug 13 07:09:27.580945 containerd[1586]: time="2025-08-13T07:09:27.579919249Z" level=info msg="TearDown network for sandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\" successfully" Aug 13 07:09:27.584765 containerd[1586]: time="2025-08-13T07:09:27.584698015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:09:27.584941 containerd[1586]: time="2025-08-13T07:09:27.584848380Z" level=info msg="RemovePodSandbox \"c5583518cf9ace2845efdea0211b2c59bde223d53bf02ec7a2f377ee3dec06ba\" returns successfully" Aug 13 07:09:27.585877 containerd[1586]: time="2025-08-13T07:09:27.585474240Z" level=info msg="StopPodSandbox for \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\"" Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.647 [WARNING][6103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"72e07df5-008f-4b2d-94ac-56f5a048d8f4", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f", Pod:"coredns-7c65d6cfc9-5njd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4108fb091d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.647 [INFO][6103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.647 [INFO][6103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" iface="eth0" netns="" Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.647 [INFO][6103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.647 [INFO][6103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.678 [INFO][6110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.678 [INFO][6110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.678 [INFO][6110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.685 [WARNING][6110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.686 [INFO][6110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.688 [INFO][6110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:27.693987 containerd[1586]: 2025-08-13 07:09:27.691 [INFO][6103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:27.695258 containerd[1586]: time="2025-08-13T07:09:27.694045800Z" level=info msg="TearDown network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\" successfully" Aug 13 07:09:27.695258 containerd[1586]: time="2025-08-13T07:09:27.694111585Z" level=info msg="StopPodSandbox for \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\" returns successfully" Aug 13 07:09:27.695258 containerd[1586]: time="2025-08-13T07:09:27.695064078Z" level=info msg="RemovePodSandbox for \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\"" Aug 13 07:09:27.695258 containerd[1586]: time="2025-08-13T07:09:27.695209442Z" level=info msg="Forcibly stopping sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\"" Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.749 [WARNING][6124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"72e07df5-008f-4b2d-94ac-56f5a048d8f4", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-e-55e36c071a", ContainerID:"0ee0abd1d223575d098213e3722e6bb12a37c7e6627a9cba9e841976ddab3b8f", Pod:"coredns-7c65d6cfc9-5njd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4108fb091d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.749 [INFO][6124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.749 [INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" iface="eth0" netns="" Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.749 [INFO][6124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.749 [INFO][6124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.780 [INFO][6132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.780 [INFO][6132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.780 [INFO][6132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.791 [WARNING][6132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.791 [INFO][6132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" HandleID="k8s-pod-network.5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Workload="ci--4081.3.5--e--55e36c071a-k8s-coredns--7c65d6cfc9--5njd8-eth0" Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.794 [INFO][6132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:27.800423 containerd[1586]: 2025-08-13 07:09:27.798 [INFO][6124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1" Aug 13 07:09:27.801927 containerd[1586]: time="2025-08-13T07:09:27.800421173Z" level=info msg="TearDown network for sandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\" successfully" Aug 13 07:09:27.804071 containerd[1586]: time="2025-08-13T07:09:27.804025491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:09:27.804179 containerd[1586]: time="2025-08-13T07:09:27.804109510Z" level=info msg="RemovePodSandbox \"5cc417e63f4c4814c8b76ea80bb0e6076b9309c95d49d290a268334d48ee11f1\" returns successfully" Aug 13 07:09:27.804704 containerd[1586]: time="2025-08-13T07:09:27.804675457Z" level=info msg="StopPodSandbox for \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\"" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.852 [WARNING][6146] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.852 [INFO][6146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.852 [INFO][6146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" iface="eth0" netns="" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.852 [INFO][6146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.852 [INFO][6146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.889 [INFO][6153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.889 [INFO][6153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.889 [INFO][6153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.897 [WARNING][6153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.897 [INFO][6153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.901 [INFO][6153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:27.905935 containerd[1586]: 2025-08-13 07:09:27.903 [INFO][6146] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:27.907703 containerd[1586]: time="2025-08-13T07:09:27.905993410Z" level=info msg="TearDown network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\" successfully" Aug 13 07:09:27.907703 containerd[1586]: time="2025-08-13T07:09:27.906020239Z" level=info msg="StopPodSandbox for \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\" returns successfully" Aug 13 07:09:27.907703 containerd[1586]: time="2025-08-13T07:09:27.906898087Z" level=info msg="RemovePodSandbox for \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\"" Aug 13 07:09:27.907703 containerd[1586]: time="2025-08-13T07:09:27.906927394Z" level=info msg="Forcibly stopping sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\"" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.958 [WARNING][6167] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" WorkloadEndpoint="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.958 [INFO][6167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.958 [INFO][6167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" iface="eth0" netns="" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.958 [INFO][6167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.958 [INFO][6167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.985 [INFO][6175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.985 [INFO][6175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.986 [INFO][6175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.995 [WARNING][6175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.995 [INFO][6175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" HandleID="k8s-pod-network.28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Workload="ci--4081.3.5--e--55e36c071a-k8s-whisker--5b967454b--hf5vt-eth0" Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:27.999 [INFO][6175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:28.008149 containerd[1586]: 2025-08-13 07:09:28.004 [INFO][6167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605" Aug 13 07:09:28.008656 containerd[1586]: time="2025-08-13T07:09:28.008198037Z" level=info msg="TearDown network for sandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\" successfully" Aug 13 07:09:28.013398 containerd[1586]: time="2025-08-13T07:09:28.012887900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:09:28.013398 containerd[1586]: time="2025-08-13T07:09:28.013134883Z" level=info msg="RemovePodSandbox \"28d66a23146e972b69db5cf7c52ce96f9b161383cd425fad0c05a9c7a15a8605\" returns successfully" Aug 13 07:09:30.556202 systemd[1]: Started sshd@9-165.232.152.216:22-139.178.89.65:43656.service - OpenSSH per-connection server daemon (139.178.89.65:43656). Aug 13 07:09:30.665532 sshd[6181]: Accepted publickey for core from 139.178.89.65 port 43656 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:30.669008 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:30.677704 systemd-logind[1556]: New session 10 of user core. Aug 13 07:09:30.682563 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:09:31.272076 kubelet[2669]: I0813 07:09:31.272007 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:31.277778 sshd[6181]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:31.290194 systemd[1]: Started sshd@10-165.232.152.216:22-139.178.89.65:43664.service - OpenSSH per-connection server daemon (139.178.89.65:43664). Aug 13 07:09:31.290751 systemd[1]: sshd@9-165.232.152.216:22-139.178.89.65:43656.service: Deactivated successfully. Aug 13 07:09:31.302931 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:09:31.303218 systemd-logind[1556]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:09:31.307837 systemd-logind[1556]: Removed session 10. Aug 13 07:09:31.363687 sshd[6195]: Accepted publickey for core from 139.178.89.65 port 43664 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:31.365645 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:31.372292 systemd-logind[1556]: New session 11 of user core. Aug 13 07:09:31.378290 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 13 07:09:31.439442 kubelet[2669]: I0813 07:09:31.437856 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2gcv6" podStartSLOduration=28.847343132 podStartE2EDuration="46.437833923s" podCreationTimestamp="2025-08-13 07:08:45 +0000 UTC" firstStartedPulling="2025-08-13 07:09:08.200521104 +0000 UTC m=+45.603684543" lastFinishedPulling="2025-08-13 07:09:25.791011918 +0000 UTC m=+63.194175334" observedRunningTime="2025-08-13 07:09:27.185213327 +0000 UTC m=+64.588376756" watchObservedRunningTime="2025-08-13 07:09:31.437833923 +0000 UTC m=+68.840997360" Aug 13 07:09:31.734930 sshd[6195]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:31.757504 systemd[1]: Started sshd@11-165.232.152.216:22-139.178.89.65:43672.service - OpenSSH per-connection server daemon (139.178.89.65:43672). Aug 13 07:09:31.758974 systemd[1]: sshd@10-165.232.152.216:22-139.178.89.65:43664.service: Deactivated successfully. Aug 13 07:09:31.764914 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:09:31.765620 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:09:31.772252 systemd-logind[1556]: Removed session 11. Aug 13 07:09:31.884868 sshd[6210]: Accepted publickey for core from 139.178.89.65 port 43672 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:31.888708 sshd[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:31.896335 systemd-logind[1556]: New session 12 of user core. Aug 13 07:09:31.901671 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:09:31.954767 systemd[1]: run-containerd-runc-k8s.io-1afe514e61f4b930d204016d50d9d404f051e6818ee721d62ee003dea2afd402-runc.y20X4a.mount: Deactivated successfully. Aug 13 07:09:32.105457 sshd[6210]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:32.118515 systemd[1]: sshd@11-165.232.152.216:22-139.178.89.65:43672.service: Deactivated successfully. Aug 13 07:09:32.130987 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:09:32.138835 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:09:32.145629 systemd-logind[1556]: Removed session 12. Aug 13 07:09:33.400181 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:33.398666 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:33.398677 systemd-resolved[1480]: Flushed all caches. Aug 13 07:09:36.744529 kubelet[2669]: E0813 07:09:36.744298 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:37.112542 systemd[1]: Started sshd@12-165.232.152.216:22-139.178.89.65:43688.service - OpenSSH per-connection server daemon (139.178.89.65:43688). Aug 13 07:09:37.226425 sshd[6259]: Accepted publickey for core from 139.178.89.65 port 43688 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:37.231613 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:37.239240 systemd-logind[1556]: New session 13 of user core. Aug 13 07:09:37.241259 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:09:37.565604 sshd[6259]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:37.575943 systemd[1]: sshd@12-165.232.152.216:22-139.178.89.65:43688.service: Deactivated successfully. 
Aug 13 07:09:37.583274 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:09:37.585596 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:09:37.587673 systemd-logind[1556]: Removed session 13. Aug 13 07:09:39.415935 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:39.414453 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:39.414463 systemd-resolved[1480]: Flushed all caches. Aug 13 07:09:42.578310 systemd[1]: Started sshd@13-165.232.152.216:22-139.178.89.65:60578.service - OpenSSH per-connection server daemon (139.178.89.65:60578). Aug 13 07:09:42.680882 sshd[6293]: Accepted publickey for core from 139.178.89.65 port 60578 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:42.688150 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:42.705415 systemd-logind[1556]: New session 14 of user core. Aug 13 07:09:42.712167 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:09:42.759186 kubelet[2669]: E0813 07:09:42.759140 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:43.105620 sshd[6293]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:43.123845 systemd[1]: sshd@13-165.232.152.216:22-139.178.89.65:60578.service: Deactivated successfully. Aug 13 07:09:43.128317 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:09:43.132304 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:09:43.139170 systemd-logind[1556]: Removed session 14. Aug 13 07:09:44.805126 kubelet[2669]: I0813 07:09:44.805088 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:48.118527 systemd[1]: Started sshd@14-165.232.152.216:22-139.178.89.65:60590.service - OpenSSH per-connection server daemon (139.178.89.65:60590). Aug 13 07:09:48.226940 sshd[6309]: Accepted publickey for core from 139.178.89.65 port 60590 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:48.230108 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:48.237178 systemd-logind[1556]: New session 15 of user core. Aug 13 07:09:48.242430 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:09:48.919673 sshd[6309]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:48.924154 systemd[1]: sshd@14-165.232.152.216:22-139.178.89.65:60590.service: Deactivated successfully. Aug 13 07:09:48.930167 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:09:48.931884 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:09:48.933937 systemd-logind[1556]: Removed session 15. Aug 13 07:09:49.116054 kubelet[2669]: I0813 07:09:49.115609 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:09:49.403346 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:49.403724 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:49.403735 systemd-resolved[1480]: Flushed all caches. 
Aug 13 07:09:49.446355 containerd[1586]: time="2025-08-13T07:09:49.446280822Z" level=info msg="StopContainer for \"49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660\" with timeout 30 (s)" Aug 13 07:09:49.454169 containerd[1586]: time="2025-08-13T07:09:49.454092432Z" level=info msg="Stop container \"49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660\" with signal terminated" Aug 13 07:09:49.610865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660-rootfs.mount: Deactivated successfully. Aug 13 07:09:49.641658 containerd[1586]: time="2025-08-13T07:09:49.609210833Z" level=info msg="shim disconnected" id=49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660 namespace=k8s.io Aug 13 07:09:49.641658 containerd[1586]: time="2025-08-13T07:09:49.641378815Z" level=warning msg="cleaning up after shim disconnected" id=49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660 namespace=k8s.io Aug 13 07:09:49.641658 containerd[1586]: time="2025-08-13T07:09:49.641397519Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:09:49.735621 kubelet[2669]: E0813 07:09:49.734069 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:49.779433 containerd[1586]: time="2025-08-13T07:09:49.779309773Z" level=info msg="StopContainer for \"49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660\" returns successfully" Aug 13 07:09:49.788490 containerd[1586]: time="2025-08-13T07:09:49.788292416Z" level=info msg="StopPodSandbox for \"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43\"" Aug 13 07:09:49.793706 containerd[1586]: time="2025-08-13T07:09:49.793410853Z" level=info msg="Container to stop \"49a4c613919cc63ffecffdd00ec3bf389830ad525a88f572a3780920b76fa660\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:09:49.800662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43-shm.mount: Deactivated successfully. Aug 13 07:09:49.860991 containerd[1586]: time="2025-08-13T07:09:49.860925131Z" level=info msg="shim disconnected" id=2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43 namespace=k8s.io Aug 13 07:09:49.861727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43-rootfs.mount: Deactivated successfully. 
Aug 13 07:09:49.863273 containerd[1586]: time="2025-08-13T07:09:49.862106203Z" level=warning msg="cleaning up after shim disconnected" id=2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43 namespace=k8s.io Aug 13 07:09:49.863273 containerd[1586]: time="2025-08-13T07:09:49.862139184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:09:50.066282 systemd-networkd[1223]: calie9d6eb3a1d6: Link DOWN Aug 13 07:09:50.066357 systemd-networkd[1223]: calie9d6eb3a1d6: Lost carrier Aug 13 07:09:50.228946 kubelet[2669]: I0813 07:09:50.226571 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.045 [INFO][6399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.047 [INFO][6399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" iface="eth0" netns="/var/run/netns/cni-775ebfaa-ef37-689b-cb7c-e3ce6eb86845" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.047 [INFO][6399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" iface="eth0" netns="/var/run/netns/cni-775ebfaa-ef37-689b-cb7c-e3ce6eb86845" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.065 [INFO][6399] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" after=17.863817ms iface="eth0" netns="/var/run/netns/cni-775ebfaa-ef37-689b-cb7c-e3ce6eb86845" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.065 [INFO][6399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.065 [INFO][6399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.260 [INFO][6406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" HandleID="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.263 [INFO][6406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.263 [INFO][6406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.361 [INFO][6406] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" HandleID="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.361 [INFO][6406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" HandleID="k8s-pod-network.2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--ns55v-eth0" Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.363 [INFO][6406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:50.370289 containerd[1586]: 2025-08-13 07:09:50.366 [INFO][6399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43" Aug 13 07:09:50.370289 containerd[1586]: time="2025-08-13T07:09:50.369896764Z" level=info msg="TearDown network for sandbox \"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43\" successfully" Aug 13 07:09:50.370289 containerd[1586]: time="2025-08-13T07:09:50.369938233Z" level=info msg="StopPodSandbox for \"2acb6c734bffda8808ef9d79d65ecf8a12268556469f2d496bce8b1229dcba43\" returns successfully" Aug 13 07:09:50.375684 systemd[1]: run-netns-cni\x2d775ebfaa\x2def37\x2d689b\x2dcb7c\x2de3ce6eb86845.mount: Deactivated successfully. Aug 13 07:09:50.551323 kubelet[2669]: I0813 07:09:50.550021 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcsf6\" (UniqueName: \"kubernetes.io/projected/f0685533-643f-4d9d-85a8-1d45cf68c77e-kube-api-access-jcsf6\") pod \"f0685533-643f-4d9d-85a8-1d45cf68c77e\" (UID: \"f0685533-643f-4d9d-85a8-1d45cf68c77e\") " Aug 13 07:09:50.551323 kubelet[2669]: I0813 07:09:50.550130 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0685533-643f-4d9d-85a8-1d45cf68c77e-calico-apiserver-certs\") pod \"f0685533-643f-4d9d-85a8-1d45cf68c77e\" (UID: \"f0685533-643f-4d9d-85a8-1d45cf68c77e\") " Aug 13 07:09:50.596070 systemd[1]: var-lib-kubelet-pods-f0685533\x2d643f\x2d4d9d\x2d85a8\x2d1d45cf68c77e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djcsf6.mount: Deactivated successfully. Aug 13 07:09:50.601009 kubelet[2669]: I0813 07:09:50.591981 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0685533-643f-4d9d-85a8-1d45cf68c77e-kube-api-access-jcsf6" (OuterVolumeSpecName: "kube-api-access-jcsf6") pod "f0685533-643f-4d9d-85a8-1d45cf68c77e" (UID: "f0685533-643f-4d9d-85a8-1d45cf68c77e"). InnerVolumeSpecName "kube-api-access-jcsf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:09:50.608340 kubelet[2669]: I0813 07:09:50.608259 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0685533-643f-4d9d-85a8-1d45cf68c77e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f0685533-643f-4d9d-85a8-1d45cf68c77e" (UID: "f0685533-643f-4d9d-85a8-1d45cf68c77e"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:09:50.615097 systemd[1]: var-lib-kubelet-pods-f0685533\x2d643f\x2d4d9d\x2d85a8\x2d1d45cf68c77e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 07:09:50.651391 kubelet[2669]: I0813 07:09:50.651155 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcsf6\" (UniqueName: \"kubernetes.io/projected/f0685533-643f-4d9d-85a8-1d45cf68c77e-kube-api-access-jcsf6\") on node \"ci-4081.3.5-e-55e36c071a\" DevicePath \"\"" Aug 13 07:09:50.651391 kubelet[2669]: I0813 07:09:50.651200 2669 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0685533-643f-4d9d-85a8-1d45cf68c77e-calico-apiserver-certs\") on node \"ci-4081.3.5-e-55e36c071a\" DevicePath \"\"" Aug 13 07:09:52.735180 kubelet[2669]: E0813 07:09:52.735123 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:09:52.739095 kubelet[2669]: I0813 07:09:52.738868 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0685533-643f-4d9d-85a8-1d45cf68c77e" path="/var/lib/kubelet/pods/f0685533-643f-4d9d-85a8-1d45cf68c77e/volumes" Aug 13 07:09:53.932319 systemd[1]: Started sshd@15-165.232.152.216:22-139.178.89.65:51514.service - OpenSSH per-connection server daemon (139.178.89.65:51514). Aug 13 07:09:54.032029 sshd[6423]: Accepted publickey for core from 139.178.89.65 port 51514 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:54.036150 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:54.047753 systemd-logind[1556]: New session 16 of user core. Aug 13 07:09:54.053257 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:09:54.852225 sshd[6423]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:54.866030 systemd[1]: Started sshd@16-165.232.152.216:22-139.178.89.65:51526.service - OpenSSH per-connection server daemon (139.178.89.65:51526). Aug 13 07:09:54.866583 systemd[1]: sshd@15-165.232.152.216:22-139.178.89.65:51514.service: Deactivated successfully. Aug 13 07:09:54.883974 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:09:54.885056 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:09:54.891936 systemd-logind[1556]: Removed session 16. Aug 13 07:09:54.950810 sshd[6475]: Accepted publickey for core from 139.178.89.65 port 51526 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:54.952191 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:54.964915 systemd-logind[1556]: New session 17 of user core. Aug 13 07:09:54.971436 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:09:55.376009 sshd[6475]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:55.389216 systemd[1]: Started sshd@17-165.232.152.216:22-139.178.89.65:51530.service - OpenSSH per-connection server daemon (139.178.89.65:51530). Aug 13 07:09:55.392372 systemd[1]: sshd@16-165.232.152.216:22-139.178.89.65:51526.service: Deactivated successfully. Aug 13 07:09:55.408048 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:09:55.413909 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. 
Aug 13 07:09:55.421720 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:55.415895 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:55.415907 systemd-resolved[1480]: Flushed all caches. Aug 13 07:09:55.421077 systemd-logind[1556]: Removed session 17. Aug 13 07:09:55.481459 sshd[6489]: Accepted publickey for core from 139.178.89.65 port 51530 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:55.489349 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:55.510336 systemd-logind[1556]: New session 18 of user core. Aug 13 07:09:55.519633 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:09:57.159194 containerd[1586]: time="2025-08-13T07:09:57.158675711Z" level=info msg="StopContainer for \"50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8\" with timeout 30 (s)" Aug 13 07:09:57.164821 containerd[1586]: time="2025-08-13T07:09:57.163953675Z" level=info msg="Stop container \"50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8\" with signal terminated" Aug 13 07:09:57.462017 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:57.462120 systemd-resolved[1480]: Flushed all caches. Aug 13 07:09:57.468023 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:57.515029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8-rootfs.mount: Deactivated successfully. Aug 13 07:09:57.699818 containerd[1586]: time="2025-08-13T07:09:57.577456691Z" level=info msg="shim disconnected" id=50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8 namespace=k8s.io Aug 13 07:09:57.699818 containerd[1586]: time="2025-08-13T07:09:57.692042890Z" level=warning msg="cleaning up after shim disconnected" id=50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8 namespace=k8s.io Aug 13 07:09:57.699818 containerd[1586]: time="2025-08-13T07:09:57.692060453Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:09:57.891904 containerd[1586]: time="2025-08-13T07:09:57.891110372Z" level=info msg="StopContainer for \"50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8\" returns successfully" Aug 13 07:09:57.893814 containerd[1586]: time="2025-08-13T07:09:57.893058314Z" level=info msg="StopPodSandbox for \"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6\"" Aug 13 07:09:57.895983 containerd[1586]: time="2025-08-13T07:09:57.894063006Z" level=info msg="Container to stop \"50d5adf4f44a12608ac488c4b36c153a709e0de9879e148a5b467ee1164b81f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:09:57.907750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6-shm.mount: Deactivated successfully. 
Aug 13 07:09:58.057813 containerd[1586]: time="2025-08-13T07:09:58.054908043Z" level=info msg="shim disconnected" id=df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6 namespace=k8s.io Aug 13 07:09:58.057813 containerd[1586]: time="2025-08-13T07:09:58.054985768Z" level=warning msg="cleaning up after shim disconnected" id=df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6 namespace=k8s.io Aug 13 07:09:58.057813 containerd[1586]: time="2025-08-13T07:09:58.054997939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:09:58.063153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6-rootfs.mount: Deactivated successfully. Aug 13 07:09:58.144534 containerd[1586]: time="2025-08-13T07:09:58.144409135Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:09:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:09:58.305678 systemd-networkd[1223]: calibb0d6015b61: Link DOWN Aug 13 07:09:58.305687 systemd-networkd[1223]: calibb0d6015b61: Lost carrier Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.301 [INFO][6591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.303 [INFO][6591] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" iface="eth0" netns="/var/run/netns/cni-5fd8b783-e766-09e6-f6aa-3d6bcb8da2ef" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.303 [INFO][6591] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" iface="eth0" netns="/var/run/netns/cni-5fd8b783-e766-09e6-f6aa-3d6bcb8da2ef" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.328 [INFO][6591] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" after=24.917334ms iface="eth0" netns="/var/run/netns/cni-5fd8b783-e766-09e6-f6aa-3d6bcb8da2ef" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.328 [INFO][6591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.328 [INFO][6591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.639 [INFO][6600] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" HandleID="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.646 [INFO][6600] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.646 [INFO][6600] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.804 [INFO][6600] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" HandleID="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.804 [INFO][6600] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" HandleID="k8s-pod-network.df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Workload="ci--4081.3.5--e--55e36c071a-k8s-calico--apiserver--5b5498b48d--m2ftn-eth0" Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.809 [INFO][6600] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:09:58.832875 containerd[1586]: 2025-08-13 07:09:58.816 [INFO][6591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Aug 13 07:09:58.845344 containerd[1586]: time="2025-08-13T07:09:58.836987303Z" level=info msg="TearDown network for sandbox \"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6\" successfully" Aug 13 07:09:58.845344 containerd[1586]: time="2025-08-13T07:09:58.837040018Z" level=info msg="StopPodSandbox for \"df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6\" returns successfully" Aug 13 07:09:58.846582 systemd[1]: run-netns-cni\x2d5fd8b783\x2de766\x2d09e6\x2df6aa\x2d3d6bcb8da2ef.mount: Deactivated successfully. Aug 13 07:09:58.923899 kubelet[2669]: I0813 07:09:58.915047 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df81579259190dc2c5e65959533a6e3cb33788cbb025b1494f3195ea7dcaf5b6" Aug 13 07:09:59.033570 kubelet[2669]: I0813 07:09:59.033471 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94rbl\" (UniqueName: \"kubernetes.io/projected/99077a63-9db7-4cec-a6a2-af9cb28b57de-kube-api-access-94rbl\") pod \"99077a63-9db7-4cec-a6a2-af9cb28b57de\" (UID: \"99077a63-9db7-4cec-a6a2-af9cb28b57de\") " Aug 13 07:09:59.033994 kubelet[2669]: I0813 07:09:59.033971 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99077a63-9db7-4cec-a6a2-af9cb28b57de-calico-apiserver-certs\") pod \"99077a63-9db7-4cec-a6a2-af9cb28b57de\" (UID: \"99077a63-9db7-4cec-a6a2-af9cb28b57de\") " Aug 13 07:09:59.108125 systemd[1]: var-lib-kubelet-pods-99077a63\x2d9db7\x2d4cec\x2da6a2\x2daf9cb28b57de-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 07:09:59.123950 systemd[1]: var-lib-kubelet-pods-99077a63\x2d9db7\x2d4cec\x2da6a2\x2daf9cb28b57de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94rbl.mount: Deactivated successfully. Aug 13 07:09:59.129957 kubelet[2669]: I0813 07:09:59.125919 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99077a63-9db7-4cec-a6a2-af9cb28b57de-kube-api-access-94rbl" (OuterVolumeSpecName: "kube-api-access-94rbl") pod "99077a63-9db7-4cec-a6a2-af9cb28b57de" (UID: "99077a63-9db7-4cec-a6a2-af9cb28b57de"). InnerVolumeSpecName "kube-api-access-94rbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:09:59.129957 kubelet[2669]: I0813 07:09:59.126854 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99077a63-9db7-4cec-a6a2-af9cb28b57de-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "99077a63-9db7-4cec-a6a2-af9cb28b57de" (UID: "99077a63-9db7-4cec-a6a2-af9cb28b57de"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:09:59.135746 kubelet[2669]: I0813 07:09:59.135355 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94rbl\" (UniqueName: \"kubernetes.io/projected/99077a63-9db7-4cec-a6a2-af9cb28b57de-kube-api-access-94rbl\") on node \"ci-4081.3.5-e-55e36c071a\" DevicePath \"\"" Aug 13 07:09:59.135746 kubelet[2669]: I0813 07:09:59.135405 2669 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99077a63-9db7-4cec-a6a2-af9cb28b57de-calico-apiserver-certs\") on node \"ci-4081.3.5-e-55e36c071a\" DevicePath \"\"" Aug 13 07:09:59.302714 sshd[6489]: pam_unix(sshd:session): session closed for user core Aug 13 07:09:59.343450 systemd[1]: sshd@17-165.232.152.216:22-139.178.89.65:51530.service: Deactivated successfully. Aug 13 07:09:59.362405 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:09:59.365593 systemd-logind[1556]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:09:59.377192 systemd[1]: Started sshd@18-165.232.152.216:22-139.178.89.65:60854.service - OpenSSH per-connection server daemon (139.178.89.65:60854). Aug 13 07:09:59.383040 systemd-logind[1556]: Removed session 18. Aug 13 07:09:59.455328 sshd[6620]: Accepted publickey for core from 139.178.89.65 port 60854 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:09:59.458225 sshd[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:09:59.472773 systemd-logind[1556]: New session 19 of user core. Aug 13 07:09:59.479282 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:09:59.511766 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:09:59.511361 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:09:59.511375 systemd-resolved[1480]: Flushed all caches. Aug 13 07:10:00.309275 sshd[6620]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:00.319455 systemd[1]: Started sshd@19-165.232.152.216:22-139.178.89.65:60856.service - OpenSSH per-connection server daemon (139.178.89.65:60856). Aug 13 07:10:00.325993 systemd[1]: sshd@18-165.232.152.216:22-139.178.89.65:60854.service: Deactivated successfully. Aug 13 07:10:00.343168 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:10:00.349668 systemd-logind[1556]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:10:00.353549 systemd-logind[1556]: Removed session 19. Aug 13 07:10:00.397458 sshd[6630]: Accepted publickey for core from 139.178.89.65 port 60856 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:10:00.399739 sshd[6630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:00.406388 systemd-logind[1556]: New session 20 of user core. Aug 13 07:10:00.414351 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 07:10:00.578690 sshd[6630]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:00.583945 systemd-logind[1556]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:10:00.587086 systemd[1]: sshd@19-165.232.152.216:22-139.178.89.65:60856.service: Deactivated successfully. Aug 13 07:10:00.591944 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:10:00.594113 systemd-logind[1556]: Removed session 20. Aug 13 07:10:00.759550 kubelet[2669]: I0813 07:10:00.743298 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99077a63-9db7-4cec-a6a2-af9cb28b57de" path="/var/lib/kubelet/pods/99077a63-9db7-4cec-a6a2-af9cb28b57de/volumes" Aug 13 07:10:05.463826 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:10:05.464857 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:10:05.463835 systemd-resolved[1480]: Flushed all caches. Aug 13 07:10:05.589245 systemd[1]: Started sshd@20-165.232.152.216:22-139.178.89.65:60870.service - OpenSSH per-connection server daemon (139.178.89.65:60870). Aug 13 07:10:05.724064 sshd[6652]: Accepted publickey for core from 139.178.89.65 port 60870 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:10:05.726653 sshd[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:05.733136 systemd-logind[1556]: New session 21 of user core. Aug 13 07:10:05.739493 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:10:06.028136 sshd[6652]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:06.039740 systemd[1]: sshd@20-165.232.152.216:22-139.178.89.65:60870.service: Deactivated successfully. Aug 13 07:10:06.046503 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:10:06.049215 systemd-logind[1556]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:10:06.051882 systemd-logind[1556]: Removed session 21. Aug 13 07:10:08.793179 systemd[1]: run-containerd-runc-k8s.io-335e116abb9a31b849c082c1e008bf81f1db3fa635334de7515f9f0dd51016ea-runc.Hk6pa2.mount: Deactivated successfully. Aug 13 07:10:11.040115 systemd[1]: Started sshd@21-165.232.152.216:22-139.178.89.65:33438.service - OpenSSH per-connection server daemon (139.178.89.65:33438). Aug 13 07:10:11.154025 sshd[6713]: Accepted publickey for core from 139.178.89.65 port 33438 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:10:11.156893 sshd[6713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:11.163255 systemd-logind[1556]: New session 22 of user core. Aug 13 07:10:11.170275 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:10:11.407268 sshd[6713]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:11.412440 systemd[1]: sshd@21-165.232.152.216:22-139.178.89.65:33438.service: Deactivated successfully. Aug 13 07:10:11.420026 systemd-journald[1137]: Under memory pressure, flushing caches. Aug 13 07:10:11.414973 systemd-resolved[1480]: Under memory pressure, flushing caches. Aug 13 07:10:11.414984 systemd-resolved[1480]: Flushed all caches. Aug 13 07:10:11.424316 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:10:11.424851 systemd-logind[1556]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:10:11.426125 systemd-logind[1556]: Removed session 22. 
Aug 13 07:10:16.418599 systemd[1]: Started sshd@22-165.232.152.216:22-139.178.89.65:33450.service - OpenSSH per-connection server daemon (139.178.89.65:33450). Aug 13 07:10:16.531474 sshd[6728]: Accepted publickey for core from 139.178.89.65 port 33450 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:10:16.532806 sshd[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:16.550050 systemd-logind[1556]: New session 23 of user core. Aug 13 07:10:16.553611 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:10:16.993514 sshd[6728]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:17.003207 systemd[1]: sshd@22-165.232.152.216:22-139.178.89.65:33450.service: Deactivated successfully. Aug 13 07:10:17.016316 systemd-logind[1556]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:10:17.017122 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:10:17.022055 systemd-logind[1556]: Removed session 23. Aug 13 07:10:22.008009 systemd[1]: Started sshd@23-165.232.152.216:22-139.178.89.65:46842.service - OpenSSH per-connection server daemon (139.178.89.65:46842). Aug 13 07:10:22.113960 sshd[6743]: Accepted publickey for core from 139.178.89.65 port 46842 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:10:22.120278 sshd[6743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:10:22.134511 systemd-logind[1556]: New session 24 of user core. Aug 13 07:10:22.142107 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 07:10:22.497639 sshd[6743]: pam_unix(sshd:session): session closed for user core Aug 13 07:10:22.509692 systemd[1]: sshd@23-165.232.152.216:22-139.178.89.65:46842.service: Deactivated successfully. Aug 13 07:10:22.516766 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:10:22.518522 systemd-logind[1556]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:10:22.520345 systemd-logind[1556]: Removed session 24. Aug 13 07:10:22.741816 kubelet[2669]: E0813 07:10:22.741611 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:10:23.733897 kubelet[2669]: E0813 07:10:23.733849 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Aug 13 07:10:24.345705 systemd[1]: run-containerd-runc-k8s.io-1afe514e61f4b930d204016d50d9d404f051e6818ee721d62ee003dea2afd402-runc.cugVi2.mount: Deactivated successfully.