Aug 13 07:11:40.972070 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025 Aug 13 07:11:40.972096 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:11:40.972109 kernel: BIOS-provided physical RAM map: Aug 13 07:11:40.972116 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 07:11:40.972123 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 07:11:40.972129 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 07:11:40.972137 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 07:11:40.972144 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 07:11:40.972151 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 07:11:40.972160 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 07:11:40.972167 kernel: NX (Execute Disable) protection: active Aug 13 07:11:40.976369 kernel: APIC: Static calls initialized Aug 13 07:11:40.976404 kernel: SMBIOS 2.8 present. Aug 13 07:11:40.976417 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 07:11:40.976432 kernel: Hypervisor detected: KVM Aug 13 07:11:40.976459 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 07:11:40.976474 kernel: kvm-clock: using sched offset of 2978322622 cycles Aug 13 07:11:40.976488 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 07:11:40.976513 kernel: tsc: Detected 2494.140 MHz processor Aug 13 07:11:40.976525 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 07:11:40.976539 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 07:11:40.976552 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 07:11:40.976560 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 07:11:40.976568 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 07:11:40.976580 kernel: ACPI: Early table checksum verification disabled Aug 13 07:11:40.976588 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 07:11:40.976596 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:11:40.976604 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:11:40.976613 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:11:40.976621 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 07:11:40.976628 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:11:40.976636 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:11:40.976647 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:11:40.976662 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:11:40.976674 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 13 07:11:40.976686 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 07:11:40.976698 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 07:11:40.976710 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 07:11:40.976723 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 07:11:40.976735 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 07:11:40.976757 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 07:11:40.976770 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 07:11:40.976783 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 07:11:40.976797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 07:11:40.976810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 07:11:40.976830 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 13 07:11:40.976845 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 13 07:11:40.976863 kernel: Zone ranges: Aug 13 07:11:40.976879 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 07:11:40.976893 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 07:11:40.976908 kernel: Normal empty Aug 13 07:11:40.976922 kernel: Movable zone start for each node Aug 13 07:11:40.976938 kernel: Early memory node ranges Aug 13 07:11:40.976952 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 07:11:40.976967 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 07:11:40.976976 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 07:11:40.976988 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 07:11:40.976997 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 07:11:40.977008 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 13 07:11:40.977017 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 07:11:40.977026 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 07:11:40.977034 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 07:11:40.977042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 07:11:40.977051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 07:11:40.977060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 07:11:40.977071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 07:11:40.977079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 07:11:40.977087 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 07:11:40.977096 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 07:11:40.977104 kernel: TSC deadline timer available Aug 13 07:11:40.977113 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 07:11:40.977121 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 07:11:40.977130 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 07:11:40.977140 kernel: Booting paravirtualized kernel on KVM Aug 13 07:11:40.977152 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 07:11:40.977160 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 07:11:40.977169 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 07:11:40.977193 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 07:11:40.977201 kernel: pcpu-alloc: [0] 0 1 Aug 13 07:11:40.977209 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 07:11:40.977220 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:11:40.977229 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:11:40.977241 kernel: random: crng init done Aug 13 07:11:40.977249 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:11:40.977257 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 07:11:40.977266 kernel: Fallback order for Node 0: 0 Aug 13 07:11:40.977274 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 13 07:11:40.977282 kernel: Policy zone: DMA32 Aug 13 07:11:40.977292 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:11:40.977307 kernel: Memory: 1971208K/2096612K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 125148K reserved, 0K cma-reserved) Aug 13 07:11:40.977320 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 07:11:40.977336 kernel: Kernel/User page tables isolation: enabled Aug 13 07:11:40.977348 kernel: ftrace: allocating 37968 entries in 149 pages Aug 13 07:11:40.977360 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 07:11:40.977373 kernel: Dynamic Preempt: voluntary Aug 13 07:11:40.977385 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:11:40.977399 kernel: rcu: RCU event tracing is enabled. Aug 13 07:11:40.977413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 07:11:40.977427 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:11:40.977440 kernel: Rude variant of Tasks RCU enabled. Aug 13 07:11:40.977453 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:11:40.977462 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 07:11:40.977470 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 07:11:40.977478 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 07:11:40.977487 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:11:40.977499 kernel: Console: colour VGA+ 80x25 Aug 13 07:11:40.977509 kernel: printk: console [tty0] enabled Aug 13 07:11:40.977517 kernel: printk: console [ttyS0] enabled Aug 13 07:11:40.977526 kernel: ACPI: Core revision 20230628 Aug 13 07:11:40.977534 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 07:11:40.977546 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 07:11:40.977555 kernel: x2apic enabled Aug 13 07:11:40.977563 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 07:11:40.977572 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 07:11:40.977580 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 07:11:40.977589 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Aug 13 07:11:40.977598 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 07:11:40.977606 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 07:11:40.977626 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 07:11:40.977635 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 07:11:40.977644 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 07:11:40.977656 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 07:11:40.977665 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 07:11:40.977674 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 07:11:40.977683 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 07:11:40.977691 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 07:11:40.977700 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 07:11:40.977715 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 07:11:40.977724 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 07:11:40.977733 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 07:11:40.977742 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 07:11:40.977751 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 07:11:40.977760 kernel: Freeing SMP alternatives memory: 32K Aug 13 07:11:40.977769 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:11:40.977778 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:11:40.977790 kernel: landlock: Up and running. Aug 13 07:11:40.977799 kernel: SELinux: Initializing. Aug 13 07:11:40.977808 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:11:40.977817 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 07:11:40.977826 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 07:11:40.977835 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:11:40.977845 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:11:40.977854 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 07:11:40.977866 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Aug 13 07:11:40.977875 kernel: signal: max sigframe size: 1776 Aug 13 07:11:40.977884 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:11:40.977893 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:11:40.977902 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 07:11:40.977911 kernel: smp: Bringing up secondary CPUs ... Aug 13 07:11:40.977920 kernel: smpboot: x86: Booting SMP configuration: Aug 13 07:11:40.977929 kernel: .... node #0, CPUs: #1 Aug 13 07:11:40.977938 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 07:11:40.977949 kernel: smpboot: Max logical packages: 1 Aug 13 07:11:40.977961 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Aug 13 07:11:40.977970 kernel: devtmpfs: initialized Aug 13 07:11:40.977979 kernel: x86/mm: Memory block size: 128MB Aug 13 07:11:40.977988 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:11:40.977997 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 07:11:40.978006 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:11:40.978015 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:11:40.978024 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:11:40.978033 kernel: audit: type=2000 audit(1755069099.566:1): state=initialized audit_enabled=0 res=1 Aug 13 07:11:40.978045 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:11:40.978054 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 07:11:40.978063 kernel: cpuidle: using governor menu Aug 13 07:11:40.978072 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:11:40.978081 kernel: dca service started, version 1.12.1 Aug 13 07:11:40.978090 kernel: PCI: Using configuration type 1 for base access Aug 13 07:11:40.978099 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 07:11:40.978108 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:11:40.978142 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:11:40.978155 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:11:40.978164 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:11:40.978194 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:11:40.978208 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:11:40.978222 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 07:11:40.978237 kernel: ACPI: Interpreter enabled Aug 13 07:11:40.978250 kernel: ACPI: PM: (supports S0 S5) Aug 13 07:11:40.978263 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 07:11:40.978279 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 07:11:40.978299 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 07:11:40.978315 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 07:11:40.978329 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:11:40.978614 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:11:40.978739 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 07:11:40.978849 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 07:11:40.978862 kernel: acpiphp: Slot [3] registered Aug 13 07:11:40.978877 kernel: acpiphp: Slot [4] registered Aug 13 07:11:40.978886 kernel: acpiphp: Slot [5] registered Aug 13 07:11:40.978895 kernel: acpiphp: Slot [6] registered Aug 13 07:11:40.978904 kernel: acpiphp: Slot [7] registered Aug 13 07:11:40.978913 kernel: acpiphp: Slot [8] registered Aug 13 07:11:40.978922 kernel: acpiphp: Slot [9] registered Aug 13 07:11:40.978931 kernel: acpiphp: Slot [10] registered Aug 13 07:11:40.978940 kernel: acpiphp: Slot [11] registered Aug 13 07:11:40.978949 kernel: acpiphp: Slot [12] registered Aug 13 07:11:40.978961 kernel: acpiphp: Slot [13] registered Aug 13 07:11:40.978970 kernel: acpiphp: Slot [14] registered Aug 13 07:11:40.978979 kernel: acpiphp: Slot [15] registered Aug 13 07:11:40.978988 kernel: acpiphp: Slot [16] registered Aug 13 07:11:40.978997 kernel: acpiphp: Slot [17] registered Aug 13 07:11:40.979005 kernel: acpiphp: Slot [18] registered Aug 13 07:11:40.979014 kernel: acpiphp: Slot [19] registered Aug 13 07:11:40.979023 kernel: acpiphp: Slot [20] registered Aug 13 07:11:40.979032 kernel: acpiphp: Slot [21] registered Aug 13 07:11:40.979041 kernel: acpiphp: Slot [22] registered Aug 13 07:11:40.979053 kernel: acpiphp: Slot [23] registered Aug 13 07:11:40.979062 kernel: acpiphp: Slot [24] registered Aug 13 07:11:40.979071 kernel: acpiphp: Slot [25] registered Aug 13 07:11:40.979081 kernel: acpiphp: Slot [26] registered Aug 13 07:11:40.979094 kernel: acpiphp: Slot [27] registered Aug 13 07:11:40.979107 kernel: acpiphp: Slot [28] registered Aug 13 07:11:40.979118 kernel: acpiphp: Slot [29] registered Aug 13 07:11:40.979127 kernel: acpiphp: Slot [30] registered Aug 13 07:11:40.979136 kernel: acpiphp: Slot [31] registered Aug 13 07:11:40.979149 kernel: PCI host bridge to bus 0000:00 Aug 13 07:11:40.979296 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 07:11:40.979389 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 07:11:40.979476 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:11:40.979563 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 07:11:40.979650 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 07:11:40.979772 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:11:40.979943 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 07:11:40.980130 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 13 07:11:40.982483 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 13 07:11:40.982687 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 13 07:11:40.982843 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 07:11:40.982994 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 07:11:40.983153 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 07:11:40.983340 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 07:11:40.983512 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 13 07:11:40.983678 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 13 07:11:40.983881 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 07:11:40.984038 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 07:11:40.986328 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 07:11:40.986553 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 13 07:11:40.986720 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 13 07:11:40.986870 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 07:11:40.987014 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 13 07:11:40.987160 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 13 07:11:40.987319 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 07:11:40.987498 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:11:40.987650 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 13 07:11:40.987800 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 13 07:11:40.987963 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 07:11:40.988158 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 07:11:40.990397 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 13 07:11:40.990509 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 13 07:11:40.990616 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 07:11:40.990730 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 13 07:11:40.990828 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 13 07:11:40.990940 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 13 07:11:40.991096 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 07:11:40.993335 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:11:40.993465 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 07:11:40.993575 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 13 07:11:40.993671 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 07:11:40.993784 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 13 07:11:40.993882 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Aug 13 07:11:40.994072 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 13 07:11:40.994310 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 07:11:40.994444 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 13 07:11:40.994563 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 13 07:11:40.994658 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 07:11:40.994671 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 07:11:40.994680 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 07:11:40.994689 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 07:11:40.994699 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 07:11:40.994707 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 07:11:40.994721 kernel: iommu: Default domain type: Translated Aug 13 07:11:40.994730 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 07:11:40.994738 kernel: PCI: Using ACPI for IRQ routing Aug 13 07:11:40.994747 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 07:11:40.994756 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 07:11:40.994765 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 07:11:40.994863 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 07:11:40.994996 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 07:11:40.995127 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 07:11:40.995146 kernel: vgaarb: loaded Aug 13 07:11:40.995155 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 07:11:40.995165 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 07:11:40.995188 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 07:11:40.995203 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:11:40.995216 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:11:40.995231 kernel: pnp: PnP ACPI init Aug 13 07:11:40.995244 kernel: pnp: PnP ACPI: found 4 devices Aug 13 07:11:40.995257 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 07:11:40.995280 kernel: NET: Registered PF_INET protocol family Aug 13 07:11:40.995294 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:11:40.995337 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 07:11:40.995352 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:11:40.995361 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 07:11:40.995370 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 07:11:40.995380 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 07:11:40.995389 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:11:40.995398 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 07:11:40.995413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:11:40.995422 kernel: NET: Registered PF_XDP protocol family Aug 13 07:11:40.995549 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 07:11:40.995637 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 07:11:40.995750 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:11:40.995884 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 07:11:40.995989 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 07:11:40.996095 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 07:11:40.996313 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 07:11:40.996331 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 07:11:40.996434 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 38559 usecs Aug 13 07:11:40.996447 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:11:40.996456 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 07:11:40.996466 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 07:11:40.996475 kernel: Initialise system trusted keyrings Aug 13 07:11:40.996484 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 07:11:40.996513 kernel: Key type asymmetric registered Aug 13 07:11:40.996527 kernel: Asymmetric key parser 'x509' registered Aug 13 07:11:40.996540 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 07:11:40.996553 kernel: io scheduler mq-deadline registered Aug 13 07:11:40.996563 kernel: io scheduler kyber registered Aug 13 07:11:40.996572 kernel: io scheduler bfq registered Aug 13 07:11:40.996581 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 07:11:40.996590 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 07:11:40.996600 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 07:11:40.996612 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 07:11:40.996621 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:11:40.996630 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 07:11:40.996639 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 07:11:40.996649 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 07:11:40.996658 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 07:11:40.996806 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 07:11:40.996820 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 07:11:40.996910 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 07:11:40.997004 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:11:40 UTC (1755069100) Aug 13 07:11:40.997093 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 07:11:40.997105 kernel: intel_pstate: CPU model not supported Aug 13 07:11:40.997114 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:11:40.997123 kernel: Segment Routing with IPv6 Aug 13 07:11:40.997132 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:11:40.997141 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:11:40.997154 kernel: Key type dns_resolver registered Aug 13 07:11:40.997184 kernel: IPI shorthand broadcast: enabled Aug 13 07:11:40.997194 kernel: sched_clock: Marking stable (1025004204, 119267403)->(1260363622, -116092015) Aug 13 07:11:40.997203 kernel: registered taskstats version 1 Aug 13 07:11:40.997212 kernel: Loading compiled-in X.509 certificates Aug 13 07:11:40.997221 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 07:11:40.997231 kernel: Key type .fscrypt registered Aug 13 07:11:40.997239 kernel: Key type fscrypt-provisioning registered
Aug 13 07:11:40.997248 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:11:40.997257 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:11:40.997270 kernel: ima: No architecture policies found Aug 13 07:11:40.997279 kernel: clk: Disabling unused clocks Aug 13 07:11:40.997288 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 07:11:40.997297 kernel: Write protecting the kernel read-only data: 36864k Aug 13 07:11:40.997306 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 07:11:40.997337 kernel: Run /init as init process Aug 13 07:11:40.997349 kernel: with arguments: Aug 13 07:11:40.997359 kernel: /init Aug 13 07:11:40.997376 kernel: with environment: Aug 13 07:11:40.997389 kernel: HOME=/ Aug 13 07:11:40.997398 kernel: TERM=linux Aug 13 07:11:40.997408 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:11:40.997420 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:11:40.997433 systemd[1]: Detected virtualization kvm. Aug 13 07:11:40.997443 systemd[1]: Detected architecture x86-64. Aug 13 07:11:40.997452 systemd[1]: Running in initrd. Aug 13 07:11:40.997465 systemd[1]: No hostname configured, using default hostname. Aug 13 07:11:40.997475 systemd[1]: Hostname set to <localhost>. Aug 13 07:11:40.997485 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:11:40.997494 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:11:40.997505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:11:40.997514 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:11:40.997525 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:11:40.997535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:11:40.997548 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:11:40.997558 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:11:40.997570 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:11:40.997580 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:11:40.997590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:11:40.997600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:11:40.997609 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:11:40.997625 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:11:40.997639 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:11:40.997649 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:11:40.997662 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:11:40.997673 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:11:40.997683 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:11:40.997695 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:11:40.997706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:11:40.997716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:11:40.997726 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:11:40.997736 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:11:40.997746 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:11:40.997756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:11:40.997766 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:11:40.997779 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 07:11:40.997789 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:11:40.997799 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:11:40.997809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:11:40.997819 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:11:40.997829 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:11:40.997839 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:11:40.997853 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:11:40.997896 systemd-journald[183]: Collecting audit messages is disabled. Aug 13 07:11:40.997938 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:11:40.997955 systemd-journald[183]: Journal started Aug 13 07:11:40.997985 systemd-journald[183]: Runtime Journal (/run/log/journal/841643b469a34b4a82a92c2c8b63e790) is 4.9M, max 39.3M, 34.4M free. Aug 13 07:11:40.980694 systemd-modules-load[184]: Inserted module 'overlay' Aug 13 07:11:41.016583 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:11:41.014846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:11:41.023201 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:11:41.026117 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 13 07:11:41.026690 kernel: Bridge firewalling registered Aug 13 07:11:41.026641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:11:41.028364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:11:41.031673 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:11:41.036695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:11:41.049849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:11:41.053494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:11:41.063376 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:11:41.067906 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:11:41.071037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Aug 13 07:11:41.076674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:11:41.084197 dracut-cmdline[212]: dracut-dracut-053 Aug 13 07:11:41.085114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:11:41.086734 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a Aug 13 07:11:41.125577 systemd-resolved[224]: Positive Trust Anchors: Aug 13 07:11:41.126292 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:11:41.126809 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:11:41.132228 systemd-resolved[224]: Defaulting to hostname 'linux'. Aug 13 07:11:41.133855 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:11:41.134754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:11:41.184235 kernel: SCSI subsystem initialized Aug 13 07:11:41.195238 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:11:41.208297 kernel: iscsi: registered transport (tcp) Aug 13 07:11:41.232248 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:11:41.232340 kernel: QLogic iSCSI HBA Driver Aug 13 07:11:41.292392 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:11:41.303458 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:11:41.334393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:11:41.334480 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:11:41.334500 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:11:41.384264 kernel: raid6: avx2x4 gen() 14843 MB/s Aug 13 07:11:41.400247 kernel: raid6: avx2x2 gen() 15786 MB/s Aug 13 07:11:41.417626 kernel: raid6: avx2x1 gen() 11296 MB/s Aug 13 07:11:41.417732 kernel: raid6: using algorithm avx2x2 gen() 15786 MB/s Aug 13 07:11:41.435492 kernel: raid6: .... xor() 16636 MB/s, rmw enabled Aug 13 07:11:41.435573 kernel: raid6: using avx2x2 recovery algorithm Aug 13 07:11:41.458220 kernel: xor: automatically using best checksumming function avx Aug 13 07:11:41.636216 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:11:41.652132 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:11:41.658480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:11:41.686145 systemd-udevd[403]: Using default interface naming scheme 'v255'. 
Aug 13 07:11:41.693346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:11:41.702490 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 07:11:41.725832 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Aug 13 07:11:41.765246 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:11:41.771437 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:11:41.852275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:11:41.857343 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:11:41.889406 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:11:41.891911 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:11:41.893901 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:11:41.895083 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:11:41.900367 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:11:41.918377 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:11:41.945890 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 07:11:41.945951 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 13 07:11:41.957199 kernel: scsi host0: Virtio SCSI HBA Aug 13 07:11:41.959999 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 07:11:41.977200 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 07:11:41.981850 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:11:41.981922 kernel: GPT:9289727 != 125829119 Aug 13 07:11:41.981936 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:11:41.981948 kernel: GPT:9289727 != 125829119 Aug 13 07:11:41.981960 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:11:41.981972 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:11:41.981661 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:11:41.981818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:11:41.988369 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:11:41.988820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:11:41.989015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:11:41.989495 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:11:41.998689 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 13 07:11:41.998998 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Aug 13 07:11:42.000543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:11:42.004862 kernel: AES CTR mode by8 optimization enabled Aug 13 07:11:42.050244 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (454) Aug 13 07:11:42.056266 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (452) Aug 13 07:11:42.058711 kernel: libata version 3.00 loaded. 
Aug 13 07:11:42.080225 kernel: ACPI: bus type USB registered Aug 13 07:11:42.080301 kernel: usbcore: registered new interface driver usbfs Aug 13 07:11:42.080325 kernel: usbcore: registered new interface driver hub Aug 13 07:11:42.080346 kernel: usbcore: registered new device driver usb Aug 13 07:11:42.117715 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 07:11:42.137715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:11:42.145209 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 07:11:42.147196 kernel: scsi host1: ata_piix Aug 13 07:11:42.153003 kernel: scsi host2: ata_piix Aug 13 07:11:42.153295 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 13 07:11:42.153317 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 13 07:11:42.162139 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 07:11:42.170755 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 07:11:42.171446 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 07:11:42.179266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:11:42.187508 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 07:11:42.190901 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 07:11:42.191137 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 07:11:42.192794 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 07:11:42.193019 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 13 07:11:42.193991 kernel: hub 1-0:1.0: USB hub found Aug 13 07:11:42.195326 kernel: hub 1-0:1.0: 2 ports detected Aug 13 07:11:42.195418 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:11:42.210239 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:11:42.210387 disk-uuid[541]: Primary Header is updated. Aug 13 07:11:42.210387 disk-uuid[541]: Secondary Entries is updated. Aug 13 07:11:42.210387 disk-uuid[541]: Secondary Header is updated. Aug 13 07:11:42.224409 kernel: GPT:disk_guids don't match. Aug 13 07:11:42.224583 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:11:42.224612 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:11:42.232771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:11:42.241206 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:11:43.231347 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:11:43.231830 disk-uuid[543]: The operation has completed successfully. Aug 13 07:11:43.278838 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:11:43.278961 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:11:43.293445 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:11:43.299146 sh[564]: Success Aug 13 07:11:43.315209 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 07:11:43.384427 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:11:43.387348 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 13 07:11:43.388583 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:11:43.421371 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad Aug 13 07:11:43.421439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:11:43.421455 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:11:43.423407 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:11:43.423475 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:11:43.433389 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:11:43.434814 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:11:43.446450 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:11:43.450562 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:11:43.459486 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:11:43.459561 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:11:43.459575 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:11:43.462199 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:11:43.474875 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 07:11:43.476849 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:11:43.482255 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:11:43.489468 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:11:43.614113 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:11:43.628578 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:11:43.659329 ignition[645]: Ignition 2.19.0 Aug 13 07:11:43.659343 ignition[645]: Stage: fetch-offline Aug 13 07:11:43.659401 ignition[645]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:11:43.659414 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:11:43.659542 ignition[645]: parsed url from cmdline: "" Aug 13 07:11:43.659548 ignition[645]: no config URL provided Aug 13 07:11:43.659556 ignition[645]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:11:43.659567 ignition[645]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:11:43.665271 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:11:43.659575 ignition[645]: failed to fetch config: resource requires networking Aug 13 07:11:43.659846 ignition[645]: Ignition finished successfully Aug 13 07:11:43.676964 systemd-networkd[752]: lo: Link UP Aug 13 07:11:43.676979 systemd-networkd[752]: lo: Gained carrier Aug 13 07:11:43.679440 systemd-networkd[752]: Enumeration completed Aug 13 07:11:43.679837 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 07:11:43.679841 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 13 07:11:43.681443 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Aug 13 07:11:43.681991 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:11:43.681995 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:11:43.682539 systemd[1]: Reached target network.target - Network. Aug 13 07:11:43.683018 systemd-networkd[752]: eth0: Link UP Aug 13 07:11:43.683026 systemd-networkd[752]: eth0: Gained carrier Aug 13 07:11:43.683043 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 07:11:43.687619 systemd-networkd[752]: eth1: Link UP Aug 13 07:11:43.687625 systemd-networkd[752]: eth1: Gained carrier Aug 13 07:11:43.687643 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:11:43.689407 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 07:11:43.698295 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.28/20 acquired from 169.254.169.253 Aug 13 07:11:43.702313 systemd-networkd[752]: eth0: DHCPv4 address 137.184.36.62/20, gateway 137.184.32.1 acquired from 169.254.169.253 Aug 13 07:11:43.711188 ignition[756]: Ignition 2.19.0 Aug 13 07:11:43.711876 ignition[756]: Stage: fetch Aug 13 07:11:43.712098 ignition[756]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:11:43.712110 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:11:43.712230 ignition[756]: parsed url from cmdline: "" Aug 13 07:11:43.712233 ignition[756]: no config URL provided Aug 13 07:11:43.712239 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:11:43.712248 ignition[756]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:11:43.712268 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 07:11:43.743950 ignition[756]: GET result: OK Aug 13 07:11:43.744100 ignition[756]: parsing config with SHA512: 87e5cedf0217ac13fa278bb6ee1050f0cc038531ffe601ddbc93cbf083e73baa65edb134e9ea839c7949a602cff6af88ed8a0998c496b68bc5912172e547f317 Aug 13 07:11:43.749016 unknown[756]: fetched base config from "system" Aug 13 07:11:43.749432 unknown[756]: fetched base config from "system" Aug 13 07:11:43.750067 ignition[756]: fetch: fetch complete Aug 13 07:11:43.749443 unknown[756]: fetched user config from "digitalocean" Aug 13 07:11:43.750076 ignition[756]: fetch: fetch passed Aug 13 07:11:43.750149 ignition[756]: Ignition finished successfully Aug 13 07:11:43.753573 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:11:43.757416 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:11:43.796774 ignition[764]: Ignition 2.19.0 Aug 13 07:11:43.796785 ignition[764]: Stage: kargs Aug 13 07:11:43.796986 ignition[764]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:11:43.796997 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:11:43.797956 ignition[764]: kargs: kargs passed Aug 13 07:11:43.799956 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:11:43.798010 ignition[764]: Ignition finished successfully Aug 13 07:11:43.804416 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 13 07:11:43.843369 ignition[771]: Ignition 2.19.0 Aug 13 07:11:43.843385 ignition[771]: Stage: disks Aug 13 07:11:43.843648 ignition[771]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:11:43.843665 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 07:11:43.847505 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:11:43.844739 ignition[771]: disks: disks passed Aug 13 07:11:43.844802 ignition[771]: Ignition finished successfully Aug 13 07:11:43.852701 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:11:43.853128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:11:43.854016 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:11:43.854859 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:11:43.855749 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:11:43.865450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:11:43.883814 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 07:11:43.886997 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:11:43.891626 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:11:44.016852 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:11:44.016411 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:11:44.018323 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:11:44.029376 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:11:44.032588 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:11:44.034113 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Aug 13 07:11:44.037616 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 07:11:44.042485 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:11:44.050250 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (787) Aug 13 07:11:44.050287 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:11:44.050305 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:11:44.050332 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:11:44.042531 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:11:44.056210 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:11:44.057881 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:11:44.060595 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:11:44.073426 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 07:11:44.149216 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 07:11:44.156259 coreos-metadata[789]: Aug 13 07:11:44.156 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:11:44.161818 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Aug 13 07:11:44.163391 coreos-metadata[790]: Aug 13 07:11:44.163 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 07:11:44.169089 coreos-metadata[789]: Aug 13 07:11:44.168 INFO Fetch successful
Aug 13 07:11:44.171818 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 07:11:44.176224 coreos-metadata[790]: Aug 13 07:11:44.175 INFO Fetch successful
Aug 13 07:11:44.179451 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 07:11:44.180081 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Aug 13 07:11:44.180230 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Aug 13 07:11:44.183640 coreos-metadata[790]: Aug 13 07:11:44.183 INFO wrote hostname ci-4081.3.5-a-2a2ab8bcea to /sysroot/etc/hostname
Aug 13 07:11:44.184746 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:11:44.299086 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 07:11:44.305357 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 07:11:44.307367 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 07:11:44.320245 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:11:44.345635 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 07:11:44.359608 ignition[908]: INFO : Ignition 2.19.0
Aug 13 07:11:44.362307 ignition[908]: INFO : Stage: mount
Aug 13 07:11:44.362307 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:11:44.362307 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:11:44.364091 ignition[908]: INFO : mount: mount passed
Aug 13 07:11:44.364722 ignition[908]: INFO : Ignition finished successfully
Aug 13 07:11:44.366364 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 07:11:44.373335 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 07:11:44.420916 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 07:11:44.427529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 07:11:44.449480 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (919)
Aug 13 07:11:44.449551 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:11:44.451455 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:11:44.451516 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:11:44.455211 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:11:44.458087 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
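Two coreos-metadata helpers (PIDs 789 and 790) hit the same endpoint here: one on behalf of the DigitalOcean network agent, the other to derive the hostname written into the target root. Below is a rough Python equivalent of the hostname path, under the assumption that the droplet metadata JSON carries a top-level "hostname" key; the endpoint, the hostname value, and the /sysroot/etc/hostname destination all come from the log, everything else is an assumption.

    # Approximation of the flatcar-metadata-hostname step logged above: fetch
    # the droplet metadata document and write the hostname into the /sysroot
    # prefix used before switch-root. Error handling is omitted.
    import json
    import urllib.request

    with urllib.request.urlopen("http://169.254.169.254/metadata/v1.json",
                                timeout=5) as resp:
        metadata = json.load(resp)

    hostname = metadata["hostname"]  # e.g. ci-4081.3.5-a-2a2ab8bcea
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to /sysroot/etc/hostname")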
Aug 13 07:11:44.495831 ignition[936]: INFO : Ignition 2.19.0
Aug 13 07:11:44.495831 ignition[936]: INFO : Stage: files
Aug 13 07:11:44.497023 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:11:44.497023 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:11:44.498053 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 07:11:44.498771 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 07:11:44.498771 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 07:11:44.503096 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 07:11:44.503820 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 07:11:44.503820 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 07:11:44.503763 unknown[936]: wrote ssh authorized keys file for user: core
Aug 13 07:11:44.505724 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:11:44.505724 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 07:11:44.553603 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 07:11:44.656472 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 07:11:44.658232 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 07:11:44.658232 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 07:11:44.658232 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:11:44.660825 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 07:11:45.092316 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 07:11:45.204559 systemd-networkd[752]: eth0: Gained IPv6LL
Aug 13 07:11:45.363569 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 07:11:45.363569 ignition[936]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 07:11:45.364997 ignition[936]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:11:45.364997 ignition[936]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 07:11:45.364997 ignition[936]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 07:11:45.364997 ignition[936]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 07:11:45.364997 ignition[936]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 07:11:45.364997 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:11:45.368742 ignition[936]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 07:11:45.368742 ignition[936]: INFO : files: files passed
Aug 13 07:11:45.368742 ignition[936]: INFO : Ignition finished successfully
Aug 13 07:11:45.366315 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 07:11:45.373490 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 07:11:45.376363 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 07:11:45.380999 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 07:11:45.381555 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 07:11:45.395331 systemd-networkd[752]: eth1: Gained IPv6LL
Aug 13 07:11:45.398230 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:11:45.398230 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:11:45.400917 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 07:11:45.403501 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:11:45.404878 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 07:11:45.410461 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 07:11:45.454668 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 07:11:45.455358 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 07:11:45.456611 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 07:11:45.457162 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 07:11:45.458334 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 07:11:45.469918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 07:11:45.485431 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:11:45.491537 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 07:11:45.506574 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:11:45.507112 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:11:45.507912 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 07:11:45.509124 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 07:11:45.509280 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 07:11:45.510331 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 07:11:45.510909 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 07:11:45.511530 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 07:11:45.512130 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 07:11:45.512849 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 07:11:45.513530 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 07:11:45.514316 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:11:45.515060 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 07:11:45.515905 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 07:11:45.516838 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 07:11:45.517603 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 07:11:45.517753 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:11:45.518676 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:11:45.519457 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:11:45.520648 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 07:11:45.520765 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:11:45.521684 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 07:11:45.521833 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:11:45.523017 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 07:11:45.523190 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 07:11:45.523974 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 07:11:45.524079 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 07:11:45.524727 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 07:11:45.524863 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 07:11:45.534590 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 07:11:45.536591 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 07:11:45.536829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:11:45.541887 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 07:11:45.543046 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 07:11:45.543681 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:11:45.545145 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 07:11:45.545295 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:11:45.558278 ignition[988]: INFO : Ignition 2.19.0
Aug 13 07:11:45.558278 ignition[988]: INFO : Stage: umount
Aug 13 07:11:45.558278 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 07:11:45.558278 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 07:11:45.558459 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 07:11:45.558609 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 07:11:45.564109 ignition[988]: INFO : umount: umount passed
Aug 13 07:11:45.564109 ignition[988]: INFO : Ignition finished successfully
Aug 13 07:11:45.565324 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 07:11:45.565441 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 07:11:45.569856 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 07:11:45.569919 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 07:11:45.571025 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 07:11:45.571103 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 07:11:45.573683 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 07:11:45.573739 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 07:11:45.574252 systemd[1]: Stopped target network.target - Network.
Aug 13 07:11:45.574807 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 07:11:45.574859 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:11:45.575719 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 07:11:45.576442 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 07:11:45.580272 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:11:45.580721 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 07:11:45.581601 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 07:11:45.583449 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 07:11:45.583501 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:11:45.584039 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 07:11:45.584080 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:11:45.584679 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 07:11:45.584738 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 07:11:45.585420 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 07:11:45.585466 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 07:11:45.587755 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 07:11:45.604725 systemd-networkd[752]: eth1: DHCPv6 lease lost
Aug 13 07:11:45.613498 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 07:11:45.616798 systemd-networkd[752]: eth0: DHCPv6 lease lost
Aug 13 07:11:45.617000 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 07:11:45.618585 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 07:11:45.618720 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 07:11:45.622151 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 07:11:45.622696 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:11:45.638562 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 07:11:45.639461 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 07:11:45.639575 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:11:45.641348 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:11:45.645294 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 07:11:45.645468 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 07:11:45.647924 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 07:11:45.648070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 07:11:45.654886 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 07:11:45.655052 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 07:11:45.656559 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 07:11:45.656630 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:11:45.657828 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 07:11:45.657888 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:11:45.658971 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 07:11:45.659035 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:11:45.665282 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 07:11:45.665507 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:11:45.667778 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 07:11:45.667923 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 07:11:45.670305 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 07:11:45.670390 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:11:45.671612 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 07:11:45.671677 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:11:45.672424 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 07:11:45.672598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:11:45.673829 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 07:11:45.673894 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:11:45.674656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:11:45.674710 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:11:45.685128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 07:11:45.686015 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 07:11:45.686111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:11:45.686619 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:11:45.686695 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:11:45.692840 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 07:11:45.692964 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 07:11:45.693928 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 07:11:45.698456 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 07:11:45.711597 systemd[1]: Switching root.
Aug 13 07:11:45.750353 systemd-journald[183]: Journal stopped
Aug 13 07:11:46.831776 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Aug 13 07:11:46.831886 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 07:11:46.831909 kernel: SELinux: policy capability open_perms=1
Aug 13 07:11:46.831934 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 07:11:46.831960 kernel: SELinux: policy capability always_check_network=0
Aug 13 07:11:46.831982 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 07:11:46.832007 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 07:11:46.832027 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 07:11:46.832063 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 07:11:46.832083 kernel: audit: type=1403 audit(1755069105.900:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 07:11:46.832103 systemd[1]: Successfully loaded SELinux policy in 41.943ms.
Aug 13 07:11:46.832134 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.101ms.
Aug 13 07:11:46.832158 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:11:46.832243 systemd[1]: Detected virtualization kvm.
Aug 13 07:11:46.832266 systemd[1]: Detected architecture x86-64.
Aug 13 07:11:46.832287 systemd[1]: Detected first boot.
Aug 13 07:11:46.832309 systemd[1]: Hostname set to <ci-4081.3.5-a-2a2ab8bcea>.
Aug 13 07:11:46.832336 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:11:46.832358 zram_generator::config[1031]: No configuration found.
Aug 13 07:11:46.832381 systemd[1]: Populated /etc with preset unit settings.
Aug 13 07:11:46.832402 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 07:11:46.832436 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 07:11:46.832462 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 07:11:46.832486 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 07:11:46.832508 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 07:11:46.832533 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 07:11:46.832556 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 07:11:46.832580 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 07:11:46.832602 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 07:11:46.832623 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 07:11:46.832646 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 07:11:46.832669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:11:46.832690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:11:46.832712 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 07:11:46.832738 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 07:11:46.832761 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 07:11:46.832783 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:11:46.832802 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 07:11:46.832822 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:11:46.832845 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 07:11:46.832869 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 07:11:46.832897 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 07:11:46.832918 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 07:11:46.832938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:11:46.832959 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:11:46.832982 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:11:46.833004 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:11:46.833025 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 07:11:46.833047 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 07:11:46.833074 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:11:46.833096 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:11:46.833118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:11:46.833141 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 07:11:46.833164 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 07:11:46.833200 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 07:11:46.833221 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 07:11:46.833244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:46.833266 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 07:11:46.833292 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 07:11:46.833313 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 07:11:46.833336 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 07:11:46.833358 systemd[1]: Reached target machines.target - Containers.
Aug 13 07:11:46.833380 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 07:11:46.833402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:11:46.833423 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:11:46.833445 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 07:11:46.833470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:11:46.833492 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:11:46.833514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:11:46.833531 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 07:11:46.833544 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:11:46.833558 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 07:11:46.833577 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 07:11:46.833593 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 07:11:46.833609 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 07:11:46.833621 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 07:11:46.833634 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:11:46.833646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:11:46.833659 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 07:11:46.833671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 07:11:46.833684 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:11:46.833696 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 07:11:46.833708 systemd[1]: Stopped verity-setup.service.
Aug 13 07:11:46.833721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:46.833738 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 07:11:46.833750 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 07:11:46.833763 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 07:11:46.833776 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 07:11:46.833791 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 07:11:46.833810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 07:11:46.833829 kernel: loop: module loaded
Aug 13 07:11:46.833848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:11:46.833867 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 07:11:46.833887 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 07:11:46.833906 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:11:46.833931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:11:46.833949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:11:46.833970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:11:46.833989 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:11:46.834002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:11:46.834014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:11:46.834080 systemd-journald[1100]: Collecting audit messages is disabled.
Aug 13 07:11:46.834124 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 07:11:46.834137 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 07:11:46.834153 systemd-journald[1100]: Journal started
Aug 13 07:11:46.834209 systemd-journald[1100]: Runtime Journal (/run/log/journal/841643b469a34b4a82a92c2c8b63e790) is 4.9M, max 39.3M, 34.4M free.
Aug 13 07:11:46.542429 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:11:46.565156 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 07:11:46.565745 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 07:11:46.837283 kernel: fuse: init (API version 7.39)
Aug 13 07:11:46.849223 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 07:11:46.854254 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:11:46.865256 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:11:46.870201 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:11:46.872073 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 07:11:46.878858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 07:11:46.879813 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 07:11:46.880496 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 07:11:46.935360 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 07:11:46.938578 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 07:11:46.938639 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 07:11:46.942456 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 07:11:46.944237 kernel: ACPI: bus type drm_connector registered
Aug 13 07:11:46.949111 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 07:11:46.960569 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 07:11:46.961132 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:11:46.964121 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 07:11:46.971422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 07:11:46.972277 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:11:46.979347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 07:11:46.981600 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 07:11:46.985898 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 07:11:46.986752 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:11:46.986940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:11:46.988522 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:11:46.989205 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 07:11:46.990666 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 07:11:47.008430 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 07:11:47.024367 systemd-journald[1100]: Time spent on flushing to /var/log/journal/841643b469a34b4a82a92c2c8b63e790 is 59.139ms for 989 entries.
Aug 13 07:11:47.024367 systemd-journald[1100]: System Journal (/var/log/journal/841643b469a34b4a82a92c2c8b63e790) is 8.0M, max 195.6M, 187.6M free.
Aug 13 07:11:47.129513 systemd-journald[1100]: Received client request to flush runtime journal.
Aug 13 07:11:47.129625 kernel: loop0: detected capacity change from 0 to 140768
Aug 13 07:11:47.129664 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 07:11:47.050250 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 07:11:47.051549 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 07:11:47.065264 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 07:11:47.137675 kernel: loop1: detected capacity change from 0 to 8
Aug 13 07:11:47.138631 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 07:11:47.151769 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 07:11:47.154684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:11:47.155526 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 07:11:47.162519 kernel: loop2: detected capacity change from 0 to 142488
Aug 13 07:11:47.169443 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 07:11:47.205415 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 07:11:47.217427 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:11:47.237795 kernel: loop3: detected capacity change from 0 to 221472
Aug 13 07:11:47.236038 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 07:11:47.302896 kernel: loop4: detected capacity change from 0 to 140768
Aug 13 07:11:47.304385 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Aug 13 07:11:47.304418 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Aug 13 07:11:47.331184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
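The journald flush statistics above allow a quick sanity check on per-entry cost when the runtime journal is moved to /var/log/journal:

    # Derived directly from the logged figures: 59.139 ms for 989 entries.
    entries, ms = 989, 59.139
    print(f"{ms / entries * 1000:.0f} us per entry, "
          f"~{entries / (ms / 1000):.0f} entries/s")
    # -> about 60 us per entry, roughly 16700 entries/s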
Aug 13 07:11:47.337241 kernel: loop5: detected capacity change from 0 to 8
Aug 13 07:11:47.342860 kernel: loop6: detected capacity change from 0 to 142488
Aug 13 07:11:47.363225 kernel: loop7: detected capacity change from 0 to 221472
Aug 13 07:11:47.381101 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Aug 13 07:11:47.381806 (sd-merge)[1175]: Merged extensions into '/usr'.
Aug 13 07:11:47.390920 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 07:11:47.390944 systemd[1]: Reloading...
Aug 13 07:11:47.598953 zram_generator::config[1202]: No configuration found.
Aug 13 07:11:47.683937 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 07:11:47.838162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:11:47.929870 systemd[1]: Reloading finished in 537 ms.
Aug 13 07:11:47.961795 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 07:11:47.968773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 07:11:47.979396 systemd[1]: Starting ensure-sysext.service...
Aug 13 07:11:47.984029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:11:48.005362 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Aug 13 07:11:48.005392 systemd[1]: Reloading...
Aug 13 07:11:48.062967 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 07:11:48.065823 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 07:11:48.070422 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 07:11:48.070915 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Aug 13 07:11:48.071021 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Aug 13 07:11:48.081116 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:11:48.082481 systemd-tmpfiles[1246]: Skipping /boot
Aug 13 07:11:48.120853 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 07:11:48.123429 systemd-tmpfiles[1246]: Skipping /boot
Aug 13 07:11:48.193466 zram_generator::config[1275]: No configuration found.
Aug 13 07:11:48.351732 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 07:11:48.408500 systemd[1]: Reloading finished in 402 ms.
Aug 13 07:11:48.425879 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 07:11:48.431808 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:11:48.443599 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 07:11:48.447442 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 07:11:48.449671 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 07:11:48.465928 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:11:48.470438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:11:48.473426 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 07:11:48.487608 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 07:11:48.492084 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:48.492346 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:11:48.504589 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:11:48.507502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:11:48.511651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:11:48.512289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:11:48.512444 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:48.516612 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:48.516805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:11:48.516978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:11:48.517062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:48.522648 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:48.522875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:11:48.530542 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 07:11:48.531501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:11:48.531665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:48.535259 systemd[1]: Finished ensure-sysext.service.
Aug 13 07:11:48.547319 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 07:11:48.561011 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 07:11:48.583288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 07:11:48.584649 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:11:48.594740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:11:48.595020 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:11:48.604816 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 07:11:48.617521 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 07:11:48.630882 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:11:48.631100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:11:48.631819 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:11:48.633575 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 07:11:48.634153 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 07:11:48.636245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:11:48.636441 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:11:48.638114 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:11:48.639129 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Aug 13 07:11:48.651899 augenrules[1354]: No rules
Aug 13 07:11:48.653403 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 07:11:48.658799 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 07:11:48.671366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:11:48.679404 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:11:48.683458 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 07:11:48.826615 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 07:11:48.827410 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 07:11:48.829755 systemd-networkd[1362]: lo: Link UP
Aug 13 07:11:48.829767 systemd-networkd[1362]: lo: Gained carrier
Aug 13 07:11:48.831514 systemd-networkd[1362]: Enumeration completed
Aug 13 07:11:48.831682 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:11:48.835067 systemd-timesyncd[1337]: No network connectivity, watching for changes.
Aug 13 07:11:48.837607 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 07:11:48.884475 systemd-resolved[1321]: Positive Trust Anchors:
Aug 13 07:11:48.884491 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:11:48.884528 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:11:48.894320 systemd-resolved[1321]: Using system hostname 'ci-4081.3.5-a-2a2ab8bcea'.
Aug 13 07:11:48.898286 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:11:48.899153 systemd[1]: Reached target network.target - Network.
Aug 13 07:11:48.900261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:11:48.933614 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Aug 13 07:11:48.934025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:48.934477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 07:11:48.942212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1375)
Aug 13 07:11:48.943515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 07:11:48.952534 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 07:11:48.962802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 07:11:48.971133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 07:11:48.971207 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 07:11:48.971224 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 07:11:48.992212 kernel: ISO 9660 Extensions: RRIP_1991A
Aug 13 07:11:48.993280 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Aug 13 07:11:48.997431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 07:11:48.998280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 07:11:49.012012 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 07:11:49.012306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 07:11:49.013433 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 07:11:49.013551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 07:11:49.015752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 07:11:49.015990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 07:11:49.019920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 07:11:49.024031 systemd-networkd[1362]: eth0: Configuring with /run/systemd/network/10-ae:e0:e6:b0:44:fd.network.
Aug 13 07:11:49.024663 systemd-networkd[1362]: eth0: Link UP
Aug 13 07:11:49.024671 systemd-networkd[1362]: eth0: Gained carrier
Aug 13 07:11:49.032575 systemd-networkd[1362]: eth1: Configuring with /run/systemd/network/10-76:b7:b7:0d:ae:84.network.
Aug 13 07:11:49.033481 systemd-networkd[1362]: eth1: Link UP
Aug 13 07:11:49.033491 systemd-networkd[1362]: eth1: Gained carrier
Aug 13 07:11:49.036589 systemd-timesyncd[1337]: Network configuration changed, trying to establish connection.
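Unlike the initrd units matched "based on potentially unpredictable interface name" earlier in the log, the real root configures eth0 and eth1 from per-MAC units under /run/systemd/network. The generated file contents are not shown in the log, so the following is only a plausible minimal shape, using the standard systemd.network keys [Match] MACAddress and [Network] DHCP:

    # Hypothetical reconstruction of a per-MAC unit like
    # /run/systemd/network/10-ae:e0:e6:b0:44:fd.network
    # (file name from the log, body assumed).
    from textwrap import dedent

    unit = dedent("""\
        [Match]
        MACAddress=ae:e0:e6:b0:44:fd

        [Network]
        DHCP=ipv4
    """)
    print(unit)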
Aug 13 07:11:49.059457 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 07:11:49.070210 kernel: ACPI: button: Power Button [PWRF]
Aug 13 07:11:49.073232 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug 13 07:11:49.104209 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 07:11:49.132847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:11:49.142249 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 07:11:49.179042 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 07:11:49.197332 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Aug 13 07:11:49.203490 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Aug 13 07:11:49.210232 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 07:11:49.215477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:11:49.221704 kernel: Console: switching to colour dummy device 80x25
Aug 13 07:11:49.223303 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Aug 13 07:11:49.223372 kernel: [drm] features: -context_init
Aug 13 07:11:49.232482 kernel: [drm] number of scanouts: 1
Aug 13 07:11:49.232589 kernel: [drm] number of cap sets: 0
Aug 13 07:11:49.263208 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Aug 13 07:11:49.268450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:11:49.270346 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:11:49.283070 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Aug 13 07:11:49.283142 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 07:11:49.283023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:11:49.292240 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Aug 13 07:11:49.302674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:11:49.302880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:11:49.321821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:11:49.358370 kernel: EDAC MC: Ver: 3.0.0
Aug 13 07:11:49.380637 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 07:11:49.387482 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 07:11:49.405278 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:11:49.414058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:11:49.441501 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 07:11:49.442697 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:11:49.442843 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 07:11:49.443029 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 07:11:49.443140 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 07:11:49.443823 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 07:11:49.444152 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 07:11:49.444465 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 07:11:49.444566 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 07:11:49.444593 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:11:49.444812 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:11:49.446489 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 07:11:49.448686 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 07:11:49.456842 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 07:11:49.458884 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 07:11:49.461241 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 07:11:49.462941 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:11:49.464052 systemd[1]: Reached target basic.target - Basic System.
Aug 13 07:11:49.464656 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:11:49.464686 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 07:11:49.470459 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 07:11:49.473943 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 07:11:49.482065 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 07:11:49.490461 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 07:11:49.494539 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 07:11:49.508486 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 07:11:49.512904 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 07:11:49.517825 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 07:11:49.527241 jq[1437]: false
Aug 13 07:11:49.525391 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 07:11:49.536468 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 07:11:49.545409 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 07:11:49.569431 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 07:11:49.572411 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 07:11:49.575341 extend-filesystems[1438]: Found loop4
Aug 13 07:11:49.575341 extend-filesystems[1438]: Found loop5
Aug 13 07:11:49.575341 extend-filesystems[1438]: Found loop6
Aug 13 07:11:49.575341 extend-filesystems[1438]: Found loop7
Aug 13 07:11:49.626341 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Aug 13 07:11:49.573054 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 07:11:49.626466 extend-filesystems[1438]: Found vda Aug 13 07:11:49.626466 extend-filesystems[1438]: Found vda1 Aug 13 07:11:49.626466 extend-filesystems[1438]: Found vda2 Aug 13 07:11:49.626466 extend-filesystems[1438]: Found vda3 Aug 13 07:11:49.626466 extend-filesystems[1438]: Found usr Aug 13 07:11:49.626466 extend-filesystems[1438]: Found vda4 Aug 13 07:11:49.626466 extend-filesystems[1438]: Found vda6 Aug 13 07:11:49.626466 extend-filesystems[1438]: Found vda7 Aug 13 07:11:49.626466 extend-filesystems[1438]: Found vda9 Aug 13 07:11:49.626466 extend-filesystems[1438]: Checking size of /dev/vda9 Aug 13 07:11:49.626466 extend-filesystems[1438]: Resized partition /dev/vda9 Aug 13 07:11:49.690645 coreos-metadata[1435]: Aug 13 07:11:49.602 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:11:49.690645 coreos-metadata[1435]: Aug 13 07:11:49.642 INFO Fetch successful Aug 13 07:11:49.581457 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:11:49.691692 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:11:49.676508 dbus-daemon[1436]: [system] SELinux support is enabled Aug 13 07:11:49.592446 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:11:49.597770 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:11:49.694411 jq[1451]: true Aug 13 07:11:49.607578 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:11:49.608105 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:11:49.635821 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:11:49.636454 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:11:49.644070 systemd-timesyncd[1337]: Contacted time server 66.118.229.14:123 (1.flatcar.pool.ntp.org). Aug 13 07:11:49.644132 systemd-timesyncd[1337]: Initial clock synchronization to Wed 2025-08-13 07:11:49.826804 UTC. Aug 13 07:11:49.676815 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:11:49.688274 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:11:49.688341 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:11:49.689167 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:11:49.689332 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 13 07:11:49.689357 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:11:49.717010 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:11:49.721625 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
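The coreos-metadata fetch above targets DigitalOcean's link-local metadata service and succeeds on the first attempt. The same request by hand (a sketch, assuming curl and jq are available on the box):

    curl -s http://169.254.169.254/metadata/v1.json | jq .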
Aug 13 07:11:49.756531 tar[1457]: linux-amd64/helm Aug 13 07:11:49.759656 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:11:49.772072 update_engine[1447]: I20250813 07:11:49.771643 1447 main.cc:92] Flatcar Update Engine starting Aug 13 07:11:49.784844 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:11:49.789481 update_engine[1447]: I20250813 07:11:49.789067 1447 update_check_scheduler.cc:74] Next update check in 8m44s Aug 13 07:11:49.798623 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1365) Aug 13 07:11:49.800193 jq[1471]: true Aug 13 07:11:49.808732 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:11:49.821456 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 07:11:49.826499 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:11:49.833937 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:11:49.854219 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:11:49.854219 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 07:11:49.854219 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 07:11:49.861065 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Aug 13 07:11:49.861065 extend-filesystems[1438]: Found vdb Aug 13 07:11:49.860064 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:11:49.861408 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:11:49.915581 systemd-logind[1446]: New seat seat0. Aug 13 07:11:49.919761 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 07:11:49.919797 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:11:49.920144 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:11:50.019466 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:11:50.023479 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:11:50.038584 systemd[1]: Starting sshkeys.service... Aug 13 07:11:50.051660 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:11:50.107943 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:11:50.121958 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 07:11:50.179329 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:11:50.185195 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:11:50.197829 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:11:50.219994 coreos-metadata[1514]: Aug 13 07:11:50.219 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 07:11:50.234285 coreos-metadata[1514]: Aug 13 07:11:50.232 INFO Fetch successful Aug 13 07:11:50.239971 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:11:50.243069 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:11:50.256838 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
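The extend-filesystems run above grows /dev/vda9 on-line from 553472 to 15121403 4k blocks while it is mounted at /. Reproduced manually it would look roughly like this (a sketch; growpart from cloud-utils is an assumed helper, while the service itself drives resize2fs directly):

    growpart /dev/vda 9    # grow partition 9 to the end of the disk (assumed step; may already be done)
    resize2fs /dev/vda9    # on-line resize of the mounted ext4 root filesystem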
Aug 13 07:11:50.271499 unknown[1514]: wrote ssh authorized keys file for user: core Aug 13 07:11:50.322004 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:11:50.323543 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:11:50.330435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:11:50.334022 systemd[1]: Finished sshkeys.service. Aug 13 07:11:50.352687 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:11:50.363966 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:11:50.367784 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:11:50.444437 containerd[1469]: time="2025-08-13T07:11:50.444274450Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:11:50.451352 systemd-networkd[1362]: eth0: Gained IPv6LL Aug 13 07:11:50.457466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:11:50.462030 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:11:50.472395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:11:50.486616 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:11:50.514042 containerd[1469]: time="2025-08-13T07:11:50.513947051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:11:50.516828 containerd[1469]: time="2025-08-13T07:11:50.516756976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:11:50.516828 containerd[1469]: time="2025-08-13T07:11:50.516820309Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:11:50.516956 containerd[1469]: time="2025-08-13T07:11:50.516847122Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:11:50.517080 containerd[1469]: time="2025-08-13T07:11:50.517058772Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:11:50.517106 containerd[1469]: time="2025-08-13T07:11:50.517090185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:11:50.517186 containerd[1469]: time="2025-08-13T07:11:50.517166039Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:11:50.517232 containerd[1469]: time="2025-08-13T07:11:50.517190710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:11:50.517756 containerd[1469]: time="2025-08-13T07:11:50.517720556Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:11:50.517806 containerd[1469]: time="2025-08-13T07:11:50.517756570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:11:50.517806 containerd[1469]: time="2025-08-13T07:11:50.517781652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:11:50.517806 containerd[1469]: time="2025-08-13T07:11:50.517797926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:11:50.517963 containerd[1469]: time="2025-08-13T07:11:50.517940678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:11:50.518299 containerd[1469]: time="2025-08-13T07:11:50.518274220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:11:50.518514 containerd[1469]: time="2025-08-13T07:11:50.518486736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:11:50.518566 containerd[1469]: time="2025-08-13T07:11:50.518516225Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:11:50.518660 containerd[1469]: time="2025-08-13T07:11:50.518640932Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:11:50.518723 containerd[1469]: time="2025-08-13T07:11:50.518707152Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:11:50.522334 containerd[1469]: time="2025-08-13T07:11:50.522238652Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:11:50.522334 containerd[1469]: time="2025-08-13T07:11:50.522325187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:11:50.522453 containerd[1469]: time="2025-08-13T07:11:50.522352215Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:11:50.522453 containerd[1469]: time="2025-08-13T07:11:50.522407144Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:11:50.522453 containerd[1469]: time="2025-08-13T07:11:50.522424365Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:11:50.523071 containerd[1469]: time="2025-08-13T07:11:50.522587015Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:11:50.523071 containerd[1469]: time="2025-08-13T07:11:50.522902749Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:11:50.523071 containerd[1469]: time="2025-08-13T07:11:50.523043806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Aug 13 07:11:50.523071 containerd[1469]: time="2025-08-13T07:11:50.523060296Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:11:50.523158 containerd[1469]: time="2025-08-13T07:11:50.523076983Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:11:50.523158 containerd[1469]: time="2025-08-13T07:11:50.523093011Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:11:50.523158 containerd[1469]: time="2025-08-13T07:11:50.523106108Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:11:50.523158 containerd[1469]: time="2025-08-13T07:11:50.523118588Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:11:50.523158 containerd[1469]: time="2025-08-13T07:11:50.523133682Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:11:50.523158 containerd[1469]: time="2025-08-13T07:11:50.523149265Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:11:50.523315 containerd[1469]: time="2025-08-13T07:11:50.523163621Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:11:50.523315 containerd[1469]: time="2025-08-13T07:11:50.523176162Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:11:50.523315 containerd[1469]: time="2025-08-13T07:11:50.523202996Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:11:50.523315 containerd[1469]: time="2025-08-13T07:11:50.523276372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523315 containerd[1469]: time="2025-08-13T07:11:50.523296715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523427 containerd[1469]: time="2025-08-13T07:11:50.523318210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523427 containerd[1469]: time="2025-08-13T07:11:50.523335131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523427 containerd[1469]: time="2025-08-13T07:11:50.523351139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523427 containerd[1469]: time="2025-08-13T07:11:50.523378381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523427 containerd[1469]: time="2025-08-13T07:11:50.523393257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523427 containerd[1469]: time="2025-08-13T07:11:50.523406396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523427 containerd[1469]: time="2025-08-13T07:11:50.523418935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523434131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523445906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523457602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523470047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523494328Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523518930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523531385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523542360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:11:50.523603 containerd[1469]: time="2025-08-13T07:11:50.523597873Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:11:50.523829 containerd[1469]: time="2025-08-13T07:11:50.523618156Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:11:50.523829 containerd[1469]: time="2025-08-13T07:11:50.523630070Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:11:50.523829 containerd[1469]: time="2025-08-13T07:11:50.523703567Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:11:50.523829 containerd[1469]: time="2025-08-13T07:11:50.523715644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:11:50.523829 containerd[1469]: time="2025-08-13T07:11:50.523728815Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:11:50.523829 containerd[1469]: time="2025-08-13T07:11:50.523738609Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:11:50.523829 containerd[1469]: time="2025-08-13T07:11:50.523748948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.524050840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.524376672Z" level=info msg="Connect containerd service" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.524435187Z" level=info msg="using legacy CRI server" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.524445103Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.524597398Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.525930985Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:11:50.526834 
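The CRI plugin configuration dumped above maps onto a containerd config.toml fragment along these lines; this is a reconstruction from the logged values, not the machine's actual file:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      enable_selinux = true
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The cni load error right after the dump is expected at this stage: /etc/cni/net.d is still empty until a pod network plugin is installed.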
containerd[1469]: time="2025-08-13T07:11:50.526066546Z" level=info msg="Start subscribing containerd event" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.526117982Z" level=info msg="Start recovering state" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.526209858Z" level=info msg="Start event monitor" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.526227192Z" level=info msg="Start snapshots syncer" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.526237075Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.526244396Z" level=info msg="Start streaming server" Aug 13 07:11:50.526834 containerd[1469]: time="2025-08-13T07:11:50.526832882Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:11:50.528585 containerd[1469]: time="2025-08-13T07:11:50.527248431Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:11:50.527439 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:11:50.536687 containerd[1469]: time="2025-08-13T07:11:50.532073055Z" level=info msg="containerd successfully booted in 0.088724s" Aug 13 07:11:50.560486 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:11:50.809313 tar[1457]: linux-amd64/LICENSE Aug 13 07:11:50.810040 tar[1457]: linux-amd64/README.md Aug 13 07:11:50.829403 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:11:50.836767 systemd-networkd[1362]: eth1: Gained IPv6LL Aug 13 07:11:51.702564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:11:51.705328 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:11:51.709874 systemd[1]: Startup finished in 1.172s (kernel) + 5.185s (initrd) + 5.850s (userspace) = 12.208s. Aug 13 07:11:51.713818 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:11:52.398671 kubelet[1559]: E0813 07:11:52.398582 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:11:52.400858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:11:52.401053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:11:52.401781 systemd[1]: kubelet.service: Consumed 1.369s CPU time. Aug 13 07:11:54.423418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:11:54.433684 systemd[1]: Started sshd@0-137.184.36.62:22-139.178.89.65:47054.service - OpenSSH per-connection server daemon (139.178.89.65:47054). Aug 13 07:11:54.503553 sshd[1571]: Accepted publickey for core from 139.178.89.65 port 47054 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:11:54.506249 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:54.516887 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:11:54.524588 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:11:54.528219 systemd-logind[1446]: New session 1 of user core. 
Aug 13 07:11:54.543920 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:11:54.550584 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:11:54.565430 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:11:54.683694 systemd[1575]: Queued start job for default target default.target. Aug 13 07:11:54.694987 systemd[1575]: Created slice app.slice - User Application Slice. Aug 13 07:11:54.695039 systemd[1575]: Reached target paths.target - Paths. Aug 13 07:11:54.695060 systemd[1575]: Reached target timers.target - Timers. Aug 13 07:11:54.697099 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:11:54.713496 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:11:54.713664 systemd[1575]: Reached target sockets.target - Sockets. Aug 13 07:11:54.713681 systemd[1575]: Reached target basic.target - Basic System. Aug 13 07:11:54.713739 systemd[1575]: Reached target default.target - Main User Target. Aug 13 07:11:54.713774 systemd[1575]: Startup finished in 139ms. Aug 13 07:11:54.714122 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:11:54.719675 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:11:54.795939 systemd[1]: Started sshd@1-137.184.36.62:22-139.178.89.65:47062.service - OpenSSH per-connection server daemon (139.178.89.65:47062). Aug 13 07:11:54.840855 sshd[1586]: Accepted publickey for core from 139.178.89.65 port 47062 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:11:54.840645 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:54.846624 systemd-logind[1446]: New session 2 of user core. Aug 13 07:11:54.855483 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:11:54.918936 sshd[1586]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:54.930758 systemd[1]: sshd@1-137.184.36.62:22-139.178.89.65:47062.service: Deactivated successfully. Aug 13 07:11:54.933074 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:11:54.935538 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:11:54.940577 systemd[1]: Started sshd@2-137.184.36.62:22-139.178.89.65:47074.service - OpenSSH per-connection server daemon (139.178.89.65:47074). Aug 13 07:11:54.942132 systemd-logind[1446]: Removed session 2. Aug 13 07:11:54.997060 sshd[1593]: Accepted publickey for core from 139.178.89.65 port 47074 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:11:54.999228 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:55.005242 systemd-logind[1446]: New session 3 of user core. Aug 13 07:11:55.016565 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:11:55.075156 sshd[1593]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:55.094333 systemd[1]: sshd@2-137.184.36.62:22-139.178.89.65:47074.service: Deactivated successfully. Aug 13 07:11:55.097062 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:11:55.100384 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:11:55.104639 systemd[1]: Started sshd@3-137.184.36.62:22-139.178.89.65:47076.service - OpenSSH per-connection server daemon (139.178.89.65:47076). Aug 13 07:11:55.106080 systemd-logind[1446]: Removed session 3. 
Aug 13 07:11:55.161033 sshd[1600]: Accepted publickey for core from 139.178.89.65 port 47076 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:11:55.162886 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:55.168300 systemd-logind[1446]: New session 4 of user core. Aug 13 07:11:55.178471 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:11:55.240416 sshd[1600]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:55.252324 systemd[1]: sshd@3-137.184.36.62:22-139.178.89.65:47076.service: Deactivated successfully. Aug 13 07:11:55.254848 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:11:55.258420 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:11:55.265652 systemd[1]: Started sshd@4-137.184.36.62:22-139.178.89.65:47090.service - OpenSSH per-connection server daemon (139.178.89.65:47090). Aug 13 07:11:55.268274 systemd-logind[1446]: Removed session 4. Aug 13 07:11:55.311890 sshd[1607]: Accepted publickey for core from 139.178.89.65 port 47090 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:11:55.314209 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:55.320836 systemd-logind[1446]: New session 5 of user core. Aug 13 07:11:55.332530 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:11:55.405586 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:11:55.406018 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:11:55.419664 sudo[1610]: pam_unix(sudo:session): session closed for user root Aug 13 07:11:55.425586 sshd[1607]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:55.439272 systemd[1]: sshd@4-137.184.36.62:22-139.178.89.65:47090.service: Deactivated successfully. Aug 13 07:11:55.442037 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:11:55.444409 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:11:55.455842 systemd[1]: Started sshd@5-137.184.36.62:22-139.178.89.65:47102.service - OpenSSH per-connection server daemon (139.178.89.65:47102). Aug 13 07:11:55.458575 systemd-logind[1446]: Removed session 5. Aug 13 07:11:55.500529 sshd[1615]: Accepted publickey for core from 139.178.89.65 port 47102 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:11:55.502472 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:55.508462 systemd-logind[1446]: New session 6 of user core. Aug 13 07:11:55.516486 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:11:55.577528 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:11:55.577921 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:11:55.582589 sudo[1619]: pam_unix(sudo:session): session closed for user root Aug 13 07:11:55.591012 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:11:55.591952 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:11:55.615611 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Aug 13 07:11:55.617781 auditctl[1622]: No rules Aug 13 07:11:55.619302 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:11:55.619535 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:11:55.623760 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:11:55.667253 augenrules[1640]: No rules Aug 13 07:11:55.669606 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:11:55.671701 sudo[1618]: pam_unix(sudo:session): session closed for user root Aug 13 07:11:55.675165 sshd[1615]: pam_unix(sshd:session): session closed for user core Aug 13 07:11:55.682818 systemd[1]: sshd@5-137.184.36.62:22-139.178.89.65:47102.service: Deactivated successfully. Aug 13 07:11:55.684756 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:11:55.687395 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:11:55.692134 systemd[1]: Started sshd@6-137.184.36.62:22-139.178.89.65:47108.service - OpenSSH per-connection server daemon (139.178.89.65:47108). Aug 13 07:11:55.694403 systemd-logind[1446]: Removed session 6. Aug 13 07:11:55.735140 sshd[1648]: Accepted publickey for core from 139.178.89.65 port 47108 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:11:55.737459 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:11:55.743394 systemd-logind[1446]: New session 7 of user core. Aug 13 07:11:55.750486 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:11:55.811713 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:11:55.812130 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:11:56.256010 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:11:56.259055 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:11:56.701414 dockerd[1668]: time="2025-08-13T07:11:56.700064022Z" level=info msg="Starting up" Aug 13 07:11:56.877348 dockerd[1668]: time="2025-08-13T07:11:56.877292940Z" level=info msg="Loading containers: start." Aug 13 07:11:57.020218 kernel: Initializing XFRM netlink socket Aug 13 07:11:57.118856 systemd-networkd[1362]: docker0: Link UP Aug 13 07:11:57.133320 dockerd[1668]: time="2025-08-13T07:11:57.133170342Z" level=info msg="Loading containers: done." Aug 13 07:11:57.150651 dockerd[1668]: time="2025-08-13T07:11:57.150488135Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:11:57.151395 dockerd[1668]: time="2025-08-13T07:11:57.150967030Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:11:57.151395 dockerd[1668]: time="2025-08-13T07:11:57.151103354Z" level=info msg="Daemon has completed initialization" Aug 13 07:11:57.152173 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4262527785-merged.mount: Deactivated successfully. Aug 13 07:11:57.189434 dockerd[1668]: time="2025-08-13T07:11:57.188844110Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:11:57.189673 systemd[1]: Started docker.service - Docker Application Container Engine. 
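dockerd above settles on the overlay2 storage driver and warns that native (redirect_dir) diff is disabled. Once the daemon is up, the active driver can be confirmed directly (a sketch):

    docker info --format '{{.Driver}}'    # expect: overlay2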
Aug 13 07:11:58.051708 containerd[1469]: time="2025-08-13T07:11:58.051644618Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 07:11:58.651296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869296635.mount: Deactivated successfully. Aug 13 07:11:59.763233 containerd[1469]: time="2025-08-13T07:11:59.763150959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:59.764553 containerd[1469]: time="2025-08-13T07:11:59.764483731Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 07:11:59.765218 containerd[1469]: time="2025-08-13T07:11:59.764822869Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:59.769992 containerd[1469]: time="2025-08-13T07:11:59.768269811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:11:59.769992 containerd[1469]: time="2025-08-13T07:11:59.769410800Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.717704461s" Aug 13 07:11:59.769992 containerd[1469]: time="2025-08-13T07:11:59.769463113Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 07:11:59.770403 containerd[1469]: time="2025-08-13T07:11:59.770370178Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 07:12:01.371097 containerd[1469]: time="2025-08-13T07:12:01.371019511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:01.372492 containerd[1469]: time="2025-08-13T07:12:01.372425553Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 07:12:01.372492 containerd[1469]: time="2025-08-13T07:12:01.372560175Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:01.377592 containerd[1469]: time="2025-08-13T07:12:01.377519379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:01.379559 containerd[1469]: time="2025-08-13T07:12:01.379490014Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 
1.609079358s" Aug 13 07:12:01.379559 containerd[1469]: time="2025-08-13T07:12:01.379561954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 07:12:01.380263 containerd[1469]: time="2025-08-13T07:12:01.380215110Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 07:12:02.606456 containerd[1469]: time="2025-08-13T07:12:02.604927760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:02.606456 containerd[1469]: time="2025-08-13T07:12:02.605783153Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 07:12:02.607606 containerd[1469]: time="2025-08-13T07:12:02.607560505Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:02.614291 containerd[1469]: time="2025-08-13T07:12:02.614234553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:02.616329 containerd[1469]: time="2025-08-13T07:12:02.616263936Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.235856897s" Aug 13 07:12:02.616533 containerd[1469]: time="2025-08-13T07:12:02.616510521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 07:12:02.617339 containerd[1469]: time="2025-08-13T07:12:02.617305785Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 07:12:02.651804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:12:02.662547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:12:02.809369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:02.824143 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:12:02.892488 kubelet[1882]: E0813 07:12:02.891958 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:12:02.898313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:12:02.898541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:12:03.848946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671663265.mount: Deactivated successfully. 
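Both kubelet starts so far have failed for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written during kubeadm init or kubeadm join; the smallest placeholder that would get past this particular error looks like the following (a sketch only; real clusters should let kubeadm generate it):

    # /var/lib/kubelet/config.yaml -- minimal placeholder, not the real cluster config
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd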
Aug 13 07:12:04.411374 containerd[1469]: time="2025-08-13T07:12:04.411298273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:04.412680 containerd[1469]: time="2025-08-13T07:12:04.412548688Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 07:12:04.413009 containerd[1469]: time="2025-08-13T07:12:04.412937608Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:04.415233 containerd[1469]: time="2025-08-13T07:12:04.414830687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:04.416100 containerd[1469]: time="2025-08-13T07:12:04.415916203Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.79840735s" Aug 13 07:12:04.416100 containerd[1469]: time="2025-08-13T07:12:04.415967363Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 07:12:04.417432 containerd[1469]: time="2025-08-13T07:12:04.417396356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:12:04.419410 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Aug 13 07:12:04.970488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142741145.mount: Deactivated successfully. 
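The systemd-resolved message above means EDNS0 probing against 67.207.67.3 failed, so resolved downgraded that server to plain UDP; name resolution keeps working. The per-link DNS state can be inspected with (a sketch):

    resolvectl status    # shows the configured DNS servers (67.207.67.2/3) and feature level per link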
Aug 13 07:12:05.967021 containerd[1469]: time="2025-08-13T07:12:05.966942485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:05.968669 containerd[1469]: time="2025-08-13T07:12:05.968585179Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 07:12:05.970380 containerd[1469]: time="2025-08-13T07:12:05.970279383Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:05.972853 containerd[1469]: time="2025-08-13T07:12:05.972757779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:05.975121 containerd[1469]: time="2025-08-13T07:12:05.974692111Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.557249234s" Aug 13 07:12:05.975121 containerd[1469]: time="2025-08-13T07:12:05.974754005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:12:05.975948 containerd[1469]: time="2025-08-13T07:12:05.975910069Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:12:06.473787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214461230.mount: Deactivated successfully. 
Aug 13 07:12:06.479467 containerd[1469]: time="2025-08-13T07:12:06.478699814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:06.480460 containerd[1469]: time="2025-08-13T07:12:06.480391863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 07:12:06.503621 containerd[1469]: time="2025-08-13T07:12:06.503567295Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:06.507233 containerd[1469]: time="2025-08-13T07:12:06.506709640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:06.507866 containerd[1469]: time="2025-08-13T07:12:06.507828678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 531.737828ms" Aug 13 07:12:06.508007 containerd[1469]: time="2025-08-13T07:12:06.507988840Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:12:06.508786 containerd[1469]: time="2025-08-13T07:12:06.508633650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:12:07.019132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757599813.mount: Deactivated successfully. Aug 13 07:12:07.475486 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
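The PullImage lines in this stretch are CRI pulls executed through containerd. The same pull can be issued by hand against the CRI socket (a sketch; assumes crictl is installed and configured for /run/containerd/containerd.sock):

    crictl pull registry.k8s.io/pause:3.10
    crictl images | grep pause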
Aug 13 07:12:08.932902 containerd[1469]: time="2025-08-13T07:12:08.932835328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:08.934966 containerd[1469]: time="2025-08-13T07:12:08.933947983Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 07:12:08.934966 containerd[1469]: time="2025-08-13T07:12:08.934485914Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:08.937723 containerd[1469]: time="2025-08-13T07:12:08.937670894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:08.939247 containerd[1469]: time="2025-08-13T07:12:08.939205545Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.430539559s" Aug 13 07:12:08.939247 containerd[1469]: time="2025-08-13T07:12:08.939247663Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 07:12:12.291923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:12.298729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:12:12.341782 systemd[1]: Reloading requested from client PID 2034 ('systemctl') (unit session-7.scope)... Aug 13 07:12:12.342038 systemd[1]: Reloading... Aug 13 07:12:12.503276 zram_generator::config[2070]: No configuration found. Aug 13 07:12:12.654318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:12:12.757864 systemd[1]: Reloading finished in 414 ms. Aug 13 07:12:12.828267 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:12:12.828396 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:12:12.828762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:12.835771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:12:13.010205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:13.022678 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:12:13.086282 kubelet[2127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:12:13.086282 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 13 07:12:13.086282 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:12:13.086905 kubelet[2127]: I0813 07:12:13.086372 2127 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:12:13.383634 kubelet[2127]: I0813 07:12:13.383457 2127 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:12:13.383634 kubelet[2127]: I0813 07:12:13.383518 2127 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:12:13.384305 kubelet[2127]: I0813 07:12:13.383919 2127 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:12:13.422247 kubelet[2127]: E0813 07:12:13.422203 2127 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://137.184.36.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:13.424643 kubelet[2127]: I0813 07:12:13.424448 2127 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:12:13.431653 kubelet[2127]: E0813 07:12:13.431484 2127 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:12:13.431653 kubelet[2127]: I0813 07:12:13.431519 2127 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:12:13.437024 kubelet[2127]: I0813 07:12:13.436979 2127 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:12:13.437303 kubelet[2127]: I0813 07:12:13.437168 2127 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:12:13.437404 kubelet[2127]: I0813 07:12:13.437360 2127 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:12:13.437679 kubelet[2127]: I0813 07:12:13.437412 2127 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-a-2a2ab8bcea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:12:13.437827 kubelet[2127]: I0813 07:12:13.437691 2127 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:12:13.437827 kubelet[2127]: I0813 07:12:13.437708 2127 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:12:13.437947 kubelet[2127]: I0813 07:12:13.437861 2127 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:12:13.443337 kubelet[2127]: I0813 07:12:13.442949 2127 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:12:13.443337 kubelet[2127]: I0813 07:12:13.443006 2127 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:12:13.443337 kubelet[2127]: I0813 07:12:13.443051 2127 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:12:13.443337 kubelet[2127]: I0813 07:12:13.443074 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:12:13.447543 kubelet[2127]: I0813 07:12:13.447368 2127 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:12:13.452234 kubelet[2127]: I0813 07:12:13.451113 2127 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:12:13.452234 kubelet[2127]: W0813 07:12:13.451208 2127 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
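The HardEvictionThresholds dumped in the container-manager config above are the kubelet defaults. Expressed in KubeletConfiguration form the same thresholds would read roughly as follows (a sketch, not the node's actual file):

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"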
Aug 13 07:12:13.452234 kubelet[2127]: I0813 07:12:13.451833 2127 server.go:1274] "Started kubelet" Aug 13 07:12:13.452234 kubelet[2127]: W0813 07:12:13.452012 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.36.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-2a2ab8bcea&limit=500&resourceVersion=0": dial tcp 137.184.36.62:6443: connect: connection refused Aug 13 07:12:13.452234 kubelet[2127]: E0813 07:12:13.452070 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.36.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-2a2ab8bcea&limit=500&resourceVersion=0\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:13.455875 kubelet[2127]: W0813 07:12:13.455808 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.36.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.36.62:6443: connect: connection refused Aug 13 07:12:13.455875 kubelet[2127]: E0813 07:12:13.455885 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.36.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:13.456236 kubelet[2127]: I0813 07:12:13.455962 2127 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:12:13.460062 kubelet[2127]: I0813 07:12:13.459115 2127 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:12:13.460062 kubelet[2127]: I0813 07:12:13.459729 2127 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:12:13.460628 kubelet[2127]: I0813 07:12:13.460592 2127 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:12:13.466213 kubelet[2127]: I0813 07:12:13.465851 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:12:13.477465 kubelet[2127]: E0813 07:12:13.476414 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2a2ab8bcea\" not found" Aug 13 07:12:13.477465 kubelet[2127]: I0813 07:12:13.476888 2127 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:12:13.482222 kubelet[2127]: I0813 07:12:13.481446 2127 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:12:13.486708 kubelet[2127]: I0813 07:12:13.486490 2127 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:12:13.488140 kubelet[2127]: E0813 07:12:13.480272 2127 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.36.62:6443/api/v1/namespaces/default/events\": dial tcp 137.184.36.62:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-a-2a2ab8bcea.185b4213ffa5b9de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-a-2a2ab8bcea,UID:ci-4081.3.5-a-2a2ab8bcea,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-a-2a2ab8bcea,},FirstTimestamp:2025-08-13 07:12:13.451803102 +0000 UTC m=+0.425175702,LastTimestamp:2025-08-13 07:12:13.451803102 +0000 UTC m=+0.425175702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-a-2a2ab8bcea,}" Aug 13 07:12:13.488140 kubelet[2127]: I0813 07:12:13.487916 2127 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:12:13.489659 kubelet[2127]: W0813 07:12:13.489595 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.36.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.36.62:6443: connect: connection refused Aug 13 07:12:13.489659 kubelet[2127]: E0813 07:12:13.489663 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.36.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:13.489813 kubelet[2127]: E0813 07:12:13.489729 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.36.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-2a2ab8bcea?timeout=10s\": dial tcp 137.184.36.62:6443: connect: connection refused" interval="200ms" Aug 13 07:12:13.492212 kubelet[2127]: I0813 07:12:13.491754 2127 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:12:13.492212 kubelet[2127]: I0813 07:12:13.491855 2127 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:12:13.494502 kubelet[2127]: E0813 07:12:13.494444 2127 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:12:13.495110 kubelet[2127]: I0813 07:12:13.494786 2127 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:12:13.499640 kubelet[2127]: I0813 07:12:13.499566 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:12:13.502217 kubelet[2127]: I0813 07:12:13.501871 2127 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:12:13.502217 kubelet[2127]: I0813 07:12:13.501913 2127 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:12:13.502217 kubelet[2127]: I0813 07:12:13.501945 2127 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:12:13.502217 kubelet[2127]: E0813 07:12:13.502016 2127 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:12:13.515434 kubelet[2127]: W0813 07:12:13.515371 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.36.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.36.62:6443: connect: connection refused Aug 13 07:12:13.515845 kubelet[2127]: E0813 07:12:13.515611 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.36.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:13.524789 kubelet[2127]: I0813 07:12:13.524749 2127 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:12:13.524789 kubelet[2127]: I0813 07:12:13.524778 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:12:13.525050 kubelet[2127]: I0813 07:12:13.524815 2127 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:12:13.527169 kubelet[2127]: I0813 07:12:13.527137 2127 policy_none.go:49] "None policy: Start" Aug 13 07:12:13.527864 kubelet[2127]: I0813 07:12:13.527846 2127 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:12:13.528115 kubelet[2127]: I0813 07:12:13.527988 2127 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:12:13.536733 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:12:13.550938 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:12:13.555640 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 07:12:13.564817 kubelet[2127]: I0813 07:12:13.564753 2127 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:12:13.565374 kubelet[2127]: I0813 07:12:13.565349 2127 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:12:13.565458 kubelet[2127]: I0813 07:12:13.565377 2127 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:12:13.566194 kubelet[2127]: I0813 07:12:13.565783 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:12:13.571427 kubelet[2127]: E0813 07:12:13.571294 2127 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-a-2a2ab8bcea\" not found" Aug 13 07:12:13.615349 systemd[1]: Created slice kubepods-burstable-podaf150af67bc28a64ab21aa0d4370bbe5.slice - libcontainer container kubepods-burstable-podaf150af67bc28a64ab21aa0d4370bbe5.slice. Aug 13 07:12:13.636696 systemd[1]: Created slice kubepods-burstable-pod1ae69af200daa3220bd2bacc1d97c467.slice - libcontainer container kubepods-burstable-pod1ae69af200daa3220bd2bacc1d97c467.slice. 
Aug 13 07:12:13.644823 systemd[1]: Created slice kubepods-burstable-pod859c8e70ee6283937bddb2a8bf5c9575.slice - libcontainer container kubepods-burstable-pod859c8e70ee6283937bddb2a8bf5c9575.slice. Aug 13 07:12:13.667612 kubelet[2127]: I0813 07:12:13.667579 2127 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.668227 kubelet[2127]: E0813 07:12:13.668187 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.36.62:6443/api/v1/nodes\": dial tcp 137.184.36.62:6443: connect: connection refused" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688050 kubelet[2127]: I0813 07:12:13.687654 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688050 kubelet[2127]: I0813 07:12:13.687728 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/859c8e70ee6283937bddb2a8bf5c9575-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"859c8e70ee6283937bddb2a8bf5c9575\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688050 kubelet[2127]: I0813 07:12:13.687754 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af150af67bc28a64ab21aa0d4370bbe5-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"af150af67bc28a64ab21aa0d4370bbe5\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688050 kubelet[2127]: I0813 07:12:13.687774 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af150af67bc28a64ab21aa0d4370bbe5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"af150af67bc28a64ab21aa0d4370bbe5\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688050 kubelet[2127]: I0813 07:12:13.687802 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688369 kubelet[2127]: I0813 07:12:13.687829 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688369 kubelet[2127]: I0813 07:12:13.687854 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af150af67bc28a64ab21aa0d4370bbe5-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-2a2ab8bcea\" (UID: 
\"af150af67bc28a64ab21aa0d4370bbe5\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688369 kubelet[2127]: I0813 07:12:13.687911 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.688369 kubelet[2127]: I0813 07:12:13.687939 2127 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.691380 kubelet[2127]: E0813 07:12:13.691311 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.36.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-2a2ab8bcea?timeout=10s\": dial tcp 137.184.36.62:6443: connect: connection refused" interval="400ms" Aug 13 07:12:13.870512 kubelet[2127]: I0813 07:12:13.870466 2127 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.871088 kubelet[2127]: E0813 07:12:13.871053 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.36.62:6443/api/v1/nodes\": dial tcp 137.184.36.62:6443: connect: connection refused" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:13.938079 kubelet[2127]: E0813 07:12:13.937435 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:13.938721 containerd[1469]: time="2025-08-13T07:12:13.938610299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-2a2ab8bcea,Uid:af150af67bc28a64ab21aa0d4370bbe5,Namespace:kube-system,Attempt:0,}" Aug 13 07:12:13.940443 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Aug 13 07:12:13.943635 kubelet[2127]: E0813 07:12:13.943599 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:13.950750 kubelet[2127]: E0813 07:12:13.949602 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:13.950907 containerd[1469]: time="2025-08-13T07:12:13.950854978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-2a2ab8bcea,Uid:859c8e70ee6283937bddb2a8bf5c9575,Namespace:kube-system,Attempt:0,}" Aug 13 07:12:13.951082 containerd[1469]: time="2025-08-13T07:12:13.951054242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea,Uid:1ae69af200daa3220bd2bacc1d97c467,Namespace:kube-system,Attempt:0,}" Aug 13 07:12:14.092350 kubelet[2127]: E0813 07:12:14.092299 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.36.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-2a2ab8bcea?timeout=10s\": dial tcp 137.184.36.62:6443: connect: connection refused" interval="800ms" Aug 13 07:12:14.274430 kubelet[2127]: I0813 07:12:14.273699 2127 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:14.274430 kubelet[2127]: E0813 07:12:14.274252 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.36.62:6443/api/v1/nodes\": dial tcp 137.184.36.62:6443: connect: connection refused" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:14.379393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872882306.mount: Deactivated successfully. 
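Annotation: the lease controller's "Failed to ensure lease exists, will retry" interval doubles across this section: 200ms at 07:12:13.489, 400ms at 07:12:13.691, 800ms at 07:12:14.092, then 1.6s at 07:12:14.893. A minimal sketch of that doubling backoff; the cap value is an assumption for illustration, not taken from the log.

package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry interval up to a limit, matching the
// 200ms -> 400ms -> 800ms -> 1.6s progression visible in the lease
// controller's log lines. The limit here is illustrative.
func nextInterval(cur, limit time.Duration) time.Duration {
	next := cur * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	interval := 200 * time.Millisecond
	for i := 0; i < 5; i++ {
		fmt.Println(interval) // 200ms 400ms 800ms 1.6s 3.2s
		interval = nextInterval(interval, 7*time.Second)
	}
}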
Aug 13 07:12:14.383771 containerd[1469]: time="2025-08-13T07:12:14.383679787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:12:14.386140 containerd[1469]: time="2025-08-13T07:12:14.386074337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:12:14.387223 containerd[1469]: time="2025-08-13T07:12:14.386338079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:12:14.387223 containerd[1469]: time="2025-08-13T07:12:14.386527040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 07:12:14.387223 containerd[1469]: time="2025-08-13T07:12:14.386709407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:12:14.392326 containerd[1469]: time="2025-08-13T07:12:14.392128274Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:12:14.393641 containerd[1469]: time="2025-08-13T07:12:14.393282939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.069611ms" Aug 13 07:12:14.396242 containerd[1469]: time="2025-08-13T07:12:14.395680122Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.73415ms" Aug 13 07:12:14.398121 containerd[1469]: time="2025-08-13T07:12:14.397636661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:12:14.401212 containerd[1469]: time="2025-08-13T07:12:14.399332414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 460.621822ms" Aug 13 07:12:14.401212 containerd[1469]: time="2025-08-13T07:12:14.400990662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:12:14.499208 kubelet[2127]: W0813 07:12:14.498286 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.36.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.36.62:6443: connect: connection refused Aug 13 07:12:14.499208 
kubelet[2127]: E0813 07:12:14.498368 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://137.184.36.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:14.589616 containerd[1469]: time="2025-08-13T07:12:14.589053252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:14.589616 containerd[1469]: time="2025-08-13T07:12:14.589150754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:14.589616 containerd[1469]: time="2025-08-13T07:12:14.589204921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:14.589616 containerd[1469]: time="2025-08-13T07:12:14.589351883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:14.598976 containerd[1469]: time="2025-08-13T07:12:14.598499952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:14.598976 containerd[1469]: time="2025-08-13T07:12:14.598711249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:14.598976 containerd[1469]: time="2025-08-13T07:12:14.598802960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:14.602452 containerd[1469]: time="2025-08-13T07:12:14.600368919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:14.602452 containerd[1469]: time="2025-08-13T07:12:14.600448971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:14.602452 containerd[1469]: time="2025-08-13T07:12:14.600467747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:14.602452 containerd[1469]: time="2025-08-13T07:12:14.600595186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:14.606326 containerd[1469]: time="2025-08-13T07:12:14.602362914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:14.629851 kubelet[2127]: W0813 07:12:14.629195 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.36.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 137.184.36.62:6443: connect: connection refused Aug 13 07:12:14.630275 kubelet[2127]: E0813 07:12:14.630158 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://137.184.36.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:14.653483 systemd[1]: Started cri-containerd-58046e515478326ecf599fd1c991500ddf59dc554e80520f0defc013c6ade7ac.scope - libcontainer container 58046e515478326ecf599fd1c991500ddf59dc554e80520f0defc013c6ade7ac. Aug 13 07:12:14.670216 kubelet[2127]: W0813 07:12:14.669010 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.36.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-2a2ab8bcea&limit=500&resourceVersion=0": dial tcp 137.184.36.62:6443: connect: connection refused Aug 13 07:12:14.670216 kubelet[2127]: E0813 07:12:14.669412 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://137.184.36.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-2a2ab8bcea&limit=500&resourceVersion=0\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:14.673509 systemd[1]: Started cri-containerd-1b67a025de2129dc866b4a3ddbefdabc7e2b4ba64e986bbfb1f6d0d8534973f9.scope - libcontainer container 1b67a025de2129dc866b4a3ddbefdabc7e2b4ba64e986bbfb1f6d0d8534973f9. Aug 13 07:12:14.688509 systemd[1]: Started cri-containerd-4cb2a2f0a555e7cb4269e4aa48a7fdb44a5c90f1adb822e09cd1e7d24de6e8d1.scope - libcontainer container 4cb2a2f0a555e7cb4269e4aa48a7fdb44a5c90f1adb822e09cd1e7d24de6e8d1. 
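Annotation: each sandbox containerd starts shows up twice in this section, once as a 64-hex sandbox ID returned by RunPodSandbox and once as a transient systemd unit named cri-containerd-<id>.scope. A small sketch of that name mapping, with the ID format checked by a regular expression; the naming pattern is read off the log lines above, not taken from containerd source.

package main

import (
	"fmt"
	"regexp"
)

// Sandbox and container IDs in this log are 64 lowercase hex characters.
var idPattern = regexp.MustCompile(`^[0-9a-f]{64}$`)

// scopeUnit maps a container ID to the transient systemd scope name
// visible in the log, e.g. cri-containerd-<id>.scope.
func scopeUnit(id string) (string, error) {
	if !idPattern.MatchString(id) {
		return "", fmt.Errorf("not a 64-hex container id: %q", id)
	}
	return "cri-containerd-" + id + ".scope", nil
}

func main() {
	// First scope ID started above (later identified as the
	// kube-scheduler sandbox).
	id := "58046e515478326ecf599fd1c991500ddf59dc554e80520f0defc013c6ade7ac"
	unit, err := scopeUnit(id)
	if err != nil {
		panic(err)
	}
	fmt.Println(unit)
}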
Aug 13 07:12:14.766761 containerd[1469]: time="2025-08-13T07:12:14.765533582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-2a2ab8bcea,Uid:af150af67bc28a64ab21aa0d4370bbe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b67a025de2129dc866b4a3ddbefdabc7e2b4ba64e986bbfb1f6d0d8534973f9\"" Aug 13 07:12:14.780823 kubelet[2127]: E0813 07:12:14.780779 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:14.791010 containerd[1469]: time="2025-08-13T07:12:14.790713586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea,Uid:1ae69af200daa3220bd2bacc1d97c467,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cb2a2f0a555e7cb4269e4aa48a7fdb44a5c90f1adb822e09cd1e7d24de6e8d1\"" Aug 13 07:12:14.791500 kubelet[2127]: E0813 07:12:14.791459 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:14.796138 containerd[1469]: time="2025-08-13T07:12:14.795866629Z" level=info msg="CreateContainer within sandbox \"1b67a025de2129dc866b4a3ddbefdabc7e2b4ba64e986bbfb1f6d0d8534973f9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:12:14.798305 containerd[1469]: time="2025-08-13T07:12:14.797159528Z" level=info msg="CreateContainer within sandbox \"4cb2a2f0a555e7cb4269e4aa48a7fdb44a5c90f1adb822e09cd1e7d24de6e8d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:12:14.810022 containerd[1469]: time="2025-08-13T07:12:14.809892838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-2a2ab8bcea,Uid:859c8e70ee6283937bddb2a8bf5c9575,Namespace:kube-system,Attempt:0,} returns sandbox id \"58046e515478326ecf599fd1c991500ddf59dc554e80520f0defc013c6ade7ac\"" Aug 13 07:12:14.811613 kubelet[2127]: E0813 07:12:14.811571 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:14.815931 containerd[1469]: time="2025-08-13T07:12:14.815723960Z" level=info msg="CreateContainer within sandbox \"4cb2a2f0a555e7cb4269e4aa48a7fdb44a5c90f1adb822e09cd1e7d24de6e8d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"342f23fde26cbf1b7377a161417f9329dce63b81b52a5ecafc8c840ec2c6db7d\"" Aug 13 07:12:14.817580 containerd[1469]: time="2025-08-13T07:12:14.817527096Z" level=info msg="CreateContainer within sandbox \"58046e515478326ecf599fd1c991500ddf59dc554e80520f0defc013c6ade7ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:12:14.820568 containerd[1469]: time="2025-08-13T07:12:14.819378715Z" level=info msg="StartContainer for \"342f23fde26cbf1b7377a161417f9329dce63b81b52a5ecafc8c840ec2c6db7d\"" Aug 13 07:12:14.829616 containerd[1469]: time="2025-08-13T07:12:14.829553063Z" level=info msg="CreateContainer within sandbox \"1b67a025de2129dc866b4a3ddbefdabc7e2b4ba64e986bbfb1f6d0d8534973f9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c5ef6161799ccf29df1564f2b68c49fe357bdebe901b5b83dbf78532e8b9af8\"" Aug 13 07:12:14.830962 containerd[1469]: time="2025-08-13T07:12:14.830925896Z" level=info msg="StartContainer for 
\"4c5ef6161799ccf29df1564f2b68c49fe357bdebe901b5b83dbf78532e8b9af8\"" Aug 13 07:12:14.844588 containerd[1469]: time="2025-08-13T07:12:14.844437637Z" level=info msg="CreateContainer within sandbox \"58046e515478326ecf599fd1c991500ddf59dc554e80520f0defc013c6ade7ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"520c2ffb334dc580a0ee96275c5c327d3161cd46343c8708676d1be638ed8f6a\"" Aug 13 07:12:14.847588 containerd[1469]: time="2025-08-13T07:12:14.847532006Z" level=info msg="StartContainer for \"520c2ffb334dc580a0ee96275c5c327d3161cd46343c8708676d1be638ed8f6a\"" Aug 13 07:12:14.867504 systemd[1]: Started cri-containerd-342f23fde26cbf1b7377a161417f9329dce63b81b52a5ecafc8c840ec2c6db7d.scope - libcontainer container 342f23fde26cbf1b7377a161417f9329dce63b81b52a5ecafc8c840ec2c6db7d. Aug 13 07:12:14.894031 kubelet[2127]: E0813 07:12:14.893732 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.36.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-2a2ab8bcea?timeout=10s\": dial tcp 137.184.36.62:6443: connect: connection refused" interval="1.6s" Aug 13 07:12:14.911468 systemd[1]: Started cri-containerd-4c5ef6161799ccf29df1564f2b68c49fe357bdebe901b5b83dbf78532e8b9af8.scope - libcontainer container 4c5ef6161799ccf29df1564f2b68c49fe357bdebe901b5b83dbf78532e8b9af8. Aug 13 07:12:14.922513 systemd[1]: Started cri-containerd-520c2ffb334dc580a0ee96275c5c327d3161cd46343c8708676d1be638ed8f6a.scope - libcontainer container 520c2ffb334dc580a0ee96275c5c327d3161cd46343c8708676d1be638ed8f6a. Aug 13 07:12:14.971376 containerd[1469]: time="2025-08-13T07:12:14.968909497Z" level=info msg="StartContainer for \"342f23fde26cbf1b7377a161417f9329dce63b81b52a5ecafc8c840ec2c6db7d\" returns successfully" Aug 13 07:12:15.018160 containerd[1469]: time="2025-08-13T07:12:15.018094817Z" level=info msg="StartContainer for \"4c5ef6161799ccf29df1564f2b68c49fe357bdebe901b5b83dbf78532e8b9af8\" returns successfully" Aug 13 07:12:15.020067 kubelet[2127]: W0813 07:12:15.019992 2127 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.36.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.36.62:6443: connect: connection refused Aug 13 07:12:15.020416 kubelet[2127]: E0813 07:12:15.020350 2127 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://137.184.36.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.36.62:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:12:15.054618 containerd[1469]: time="2025-08-13T07:12:15.054242952Z" level=info msg="StartContainer for \"520c2ffb334dc580a0ee96275c5c327d3161cd46343c8708676d1be638ed8f6a\" returns successfully" Aug 13 07:12:15.076319 kubelet[2127]: I0813 07:12:15.075451 2127 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:15.076319 kubelet[2127]: E0813 07:12:15.075793 2127 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://137.184.36.62:6443/api/v1/nodes\": dial tcp 137.184.36.62:6443: connect: connection refused" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:15.531083 kubelet[2127]: E0813 07:12:15.530973 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:15.534286 kubelet[2127]: E0813 07:12:15.533887 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:15.538091 kubelet[2127]: E0813 07:12:15.537995 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:16.540283 kubelet[2127]: E0813 07:12:16.540102 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:16.677999 kubelet[2127]: I0813 07:12:16.677332 2127 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:17.354481 kubelet[2127]: E0813 07:12:17.354165 2127 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-a-2a2ab8bcea\" not found" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:17.447825 kubelet[2127]: I0813 07:12:17.447226 2127 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:17.447825 kubelet[2127]: E0813 07:12:17.447292 2127 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-a-2a2ab8bcea\": node \"ci-4081.3.5-a-2a2ab8bcea\" not found" Aug 13 07:12:17.470586 kubelet[2127]: E0813 07:12:17.470541 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2a2ab8bcea\" not found" Aug 13 07:12:17.519058 kubelet[2127]: E0813 07:12:17.518648 2127 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.5-a-2a2ab8bcea.185b4213ffa5b9de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-a-2a2ab8bcea,UID:ci-4081.3.5-a-2a2ab8bcea,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-a-2a2ab8bcea,},FirstTimestamp:2025-08-13 07:12:13.451803102 +0000 UTC m=+0.425175702,LastTimestamp:2025-08-13 07:12:13.451803102 +0000 UTC m=+0.425175702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-a-2a2ab8bcea,}" Aug 13 07:12:17.571653 kubelet[2127]: E0813 07:12:17.571587 2127 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2a2ab8bcea\" not found" Aug 13 07:12:17.841437 kubelet[2127]: E0813 07:12:17.841284 2127 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:17.842693 kubelet[2127]: E0813 07:12:17.842638 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:18.457549 kubelet[2127]: I0813 07:12:18.457494 2127 apiserver.go:52] "Watching apiserver" Aug 13 07:12:18.477468 kubelet[2127]: I0813 07:12:18.477400 2127 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:12:19.482707 systemd[1]: Reloading requested from client PID 2400 ('systemctl') (unit session-7.scope)... Aug 13 07:12:19.482741 systemd[1]: Reloading... Aug 13 07:12:19.583208 zram_generator::config[2439]: No configuration found. Aug 13 07:12:19.723951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:12:19.827640 systemd[1]: Reloading finished in 344 ms. Aug 13 07:12:19.829443 kubelet[2127]: W0813 07:12:19.829404 2127 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:12:19.829874 kubelet[2127]: E0813 07:12:19.829759 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:19.872251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:12:19.887756 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:12:19.888054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:19.899629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:12:20.042442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:12:20.049907 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:12:20.164245 kubelet[2490]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:12:20.164245 kubelet[2490]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:12:20.164245 kubelet[2490]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:12:20.164245 kubelet[2490]: I0813 07:12:20.163453 2490 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:12:20.174885 kubelet[2490]: I0813 07:12:20.174784 2490 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:12:20.174885 kubelet[2490]: I0813 07:12:20.174830 2490 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:12:20.175958 kubelet[2490]: I0813 07:12:20.175339 2490 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:12:20.179350 kubelet[2490]: I0813 07:12:20.179283 2490 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
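Annotation: unlike the first kubelet (PID 2127), which had to request a client certificate over a still-down apiserver (the CSR failure at 07:12:13.422), the restarted kubelet (PID 2490) finds an already-bootstrapped cert at /var/lib/kubelet/pki/kubelet-client-current.pem and skips bootstrapping. A stdlib sketch for inspecting that rotated cert; the path comes from the certificate_store.go line above, and since the file combines certificate and private key, the loop skips non-certificate PEM blocks.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path from the certificate_store.go log line.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// kubelet-client-current.pem holds both the certificate and the
	// private key; walk the PEM blocks and parse only CERTIFICATE ones.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}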
Aug 13 07:12:20.188205 kubelet[2490]: I0813 07:12:20.187760 2490 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:12:20.195745 kubelet[2490]: E0813 07:12:20.194422 2490 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:12:20.195745 kubelet[2490]: I0813 07:12:20.194462 2490 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:12:20.201371 kubelet[2490]: I0813 07:12:20.200811 2490 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:12:20.201810 kubelet[2490]: I0813 07:12:20.201792 2490 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:12:20.202829 kubelet[2490]: I0813 07:12:20.202781 2490 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:12:20.203268 kubelet[2490]: I0813 07:12:20.202980 2490 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-a-2a2ab8bcea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:12:20.203542 kubelet[2490]: I0813 07:12:20.203468 2490 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:12:20.203542 kubelet[2490]: I0813 07:12:20.203485 2490 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:12:20.204261 kubelet[2490]: I0813 07:12:20.203680 2490 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:12:20.204261 kubelet[2490]: I0813 07:12:20.203831 2490 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:12:20.204261 kubelet[2490]: I0813 07:12:20.203845 2490 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:12:20.204261 kubelet[2490]: I0813 
07:12:20.203879 2490 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:12:20.204261 kubelet[2490]: I0813 07:12:20.203890 2490 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:12:20.207974 kubelet[2490]: I0813 07:12:20.207931 2490 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:12:20.208920 kubelet[2490]: I0813 07:12:20.208896 2490 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:12:20.209769 kubelet[2490]: I0813 07:12:20.209695 2490 server.go:1274] "Started kubelet" Aug 13 07:12:20.212469 kubelet[2490]: I0813 07:12:20.212448 2490 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:12:20.226916 kubelet[2490]: I0813 07:12:20.225799 2490 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:12:20.236395 kubelet[2490]: I0813 07:12:20.236080 2490 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:12:20.244916 kubelet[2490]: I0813 07:12:20.244872 2490 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:12:20.247631 kubelet[2490]: I0813 07:12:20.247429 2490 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:12:20.256773 kubelet[2490]: I0813 07:12:20.256378 2490 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:12:20.259966 kubelet[2490]: I0813 07:12:20.259750 2490 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:12:20.269298 kubelet[2490]: I0813 07:12:20.269220 2490 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:12:20.270243 kubelet[2490]: I0813 07:12:20.269703 2490 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:12:20.270674 kubelet[2490]: E0813 07:12:20.270650 2490 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-2a2ab8bcea\" not found" Aug 13 07:12:20.281899 kubelet[2490]: I0813 07:12:20.281858 2490 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:12:20.284636 kubelet[2490]: I0813 07:12:20.284147 2490 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 07:12:20.284636 kubelet[2490]: I0813 07:12:20.284237 2490 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:12:20.284636 kubelet[2490]: I0813 07:12:20.284265 2490 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:12:20.284636 kubelet[2490]: E0813 07:12:20.284335 2490 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:12:20.285420 kubelet[2490]: I0813 07:12:20.285392 2490 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:12:20.285685 kubelet[2490]: I0813 07:12:20.285666 2490 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:12:20.289723 kubelet[2490]: E0813 07:12:20.289684 2490 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:12:20.290564 kubelet[2490]: I0813 07:12:20.290239 2490 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:12:20.385214 kubelet[2490]: E0813 07:12:20.384890 2490 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:12:20.399298 kubelet[2490]: I0813 07:12:20.398715 2490 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:12:20.399298 kubelet[2490]: I0813 07:12:20.398742 2490 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:12:20.399298 kubelet[2490]: I0813 07:12:20.398775 2490 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:12:20.399298 kubelet[2490]: I0813 07:12:20.399063 2490 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:12:20.399298 kubelet[2490]: I0813 07:12:20.399081 2490 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:12:20.399298 kubelet[2490]: I0813 07:12:20.399108 2490 policy_none.go:49] "None policy: Start" Aug 13 07:12:20.401232 kubelet[2490]: I0813 07:12:20.401128 2490 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:12:20.401232 kubelet[2490]: I0813 07:12:20.401194 2490 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:12:20.401494 kubelet[2490]: I0813 07:12:20.401470 2490 state_mem.go:75] "Updated machine memory state" Aug 13 07:12:20.419243 kubelet[2490]: I0813 07:12:20.416277 2490 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:12:20.419243 kubelet[2490]: I0813 07:12:20.416579 2490 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:12:20.419243 kubelet[2490]: I0813 07:12:20.416601 2490 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:12:20.419243 kubelet[2490]: I0813 07:12:20.416901 2490 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:12:20.530363 kubelet[2490]: I0813 07:12:20.529315 2490 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.544205 kubelet[2490]: I0813 07:12:20.544089 2490 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.544802 kubelet[2490]: I0813 07:12:20.544401 2490 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.595982 kubelet[2490]: W0813 07:12:20.595884 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:12:20.598764 kubelet[2490]: E0813 07:12:20.598626 2490 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-a-2a2ab8bcea\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.598895 kubelet[2490]: W0813 07:12:20.598464 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:12:20.599061 kubelet[2490]: W0813 07:12:20.598489 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:12:20.673298 kubelet[2490]: I0813 07:12:20.672715 2490 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.673298 kubelet[2490]: I0813 07:12:20.672760 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.673298 kubelet[2490]: I0813 07:12:20.672784 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.673298 kubelet[2490]: I0813 07:12:20.672806 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/859c8e70ee6283937bddb2a8bf5c9575-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"859c8e70ee6283937bddb2a8bf5c9575\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.673298 kubelet[2490]: I0813 07:12:20.672825 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af150af67bc28a64ab21aa0d4370bbe5-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"af150af67bc28a64ab21aa0d4370bbe5\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.673574 kubelet[2490]: I0813 07:12:20.672845 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af150af67bc28a64ab21aa0d4370bbe5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"af150af67bc28a64ab21aa0d4370bbe5\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.673574 kubelet[2490]: I0813 07:12:20.672865 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.673574 kubelet[2490]: I0813 07:12:20.672879 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af150af67bc28a64ab21aa0d4370bbe5-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"af150af67bc28a64ab21aa0d4370bbe5\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.673574 kubelet[2490]: I0813 07:12:20.672896 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1ae69af200daa3220bd2bacc1d97c467-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea\" (UID: \"1ae69af200daa3220bd2bacc1d97c467\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:20.900102 kubelet[2490]: E0813 07:12:20.899070 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:20.900102 kubelet[2490]: E0813 07:12:20.899311 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:20.901327 kubelet[2490]: E0813 07:12:20.901148 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:21.206079 kubelet[2490]: I0813 07:12:21.205707 2490 apiserver.go:52] "Watching apiserver" Aug 13 07:12:21.270751 kubelet[2490]: I0813 07:12:21.270672 2490 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:12:21.340109 kubelet[2490]: E0813 07:12:21.340074 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:21.340709 kubelet[2490]: E0813 07:12:21.340335 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:21.356129 kubelet[2490]: W0813 07:12:21.356063 2490 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:12:21.356551 kubelet[2490]: E0813 07:12:21.356157 2490 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-a-2a2ab8bcea\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:12:21.356551 kubelet[2490]: E0813 07:12:21.356397 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:21.397912 kubelet[2490]: I0813 07:12:21.397588 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-a-2a2ab8bcea" podStartSLOduration=2.39752406 podStartE2EDuration="2.39752406s" podCreationTimestamp="2025-08-13 07:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:21.392889552 +0000 UTC m=+1.333348404" watchObservedRunningTime="2025-08-13 07:12:21.39752406 +0000 UTC m=+1.337982908" Aug 13 07:12:21.416676 kubelet[2490]: I0813 07:12:21.416608 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-2a2ab8bcea" podStartSLOduration=1.416590187 podStartE2EDuration="1.416590187s" podCreationTimestamp="2025-08-13 07:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:21.416584172 +0000 UTC m=+1.357043026" 
watchObservedRunningTime="2025-08-13 07:12:21.416590187 +0000 UTC m=+1.357049037" Aug 13 07:12:21.416931 kubelet[2490]: I0813 07:12:21.416709 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-a-2a2ab8bcea" podStartSLOduration=1.416688922 podStartE2EDuration="1.416688922s" podCreationTimestamp="2025-08-13 07:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:21.405280636 +0000 UTC m=+1.345739488" watchObservedRunningTime="2025-08-13 07:12:21.416688922 +0000 UTC m=+1.357147775" Aug 13 07:12:22.341918 kubelet[2490]: E0813 07:12:22.341468 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:24.693959 kubelet[2490]: I0813 07:12:24.693920 2490 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:12:24.698602 containerd[1469]: time="2025-08-13T07:12:24.698510872Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:12:24.699329 kubelet[2490]: I0813 07:12:24.699047 2490 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:12:25.310510 kubelet[2490]: E0813 07:12:25.310036 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:25.348969 kubelet[2490]: E0813 07:12:25.348800 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:25.572802 systemd[1]: Created slice kubepods-besteffort-poda8a7c6ad_4fb2_4a14_8feb_e1c573478142.slice - libcontainer container kubepods-besteffort-poda8a7c6ad_4fb2_4a14_8feb_e1c573478142.slice. 
Aug 13 07:12:25.608716 kubelet[2490]: I0813 07:12:25.608522 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a8a7c6ad-4fb2-4a14-8feb-e1c573478142-kube-proxy\") pod \"kube-proxy-p9mnn\" (UID: \"a8a7c6ad-4fb2-4a14-8feb-e1c573478142\") " pod="kube-system/kube-proxy-p9mnn" Aug 13 07:12:25.608716 kubelet[2490]: I0813 07:12:25.608584 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8a7c6ad-4fb2-4a14-8feb-e1c573478142-lib-modules\") pod \"kube-proxy-p9mnn\" (UID: \"a8a7c6ad-4fb2-4a14-8feb-e1c573478142\") " pod="kube-system/kube-proxy-p9mnn" Aug 13 07:12:25.608716 kubelet[2490]: I0813 07:12:25.608607 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8a7c6ad-4fb2-4a14-8feb-e1c573478142-xtables-lock\") pod \"kube-proxy-p9mnn\" (UID: \"a8a7c6ad-4fb2-4a14-8feb-e1c573478142\") " pod="kube-system/kube-proxy-p9mnn" Aug 13 07:12:25.608716 kubelet[2490]: I0813 07:12:25.608625 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q545\" (UniqueName: \"kubernetes.io/projected/a8a7c6ad-4fb2-4a14-8feb-e1c573478142-kube-api-access-8q545\") pod \"kube-proxy-p9mnn\" (UID: \"a8a7c6ad-4fb2-4a14-8feb-e1c573478142\") " pod="kube-system/kube-proxy-p9mnn" Aug 13 07:12:25.853681 systemd[1]: Created slice kubepods-besteffort-pod1c7a21de_1271_403f_8a2b_a4eb91cb1d8b.slice - libcontainer container kubepods-besteffort-pod1c7a21de_1271_403f_8a2b_a4eb91cb1d8b.slice. Aug 13 07:12:25.885704 kubelet[2490]: E0813 07:12:25.884896 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:25.886128 containerd[1469]: time="2025-08-13T07:12:25.885845563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9mnn,Uid:a8a7c6ad-4fb2-4a14-8feb-e1c573478142,Namespace:kube-system,Attempt:0,}" Aug 13 07:12:25.910318 kubelet[2490]: I0813 07:12:25.910115 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwdjc\" (UniqueName: \"kubernetes.io/projected/1c7a21de-1271-403f-8a2b-a4eb91cb1d8b-kube-api-access-qwdjc\") pod \"tigera-operator-5bf8dfcb4-q9tk6\" (UID: \"1c7a21de-1271-403f-8a2b-a4eb91cb1d8b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-q9tk6" Aug 13 07:12:25.910318 kubelet[2490]: I0813 07:12:25.910198 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c7a21de-1271-403f-8a2b-a4eb91cb1d8b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-q9tk6\" (UID: \"1c7a21de-1271-403f-8a2b-a4eb91cb1d8b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-q9tk6" Aug 13 07:12:25.918610 containerd[1469]: time="2025-08-13T07:12:25.918224820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:25.918610 containerd[1469]: time="2025-08-13T07:12:25.918292155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:25.918610 containerd[1469]: time="2025-08-13T07:12:25.918314564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:25.918610 containerd[1469]: time="2025-08-13T07:12:25.918411310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:25.947547 systemd[1]: Started cri-containerd-f4ff3243bdce3d39bd06dead18c071a46d9d49d53829f46b37e03db28ac753cf.scope - libcontainer container f4ff3243bdce3d39bd06dead18c071a46d9d49d53829f46b37e03db28ac753cf. Aug 13 07:12:25.977732 containerd[1469]: time="2025-08-13T07:12:25.977310432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9mnn,Uid:a8a7c6ad-4fb2-4a14-8feb-e1c573478142,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4ff3243bdce3d39bd06dead18c071a46d9d49d53829f46b37e03db28ac753cf\"" Aug 13 07:12:25.979475 kubelet[2490]: E0813 07:12:25.978417 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:25.984621 containerd[1469]: time="2025-08-13T07:12:25.984559871Z" level=info msg="CreateContainer within sandbox \"f4ff3243bdce3d39bd06dead18c071a46d9d49d53829f46b37e03db28ac753cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:12:26.006106 containerd[1469]: time="2025-08-13T07:12:26.006044836Z" level=info msg="CreateContainer within sandbox \"f4ff3243bdce3d39bd06dead18c071a46d9d49d53829f46b37e03db28ac753cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b89ba1474e950b7eae4fa04967606558ad2b6bde145e4cfdd5f7748d3a151a8a\"" Aug 13 07:12:26.007044 containerd[1469]: time="2025-08-13T07:12:26.007000673Z" level=info msg="StartContainer for \"b89ba1474e950b7eae4fa04967606558ad2b6bde145e4cfdd5f7748d3a151a8a\"" Aug 13 07:12:26.044474 systemd[1]: Started cri-containerd-b89ba1474e950b7eae4fa04967606558ad2b6bde145e4cfdd5f7748d3a151a8a.scope - libcontainer container b89ba1474e950b7eae4fa04967606558ad2b6bde145e4cfdd5f7748d3a151a8a. Aug 13 07:12:26.079425 containerd[1469]: time="2025-08-13T07:12:26.079237463Z" level=info msg="StartContainer for \"b89ba1474e950b7eae4fa04967606558ad2b6bde145e4cfdd5f7748d3a151a8a\" returns successfully" Aug 13 07:12:26.158879 containerd[1469]: time="2025-08-13T07:12:26.158427233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-q9tk6,Uid:1c7a21de-1271-403f-8a2b-a4eb91cb1d8b,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:12:26.195086 containerd[1469]: time="2025-08-13T07:12:26.194945782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:26.195976 containerd[1469]: time="2025-08-13T07:12:26.195907336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:26.196376 containerd[1469]: time="2025-08-13T07:12:26.196239946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:26.196731 containerd[1469]: time="2025-08-13T07:12:26.196628270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:26.223285 systemd[1]: Started cri-containerd-6351f748834af12a5029a428fbc47eaf2743af3b2a09b7e28c3b9674bf7fd087.scope - libcontainer container 6351f748834af12a5029a428fbc47eaf2743af3b2a09b7e28c3b9674bf7fd087. Aug 13 07:12:26.295069 containerd[1469]: time="2025-08-13T07:12:26.294512640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-q9tk6,Uid:1c7a21de-1271-403f-8a2b-a4eb91cb1d8b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6351f748834af12a5029a428fbc47eaf2743af3b2a09b7e28c3b9674bf7fd087\"" Aug 13 07:12:26.300965 containerd[1469]: time="2025-08-13T07:12:26.300440739Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:12:26.366511 kubelet[2490]: E0813 07:12:26.366401 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:26.381910 kubelet[2490]: I0813 07:12:26.381836 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p9mnn" podStartSLOduration=1.381819522 podStartE2EDuration="1.381819522s" podCreationTimestamp="2025-08-13 07:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:12:26.381634036 +0000 UTC m=+6.322092889" watchObservedRunningTime="2025-08-13 07:12:26.381819522 +0000 UTC m=+6.322278374" Aug 13 07:12:26.538883 kubelet[2490]: E0813 07:12:26.538741 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:26.737241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786913814.mount: Deactivated successfully. Aug 13 07:12:27.369556 kubelet[2490]: E0813 07:12:27.369428 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:28.327026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858760689.mount: Deactivated successfully. 
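The recurring dns.go:153 errors throughout these entries come from the kubelet capping a pod's resolv.conf at three nameservers, the classic glibc MAXNS limit: entries past the first three are dropped and the applied line is logged, which is why the same DigitalOcean resolvers (including a duplicate 67.207.67.3) keep being reported. A minimal sketch of the trimming behaviour, assuming a simplified parser rather than the kubelet's actual dns.go (the helper name and cap constant are illustrative):

```go
// Sketch only: trim a resolv.conf-style nameserver list to three
// entries, the condition behind the "Nameserver limits exceeded"
// warnings in this log. The host resolv.conf evidently listed more
// than three nameservers, duplicates included.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the kubelet enforces the same cap

func applyNameserverLimit(resolvConf string) (applied []string, omitted bool) {
	scanner := bufio.NewScanner(strings.NewReader(resolvConf))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			applied = append(applied, fields[1])
		}
	}
	if len(applied) > maxNameservers {
		applied, omitted = applied[:maxNameservers], true
	}
	return applied, omitted
}

func main() {
	// Hypothetical input mirroring the resolvers seen in the log.
	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 8.8.8.8\n"
	if applied, omitted := applyNameserverLimit(conf); omitted {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n", strings.Join(applied, " "))
	}
}
```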
Aug 13 07:12:29.846685 kubelet[2490]: E0813 07:12:29.846628 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:30.375368 kubelet[2490]: E0813 07:12:30.375312 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:30.626257 containerd[1469]: time="2025-08-13T07:12:30.625357015Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.626737 containerd[1469]: time="2025-08-13T07:12:30.626479830Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:12:30.631382 containerd[1469]: time="2025-08-13T07:12:30.629212909Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.643598 containerd[1469]: time="2025-08-13T07:12:30.643522963Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:30.646075 containerd[1469]: time="2025-08-13T07:12:30.645941524Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 4.345189865s" Aug 13 07:12:30.646075 containerd[1469]: time="2025-08-13T07:12:30.646007876Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:12:30.651046 containerd[1469]: time="2025-08-13T07:12:30.650990118Z" level=info msg="CreateContainer within sandbox \"6351f748834af12a5029a428fbc47eaf2743af3b2a09b7e28c3b9674bf7fd087\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:12:30.675603 containerd[1469]: time="2025-08-13T07:12:30.675531204Z" level=info msg="CreateContainer within sandbox \"6351f748834af12a5029a428fbc47eaf2743af3b2a09b7e28c3b9674bf7fd087\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1f24d7d0d8a4112d7ccf63584f5ab589263cb7977190f870517805dd7f7d4fa2\"" Aug 13 07:12:30.678903 containerd[1469]: time="2025-08-13T07:12:30.676706926Z" level=info msg="StartContainer for \"1f24d7d0d8a4112d7ccf63584f5ab589263cb7977190f870517805dd7f7d4fa2\"" Aug 13 07:12:30.714486 systemd[1]: run-containerd-runc-k8s.io-1f24d7d0d8a4112d7ccf63584f5ab589263cb7977190f870517805dd7f7d4fa2-runc.SlEz4r.mount: Deactivated successfully. Aug 13 07:12:30.721484 systemd[1]: Started cri-containerd-1f24d7d0d8a4112d7ccf63584f5ab589263cb7977190f870517805dd7f7d4fa2.scope - libcontainer container 1f24d7d0d8a4112d7ccf63584f5ab589263cb7977190f870517805dd7f7d4fa2. 
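As light sanity arithmetic on the pull that completes below: containerd reports 25056543 bytes read for quay.io/tigera/operator:v1.38.3 over a pull lasting 4.345189865s, roughly 5.5 MiB/s effective throughput. The figures are copied verbatim from the containerd entries; the calculation itself is only illustrative:

```go
// Illustrative arithmetic only: relate the "bytes read" counter to the
// reported pull duration to get an effective throughput figure.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 25056543.0                          // "active requests=0, bytes read=25056543"
	elapsed, _ := time.ParseDuration("4.345189865s") // "Pulled image ... in 4.345189865s"
	mib := bytesRead / (1024 * 1024)
	fmt.Printf("%.1f MiB in %s => %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
	// 23.9 MiB in 4.345189865s => 5.5 MiB/s
}
```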
Aug 13 07:12:30.759216 containerd[1469]: time="2025-08-13T07:12:30.758614531Z" level=info msg="StartContainer for \"1f24d7d0d8a4112d7ccf63584f5ab589263cb7977190f870517805dd7f7d4fa2\" returns successfully" Aug 13 07:12:35.503332 update_engine[1447]: I20250813 07:12:35.502434 1447 update_attempter.cc:509] Updating boot flags... Aug 13 07:12:35.553508 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2864) Aug 13 07:12:35.655333 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2865) Aug 13 07:12:35.758361 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2865) Aug 13 07:12:37.936617 sudo[1651]: pam_unix(sudo:session): session closed for user root Aug 13 07:12:37.944362 sshd[1648]: pam_unix(sshd:session): session closed for user core Aug 13 07:12:37.951824 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:12:37.952083 systemd[1]: sshd@6-137.184.36.62:22-139.178.89.65:47108.service: Deactivated successfully. Aug 13 07:12:37.954385 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:12:37.954561 systemd[1]: session-7.scope: Consumed 5.936s CPU time, 144.0M memory peak, 0B memory swap peak. Aug 13 07:12:37.958314 systemd-logind[1446]: Removed session 7. Aug 13 07:12:42.330390 kubelet[2490]: I0813 07:12:42.330311 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-q9tk6" podStartSLOduration=12.981335599 podStartE2EDuration="17.330290561s" podCreationTimestamp="2025-08-13 07:12:25 +0000 UTC" firstStartedPulling="2025-08-13 07:12:26.298382479 +0000 UTC m=+6.238841311" lastFinishedPulling="2025-08-13 07:12:30.647337424 +0000 UTC m=+10.587796273" observedRunningTime="2025-08-13 07:12:31.391720838 +0000 UTC m=+11.332179672" watchObservedRunningTime="2025-08-13 07:12:42.330290561 +0000 UTC m=+22.270749413" Aug 13 07:12:42.354846 systemd[1]: Created slice kubepods-besteffort-pod60637269_7a7a_4bd8_b69b_a8c47b0e765e.slice - libcontainer container kubepods-besteffort-pod60637269_7a7a_4bd8_b69b_a8c47b0e765e.slice. 
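The tigera-operator startup entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (07:12:42.330290561 minus 07:12:25 gives 17.330290561s), and podStartSLOduration subtracts the image-pull window between firstStartedPulling and lastFinishedPulling. A worked check using the logged monotonic offsets, assuming that simplified reading of the kubelet's pod_startup_latency_tracker:

```go
// Worked example only: reproduce the logged podStartSLOduration from
// the other fields of the same log entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 17330290561 * time.Nanosecond // podStartE2EDuration="17.330290561s"
	// Monotonic offsets (m=+...) from firstStartedPulling and lastFinishedPulling.
	pulling := (10587796273 - 6238841311) * time.Nanosecond
	fmt.Println(pulling)       // 4.348954962s spent pulling quay.io/tigera/operator:v1.38.3
	fmt.Println(e2e - pulling) // 12.981335599s, exactly the logged podStartSLOduration
}
```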
Aug 13 07:12:42.425012 kubelet[2490]: I0813 07:12:42.424919 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60637269-7a7a-4bd8-b69b-a8c47b0e765e-tigera-ca-bundle\") pod \"calico-typha-865799d9d7-759wc\" (UID: \"60637269-7a7a-4bd8-b69b-a8c47b0e765e\") " pod="calico-system/calico-typha-865799d9d7-759wc" Aug 13 07:12:42.425012 kubelet[2490]: I0813 07:12:42.425008 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/60637269-7a7a-4bd8-b69b-a8c47b0e765e-typha-certs\") pod \"calico-typha-865799d9d7-759wc\" (UID: \"60637269-7a7a-4bd8-b69b-a8c47b0e765e\") " pod="calico-system/calico-typha-865799d9d7-759wc" Aug 13 07:12:42.425012 kubelet[2490]: I0813 07:12:42.425028 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmf6p\" (UniqueName: \"kubernetes.io/projected/60637269-7a7a-4bd8-b69b-a8c47b0e765e-kube-api-access-pmf6p\") pod \"calico-typha-865799d9d7-759wc\" (UID: \"60637269-7a7a-4bd8-b69b-a8c47b0e765e\") " pod="calico-system/calico-typha-865799d9d7-759wc" Aug 13 07:12:42.671919 kubelet[2490]: E0813 07:12:42.671869 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:42.673032 containerd[1469]: time="2025-08-13T07:12:42.672892566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-865799d9d7-759wc,Uid:60637269-7a7a-4bd8-b69b-a8c47b0e765e,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:42.722735 systemd[1]: Created slice kubepods-besteffort-pod89679830_04c2_4ee3_8923_7aae4799e683.slice - libcontainer container kubepods-besteffort-pod89679830_04c2_4ee3_8923_7aae4799e683.slice. 
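The driver-call.go and plugins.go errors that dominate the entries below are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: it finds Calico's nodeagent~uds directory, but the uds executable is not present yet (calico-node typically installs the flexvol driver via an init container), so each driver call produces no output and JSON decoding fails with "unexpected end of JSON input". A minimal stand-in for that probe, assuming a simplified exec-the-binary, decode-JSON-status protocol; the DriverStatus type and callDriver helper are illustrative, not the kubelet's actual types:

```go
// Sketch only: emulate a FlexVolume "init" driver call against a
// missing binary. Empty stdout makes json.Unmarshal return
// "unexpected end of JSON input", the error repeated below.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the minimal shape of a FlexVolume JSON response.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(executable string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(executable, args...).CombinedOutput()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// The kubelet logs these two failures as separate lines: the exec
		// failure ("executable file not found in $PATH") and the unmarshal
		// failure ("unexpected end of JSON input").
		return nil, fmt.Errorf("driver call failed: exec: %v, unmarshal: %v, output: %q", execErr, err, out)
	}
	return &st, nil
}

func main() {
	// Path taken from the log; the binary does not exist at this point.
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}
```

The repeated probe errors are therefore noisy but benign at this stage of bootstrap; they would typically stop once calico-node has installed the flexvol driver binary.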
Aug 13 07:12:42.727785 kubelet[2490]: I0813 07:12:42.726781 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-flexvol-driver-host\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.729852 kubelet[2490]: I0813 07:12:42.729693 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bwt\" (UniqueName: \"kubernetes.io/projected/89679830-04c2-4ee3-8923-7aae4799e683-kube-api-access-s6bwt\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.730008 kubelet[2490]: I0813 07:12:42.729883 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-var-run-calico\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.730008 kubelet[2490]: I0813 07:12:42.729938 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-xtables-lock\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.730008 kubelet[2490]: I0813 07:12:42.729968 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89679830-04c2-4ee3-8923-7aae4799e683-tigera-ca-bundle\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.730008 kubelet[2490]: I0813 07:12:42.729994 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-cni-bin-dir\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.730126 kubelet[2490]: I0813 07:12:42.730019 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-policysync\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.730126 kubelet[2490]: I0813 07:12:42.730042 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/89679830-04c2-4ee3-8923-7aae4799e683-node-certs\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.730126 kubelet[2490]: I0813 07:12:42.730074 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-cni-net-dir\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.730126 kubelet[2490]: I0813 07:12:42.730103 2490 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-lib-modules\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.732305 kubelet[2490]: I0813 07:12:42.730127 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-var-lib-calico\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.732305 kubelet[2490]: I0813 07:12:42.730155 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/89679830-04c2-4ee3-8923-7aae4799e683-cni-log-dir\") pod \"calico-node-xbdpl\" (UID: \"89679830-04c2-4ee3-8923-7aae4799e683\") " pod="calico-system/calico-node-xbdpl" Aug 13 07:12:42.747495 containerd[1469]: time="2025-08-13T07:12:42.747330272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:42.747495 containerd[1469]: time="2025-08-13T07:12:42.747412569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:42.747495 containerd[1469]: time="2025-08-13T07:12:42.747424890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:42.747900 containerd[1469]: time="2025-08-13T07:12:42.747540753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:42.801420 systemd[1]: Started cri-containerd-1cfa5b6cdc78ef1a6d74d286b150a93567d6c3d6751be9b810e8212122e6a6af.scope - libcontainer container 1cfa5b6cdc78ef1a6d74d286b150a93567d6c3d6751be9b810e8212122e6a6af. Aug 13 07:12:42.839168 kubelet[2490]: E0813 07:12:42.839121 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:42.839168 kubelet[2490]: W0813 07:12:42.839149 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:42.840249 kubelet[2490]: E0813 07:12:42.840210 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:42.857232 kubelet[2490]: E0813 07:12:42.857191 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:42.857232 kubelet[2490]: W0813 07:12:42.857216 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:42.857232 kubelet[2490]: E0813 07:12:42.857242 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:42.964030 kubelet[2490]: E0813 07:12:42.963144 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6lh" podUID="2c51012b-f93a-4143-8128-6925ff4126f6" Aug 13 07:12:43.028913 kubelet[2490]: E0813 07:12:43.028861 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.028913 kubelet[2490]: W0813 07:12:43.028891 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.028913 kubelet[2490]: E0813 07:12:43.028917 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.029226 kubelet[2490]: E0813 07:12:43.029206 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.029274 kubelet[2490]: W0813 07:12:43.029225 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.029274 kubelet[2490]: E0813 07:12:43.029244 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.029514 kubelet[2490]: E0813 07:12:43.029497 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.029514 kubelet[2490]: W0813 07:12:43.029513 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.029514 kubelet[2490]: E0813 07:12:43.029524 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.029725 kubelet[2490]: E0813 07:12:43.029710 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.029725 kubelet[2490]: W0813 07:12:43.029723 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.029797 kubelet[2490]: E0813 07:12:43.029732 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.029955 kubelet[2490]: E0813 07:12:43.029920 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.029955 kubelet[2490]: W0813 07:12:43.029948 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.030180 kubelet[2490]: E0813 07:12:43.029962 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.030243 kubelet[2490]: E0813 07:12:43.030215 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.030243 kubelet[2490]: W0813 07:12:43.030229 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.030414 kubelet[2490]: E0813 07:12:43.030242 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.030462 kubelet[2490]: E0813 07:12:43.030424 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.030462 kubelet[2490]: W0813 07:12:43.030434 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.030462 kubelet[2490]: E0813 07:12:43.030443 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.030686 kubelet[2490]: E0813 07:12:43.030670 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.030686 kubelet[2490]: W0813 07:12:43.030683 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.030761 kubelet[2490]: E0813 07:12:43.030692 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.033526 kubelet[2490]: E0813 07:12:43.033325 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.033526 kubelet[2490]: W0813 07:12:43.033354 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.033526 kubelet[2490]: E0813 07:12:43.033380 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.033890 kubelet[2490]: E0813 07:12:43.033765 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.033890 kubelet[2490]: W0813 07:12:43.033779 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.033890 kubelet[2490]: E0813 07:12:43.033798 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.034127 kubelet[2490]: E0813 07:12:43.034114 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.034306 kubelet[2490]: W0813 07:12:43.034186 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.034306 kubelet[2490]: E0813 07:12:43.034203 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.034573 kubelet[2490]: E0813 07:12:43.034432 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.034573 kubelet[2490]: W0813 07:12:43.034443 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.034573 kubelet[2490]: E0813 07:12:43.034454 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.034909 kubelet[2490]: E0813 07:12:43.034843 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.034909 kubelet[2490]: W0813 07:12:43.034854 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.034909 kubelet[2490]: E0813 07:12:43.034868 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.036366 kubelet[2490]: E0813 07:12:43.036276 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.036366 kubelet[2490]: W0813 07:12:43.036295 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.036366 kubelet[2490]: E0813 07:12:43.036310 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.038013 kubelet[2490]: E0813 07:12:43.037793 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.038013 kubelet[2490]: W0813 07:12:43.037861 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.038013 kubelet[2490]: E0813 07:12:43.037877 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.039945 kubelet[2490]: E0813 07:12:43.039249 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.039945 kubelet[2490]: W0813 07:12:43.039272 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.039945 kubelet[2490]: E0813 07:12:43.039295 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.039945 kubelet[2490]: E0813 07:12:43.039790 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.039945 kubelet[2490]: W0813 07:12:43.039810 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.039945 kubelet[2490]: E0813 07:12:43.039827 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.040358 containerd[1469]: time="2025-08-13T07:12:43.040162542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xbdpl,Uid:89679830-04c2-4ee3-8923-7aae4799e683,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:43.042249 kubelet[2490]: E0813 07:12:43.041829 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.042249 kubelet[2490]: W0813 07:12:43.041894 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.042249 kubelet[2490]: E0813 07:12:43.041916 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.042680 kubelet[2490]: E0813 07:12:43.042395 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.042680 kubelet[2490]: W0813 07:12:43.042409 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.042680 kubelet[2490]: E0813 07:12:43.042426 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.043933 kubelet[2490]: E0813 07:12:43.043905 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.043933 kubelet[2490]: W0813 07:12:43.043926 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.044278 kubelet[2490]: E0813 07:12:43.043945 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.047294 kubelet[2490]: E0813 07:12:43.047242 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.047294 kubelet[2490]: W0813 07:12:43.047270 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.047294 kubelet[2490]: E0813 07:12:43.047296 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.047810 kubelet[2490]: I0813 07:12:43.047335 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2bx\" (UniqueName: \"kubernetes.io/projected/2c51012b-f93a-4143-8128-6925ff4126f6-kube-api-access-gs2bx\") pod \"csi-node-driver-5l6lh\" (UID: \"2c51012b-f93a-4143-8128-6925ff4126f6\") " pod="calico-system/csi-node-driver-5l6lh" Aug 13 07:12:43.049875 kubelet[2490]: E0813 07:12:43.049835 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.049875 kubelet[2490]: W0813 07:12:43.049866 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.050292 kubelet[2490]: E0813 07:12:43.049898 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.050292 kubelet[2490]: I0813 07:12:43.049944 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c51012b-f93a-4143-8128-6925ff4126f6-registration-dir\") pod \"csi-node-driver-5l6lh\" (UID: \"2c51012b-f93a-4143-8128-6925ff4126f6\") " pod="calico-system/csi-node-driver-5l6lh" Aug 13 07:12:43.051654 kubelet[2490]: E0813 07:12:43.050624 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.051654 kubelet[2490]: W0813 07:12:43.050656 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.051654 kubelet[2490]: E0813 07:12:43.050810 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.051654 kubelet[2490]: I0813 07:12:43.050849 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c51012b-f93a-4143-8128-6925ff4126f6-kubelet-dir\") pod \"csi-node-driver-5l6lh\" (UID: \"2c51012b-f93a-4143-8128-6925ff4126f6\") " pod="calico-system/csi-node-driver-5l6lh" Aug 13 07:12:43.052652 kubelet[2490]: E0813 07:12:43.052303 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.052652 kubelet[2490]: W0813 07:12:43.052335 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.053533 kubelet[2490]: E0813 07:12:43.053031 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.055624 kubelet[2490]: E0813 07:12:43.055315 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.055624 kubelet[2490]: W0813 07:12:43.055341 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.056102 kubelet[2490]: E0813 07:12:43.055872 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.056207 kubelet[2490]: E0813 07:12:43.056161 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.056343 kubelet[2490]: W0813 07:12:43.056204 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.056343 kubelet[2490]: E0813 07:12:43.056245 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.056631 kubelet[2490]: I0813 07:12:43.056596 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2c51012b-f93a-4143-8128-6925ff4126f6-varrun\") pod \"csi-node-driver-5l6lh\" (UID: \"2c51012b-f93a-4143-8128-6925ff4126f6\") " pod="calico-system/csi-node-driver-5l6lh" Aug 13 07:12:43.056631 kubelet[2490]: E0813 07:12:43.056619 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.056631 kubelet[2490]: W0813 07:12:43.056631 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.056852 kubelet[2490]: E0813 07:12:43.056670 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.058918 kubelet[2490]: E0813 07:12:43.057573 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.058918 kubelet[2490]: W0813 07:12:43.057598 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.058918 kubelet[2490]: E0813 07:12:43.057619 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.059154 kubelet[2490]: E0813 07:12:43.058962 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.059154 kubelet[2490]: W0813 07:12:43.059001 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.059154 kubelet[2490]: E0813 07:12:43.059024 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.060424 kubelet[2490]: E0813 07:12:43.059859 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.060424 kubelet[2490]: W0813 07:12:43.059878 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.060424 kubelet[2490]: E0813 07:12:43.059894 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.064332 kubelet[2490]: E0813 07:12:43.062783 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.064332 kubelet[2490]: W0813 07:12:43.062824 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.064332 kubelet[2490]: E0813 07:12:43.062857 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.064696 kubelet[2490]: E0813 07:12:43.064425 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.064696 kubelet[2490]: W0813 07:12:43.064486 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.064696 kubelet[2490]: E0813 07:12:43.064512 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.067442 kubelet[2490]: E0813 07:12:43.067367 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.067629 kubelet[2490]: W0813 07:12:43.067535 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.067629 kubelet[2490]: E0813 07:12:43.067578 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.067754 kubelet[2490]: I0813 07:12:43.067646 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c51012b-f93a-4143-8128-6925ff4126f6-socket-dir\") pod \"csi-node-driver-5l6lh\" (UID: \"2c51012b-f93a-4143-8128-6925ff4126f6\") " pod="calico-system/csi-node-driver-5l6lh" Aug 13 07:12:43.069919 kubelet[2490]: E0813 07:12:43.068131 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.069919 kubelet[2490]: W0813 07:12:43.068162 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.069919 kubelet[2490]: E0813 07:12:43.068220 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.070259 kubelet[2490]: E0813 07:12:43.070033 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.070259 kubelet[2490]: W0813 07:12:43.070070 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.070259 kubelet[2490]: E0813 07:12:43.070112 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.129218 containerd[1469]: time="2025-08-13T07:12:43.128138812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:12:43.129218 containerd[1469]: time="2025-08-13T07:12:43.128259714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:12:43.129218 containerd[1469]: time="2025-08-13T07:12:43.128289770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:43.129218 containerd[1469]: time="2025-08-13T07:12:43.128438532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:12:43.136496 containerd[1469]: time="2025-08-13T07:12:43.134721994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-865799d9d7-759wc,Uid:60637269-7a7a-4bd8-b69b-a8c47b0e765e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1cfa5b6cdc78ef1a6d74d286b150a93567d6c3d6751be9b810e8212122e6a6af\"" Aug 13 07:12:43.143302 kubelet[2490]: E0813 07:12:43.143143 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:43.154568 containerd[1469]: time="2025-08-13T07:12:43.154317872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:12:43.170769 kubelet[2490]: E0813 07:12:43.170731 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.170769 kubelet[2490]: W0813 07:12:43.170763 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.172388 kubelet[2490]: E0813 07:12:43.170793 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.173364 kubelet[2490]: E0813 07:12:43.173323 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.173364 kubelet[2490]: W0813 07:12:43.173354 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.173716 kubelet[2490]: E0813 07:12:43.173394 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.174664 kubelet[2490]: E0813 07:12:43.174527 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.174664 kubelet[2490]: W0813 07:12:43.174550 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.174664 kubelet[2490]: E0813 07:12:43.174590 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.176025 kubelet[2490]: E0813 07:12:43.175816 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.176025 kubelet[2490]: W0813 07:12:43.175848 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.176025 kubelet[2490]: E0813 07:12:43.175870 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.177360 kubelet[2490]: E0813 07:12:43.177163 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.177360 kubelet[2490]: W0813 07:12:43.177291 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.179078 kubelet[2490]: E0813 07:12:43.178784 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.180307 kubelet[2490]: E0813 07:12:43.179914 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.180307 kubelet[2490]: W0813 07:12:43.179942 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.180741 kubelet[2490]: E0813 07:12:43.180707 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.181194 kubelet[2490]: E0813 07:12:43.181036 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.181194 kubelet[2490]: W0813 07:12:43.181051 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.181194 kubelet[2490]: E0813 07:12:43.181075 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.181553 systemd[1]: Started cri-containerd-a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e.scope - libcontainer container a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e. Aug 13 07:12:43.183212 kubelet[2490]: E0813 07:12:43.182954 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.183212 kubelet[2490]: W0813 07:12:43.182974 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.184385 kubelet[2490]: E0813 07:12:43.184168 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.185815 kubelet[2490]: E0813 07:12:43.185473 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.185815 kubelet[2490]: W0813 07:12:43.185540 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.185815 kubelet[2490]: E0813 07:12:43.185712 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.187185 kubelet[2490]: E0813 07:12:43.186871 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.187185 kubelet[2490]: W0813 07:12:43.186890 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.187185 kubelet[2490]: E0813 07:12:43.186957 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.187663 kubelet[2490]: E0813 07:12:43.187597 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.187663 kubelet[2490]: W0813 07:12:43.187613 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.188019 kubelet[2490]: E0813 07:12:43.187822 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.188414 kubelet[2490]: E0813 07:12:43.188266 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.188414 kubelet[2490]: W0813 07:12:43.188280 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.188414 kubelet[2490]: E0813 07:12:43.188322 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.189277 kubelet[2490]: E0813 07:12:43.189022 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.189277 kubelet[2490]: W0813 07:12:43.189047 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.189277 kubelet[2490]: E0813 07:12:43.189105 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.190384 kubelet[2490]: E0813 07:12:43.190337 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.190384 kubelet[2490]: W0813 07:12:43.190350 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.191084 kubelet[2490]: E0813 07:12:43.191049 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.191306 kubelet[2490]: E0813 07:12:43.191221 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.191306 kubelet[2490]: W0813 07:12:43.191238 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.191489 kubelet[2490]: E0813 07:12:43.191373 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.192022 kubelet[2490]: E0813 07:12:43.191788 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.192022 kubelet[2490]: W0813 07:12:43.191804 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.193029 kubelet[2490]: E0813 07:12:43.192250 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.193452 kubelet[2490]: E0813 07:12:43.193432 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.193452 kubelet[2490]: W0813 07:12:43.193446 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.194103 kubelet[2490]: E0813 07:12:43.193884 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.194103 kubelet[2490]: E0813 07:12:43.194069 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.194103 kubelet[2490]: W0813 07:12:43.194080 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.194651 kubelet[2490]: E0813 07:12:43.194404 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.194651 kubelet[2490]: W0813 07:12:43.194418 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.195708 kubelet[2490]: E0813 07:12:43.194991 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.195708 kubelet[2490]: E0813 07:12:43.195069 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.196479 kubelet[2490]: E0813 07:12:43.196330 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.196479 kubelet[2490]: W0813 07:12:43.196473 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.196597 kubelet[2490]: E0813 07:12:43.196536 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.197418 kubelet[2490]: E0813 07:12:43.197127 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.197418 kubelet[2490]: W0813 07:12:43.197142 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.200258 kubelet[2490]: E0813 07:12:43.200215 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.203480 kubelet[2490]: E0813 07:12:43.203264 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.203480 kubelet[2490]: W0813 07:12:43.203299 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.204338 kubelet[2490]: E0813 07:12:43.204300 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.205802 kubelet[2490]: W0813 07:12:43.204805 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.206152 kubelet[2490]: E0813 07:12:43.206132 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.206303 kubelet[2490]: W0813 07:12:43.206269 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.206872 kubelet[2490]: E0813 07:12:43.206424 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.207721 kubelet[2490]: E0813 07:12:43.207652 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.216461 kubelet[2490]: E0813 07:12:43.216311 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.218321 kubelet[2490]: E0813 07:12:43.218291 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.218973 kubelet[2490]: W0813 07:12:43.218606 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.218973 kubelet[2490]: E0813 07:12:43.218652 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:43.226770 kubelet[2490]: E0813 07:12:43.226724 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:43.226770 kubelet[2490]: W0813 07:12:43.226754 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:43.227377 kubelet[2490]: E0813 07:12:43.226790 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:43.258045 containerd[1469]: time="2025-08-13T07:12:43.257903454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xbdpl,Uid:89679830-04c2-4ee3-8923-7aae4799e683,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e\"" Aug 13 07:12:43.549258 systemd[1]: run-containerd-runc-k8s.io-1cfa5b6cdc78ef1a6d74d286b150a93567d6c3d6751be9b810e8212122e6a6af-runc.BSqcdy.mount: Deactivated successfully. Aug 13 07:12:44.551712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917966177.mount: Deactivated successfully. Aug 13 07:12:45.288012 kubelet[2490]: E0813 07:12:45.287906 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6lh" podUID="2c51012b-f93a-4143-8128-6925ff4126f6" Aug 13 07:12:46.092721 containerd[1469]: time="2025-08-13T07:12:46.092618084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:46.094765 containerd[1469]: time="2025-08-13T07:12:46.094612830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 07:12:46.095608 containerd[1469]: time="2025-08-13T07:12:46.095404851Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:46.099260 containerd[1469]: time="2025-08-13T07:12:46.098663272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:46.100078 containerd[1469]: time="2025-08-13T07:12:46.100045809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.945689184s" Aug 13 07:12:46.100282 containerd[1469]: time="2025-08-13T07:12:46.100265442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:12:46.105885 containerd[1469]: time="2025-08-13T07:12:46.105301762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
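The ImageCreate/Pulled records above show containerd fetching ghcr.io/flatcar/calico/typha:v3.30.2 (about 35 MB in roughly 2.9 s), recording both the resolved image ID and the repo digest before the kubelet moves on to pod2daemon-flexvol. Outside the kubelet, the same pull can be reproduced against this node's containerd with its standard Go client; a rough sketch, assuming the default socket path and the k8s.io namespace that CRI-managed images live in:

```go
// Sketch: pull the same image through containerd's Go client, roughly what
// the CRI PullImage path does underneath. Assumes the default containerd
// socket and the "k8s.io" namespace used for CRI-managed images.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (digest %s)", img.Name(), img.Target().Digest)
}
```

WithPullUnpack unpacks the layers into a snapshotter at pull time, which is broadly what has to happen before the CreateContainer call that follows in the log.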
Aug 13 07:12:46.142120 containerd[1469]: time="2025-08-13T07:12:46.142072793Z" level=info msg="CreateContainer within sandbox \"1cfa5b6cdc78ef1a6d74d286b150a93567d6c3d6751be9b810e8212122e6a6af\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:12:46.161484 containerd[1469]: time="2025-08-13T07:12:46.161435939Z" level=info msg="CreateContainer within sandbox \"1cfa5b6cdc78ef1a6d74d286b150a93567d6c3d6751be9b810e8212122e6a6af\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d72d7686e016a4ecacbee597771a7b174bff5993fd96af599fe36d0ed7113413\"" Aug 13 07:12:46.164987 containerd[1469]: time="2025-08-13T07:12:46.164934431Z" level=info msg="StartContainer for \"d72d7686e016a4ecacbee597771a7b174bff5993fd96af599fe36d0ed7113413\"" Aug 13 07:12:46.256016 systemd[1]: Started cri-containerd-d72d7686e016a4ecacbee597771a7b174bff5993fd96af599fe36d0ed7113413.scope - libcontainer container d72d7686e016a4ecacbee597771a7b174bff5993fd96af599fe36d0ed7113413. Aug 13 07:12:46.335856 containerd[1469]: time="2025-08-13T07:12:46.335721311Z" level=info msg="StartContainer for \"d72d7686e016a4ecacbee597771a7b174bff5993fd96af599fe36d0ed7113413\" returns successfully" Aug 13 07:12:46.445092 kubelet[2490]: E0813 07:12:46.445006 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:46.480807 kubelet[2490]: E0813 07:12:46.480492 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.480807 kubelet[2490]: W0813 07:12:46.480526 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.480807 kubelet[2490]: E0813 07:12:46.480559 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.481466 kubelet[2490]: E0813 07:12:46.480998 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.481466 kubelet[2490]: W0813 07:12:46.481014 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.481466 kubelet[2490]: E0813 07:12:46.481035 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.483694 kubelet[2490]: E0813 07:12:46.483462 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.483694 kubelet[2490]: W0813 07:12:46.483494 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.483694 kubelet[2490]: E0813 07:12:46.483523 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
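Between the two pulls, the typha container itself comes up: CreateContainer returns container id d72d7686…, StartContainer launches it, and systemd registers the matching cri-containerd-….scope unit before containerd confirms the start. Driven directly through containerd's Go client instead of the CRI layer, that create/start sequence looks roughly like the sketch below; the "demo" identifiers are placeholders, not values from this log:

```go
// Sketch of the CreateContainer/StartContainer sequence visible above, using
// containerd's Go client directly. Illustrative only; the kubelet drives the
// equivalent steps through CRI.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2")
	if err != nil {
		log.Fatal(err)
	}
	// CreateContainer: metadata plus a snapshot and an OCI runtime spec
	// derived from the image config.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the task (shim + runc) and start it; this is
	// roughly where systemd shows the cri-containerd-<id>.scope unit.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s (pid %d)", container.ID(), task.Pid())
}
```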
Aug 13 07:12:46.484222 kubelet[2490]: E0813 07:12:46.484066 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.484222 kubelet[2490]: W0813 07:12:46.484086 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.484222 kubelet[2490]: E0813 07:12:46.484109 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.485616 kubelet[2490]: E0813 07:12:46.485488 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.485616 kubelet[2490]: W0813 07:12:46.485517 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.485616 kubelet[2490]: E0813 07:12:46.485540 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.486334 kubelet[2490]: E0813 07:12:46.486156 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.486973 kubelet[2490]: W0813 07:12:46.486726 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.486973 kubelet[2490]: E0813 07:12:46.486801 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.489347 kubelet[2490]: E0813 07:12:46.489317 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.490398 kubelet[2490]: W0813 07:12:46.489998 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.490398 kubelet[2490]: E0813 07:12:46.490044 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.491163 kubelet[2490]: E0813 07:12:46.490842 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.491163 kubelet[2490]: W0813 07:12:46.490864 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.491163 kubelet[2490]: E0813 07:12:46.490888 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:46.492158 kubelet[2490]: E0813 07:12:46.492137 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.492158 kubelet[2490]: W0813 07:12:46.492236 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.493077 kubelet[2490]: E0813 07:12:46.492361 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.494381 kubelet[2490]: E0813 07:12:46.494049 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.494381 kubelet[2490]: W0813 07:12:46.494071 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.494381 kubelet[2490]: E0813 07:12:46.494093 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.496694 kubelet[2490]: E0813 07:12:46.495294 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.496694 kubelet[2490]: W0813 07:12:46.495313 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.496694 kubelet[2490]: E0813 07:12:46.495333 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.496694 kubelet[2490]: E0813 07:12:46.495962 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.496694 kubelet[2490]: W0813 07:12:46.495977 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.496694 kubelet[2490]: E0813 07:12:46.495995 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.497980 kubelet[2490]: E0813 07:12:46.497049 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.497980 kubelet[2490]: W0813 07:12:46.497074 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.497980 kubelet[2490]: E0813 07:12:46.497100 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:46.497980 kubelet[2490]: E0813 07:12:46.497739 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.497980 kubelet[2490]: W0813 07:12:46.497752 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.497980 kubelet[2490]: E0813 07:12:46.497768 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.498582 kubelet[2490]: E0813 07:12:46.498209 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.498582 kubelet[2490]: W0813 07:12:46.498220 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.498582 kubelet[2490]: E0813 07:12:46.498345 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.513964 kubelet[2490]: E0813 07:12:46.513904 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.513964 kubelet[2490]: W0813 07:12:46.513933 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.513964 kubelet[2490]: E0813 07:12:46.514006 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.514896 kubelet[2490]: E0813 07:12:46.514854 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.514896 kubelet[2490]: W0813 07:12:46.514892 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.515011 kubelet[2490]: E0813 07:12:46.514919 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.515975 kubelet[2490]: E0813 07:12:46.515957 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.515975 kubelet[2490]: W0813 07:12:46.515974 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.516213 kubelet[2490]: E0813 07:12:46.515997 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:46.516252 kubelet[2490]: E0813 07:12:46.516233 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.516252 kubelet[2490]: W0813 07:12:46.516243 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.516378 kubelet[2490]: E0813 07:12:46.516341 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.517525 kubelet[2490]: E0813 07:12:46.517355 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.517525 kubelet[2490]: W0813 07:12:46.517373 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.517525 kubelet[2490]: E0813 07:12:46.517391 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.517728 kubelet[2490]: E0813 07:12:46.517716 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.518089 kubelet[2490]: W0813 07:12:46.517943 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.518089 kubelet[2490]: E0813 07:12:46.518029 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.518637 kubelet[2490]: E0813 07:12:46.518610 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.518637 kubelet[2490]: W0813 07:12:46.518623 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.519195 kubelet[2490]: E0813 07:12:46.518906 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.519665 kubelet[2490]: E0813 07:12:46.519463 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.519665 kubelet[2490]: W0813 07:12:46.519476 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.519665 kubelet[2490]: E0813 07:12:46.519491 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:46.519886 kubelet[2490]: E0813 07:12:46.519844 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.519886 kubelet[2490]: W0813 07:12:46.519872 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.520167 kubelet[2490]: E0813 07:12:46.520035 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.520420 kubelet[2490]: E0813 07:12:46.520394 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.520420 kubelet[2490]: W0813 07:12:46.520407 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.520673 kubelet[2490]: E0813 07:12:46.520564 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.520809 kubelet[2490]: E0813 07:12:46.520792 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.520972 kubelet[2490]: W0813 07:12:46.520866 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.521069 kubelet[2490]: E0813 07:12:46.521031 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.521843 kubelet[2490]: E0813 07:12:46.521693 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.521843 kubelet[2490]: W0813 07:12:46.521707 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.521843 kubelet[2490]: E0813 07:12:46.521722 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.522042 kubelet[2490]: E0813 07:12:46.522025 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.522102 kubelet[2490]: W0813 07:12:46.522092 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.522477 kubelet[2490]: E0813 07:12:46.522204 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:46.522781 kubelet[2490]: E0813 07:12:46.522768 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.522881 kubelet[2490]: W0813 07:12:46.522868 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.522971 kubelet[2490]: E0813 07:12:46.522960 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.523281 kubelet[2490]: E0813 07:12:46.523268 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.523367 kubelet[2490]: W0813 07:12:46.523356 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.523473 kubelet[2490]: E0813 07:12:46.523423 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.523773 kubelet[2490]: E0813 07:12:46.523717 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.523773 kubelet[2490]: W0813 07:12:46.523728 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.524277 kubelet[2490]: E0813 07:12:46.523807 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.524493 kubelet[2490]: E0813 07:12:46.524480 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.524560 kubelet[2490]: W0813 07:12:46.524550 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.524616 kubelet[2490]: E0813 07:12:46.524607 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:46.524901 kubelet[2490]: E0813 07:12:46.524889 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:46.524980 kubelet[2490]: W0813 07:12:46.524969 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:46.525027 kubelet[2490]: E0813 07:12:46.525019 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:47.285206 kubelet[2490]: E0813 07:12:47.285094 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6lh" podUID="2c51012b-f93a-4143-8128-6925ff4126f6" Aug 13 07:12:47.446609 kubelet[2490]: I0813 07:12:47.446487 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:12:47.447162 kubelet[2490]: E0813 07:12:47.446904 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:47.505881 kubelet[2490]: E0813 07:12:47.505756 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.505881 kubelet[2490]: W0813 07:12:47.505806 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.505881 kubelet[2490]: E0813 07:12:47.505832 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.507146 kubelet[2490]: E0813 07:12:47.506966 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.507146 kubelet[2490]: W0813 07:12:47.507003 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.507146 kubelet[2490]: E0813 07:12:47.507023 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.507786 kubelet[2490]: E0813 07:12:47.507559 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.507786 kubelet[2490]: W0813 07:12:47.507573 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.507786 kubelet[2490]: E0813 07:12:47.507586 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.508263 kubelet[2490]: E0813 07:12:47.508099 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.508263 kubelet[2490]: W0813 07:12:47.508116 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.508263 kubelet[2490]: E0813 07:12:47.508131 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:47.508646 kubelet[2490]: E0813 07:12:47.508455 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.508646 kubelet[2490]: W0813 07:12:47.508468 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.508646 kubelet[2490]: E0813 07:12:47.508479 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.508798 kubelet[2490]: E0813 07:12:47.508789 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.508829 kubelet[2490]: W0813 07:12:47.508798 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.508829 kubelet[2490]: E0813 07:12:47.508808 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.509371 kubelet[2490]: E0813 07:12:47.509054 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.509371 kubelet[2490]: W0813 07:12:47.509068 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.509371 kubelet[2490]: E0813 07:12:47.509092 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.509792 kubelet[2490]: E0813 07:12:47.509664 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.509792 kubelet[2490]: W0813 07:12:47.509677 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.509792 kubelet[2490]: E0813 07:12:47.509688 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.509900 kubelet[2490]: E0813 07:12:47.509895 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.509938 kubelet[2490]: W0813 07:12:47.509902 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.509938 kubelet[2490]: E0813 07:12:47.509911 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:47.510214 kubelet[2490]: E0813 07:12:47.510139 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.510214 kubelet[2490]: W0813 07:12:47.510168 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.510214 kubelet[2490]: E0813 07:12:47.510207 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.510857 kubelet[2490]: E0813 07:12:47.510840 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.510857 kubelet[2490]: W0813 07:12:47.510854 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.511133 kubelet[2490]: E0813 07:12:47.510865 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.511133 kubelet[2490]: E0813 07:12:47.511108 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.511229 kubelet[2490]: W0813 07:12:47.511146 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.511229 kubelet[2490]: E0813 07:12:47.511157 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.513678 kubelet[2490]: E0813 07:12:47.513597 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.513678 kubelet[2490]: W0813 07:12:47.513614 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.513678 kubelet[2490]: E0813 07:12:47.513628 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.513858 kubelet[2490]: E0813 07:12:47.513827 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.513858 kubelet[2490]: W0813 07:12:47.513836 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.513858 kubelet[2490]: E0813 07:12:47.513845 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:47.514088 kubelet[2490]: E0813 07:12:47.514044 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.514088 kubelet[2490]: W0813 07:12:47.514055 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.514088 kubelet[2490]: E0813 07:12:47.514064 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.527994 kubelet[2490]: E0813 07:12:47.527509 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.527994 kubelet[2490]: W0813 07:12:47.527538 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.527994 kubelet[2490]: E0813 07:12:47.527562 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.528920 kubelet[2490]: E0813 07:12:47.528892 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.529297 kubelet[2490]: W0813 07:12:47.529040 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.529297 kubelet[2490]: E0813 07:12:47.529078 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.530002 kubelet[2490]: E0813 07:12:47.529980 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.530508 kubelet[2490]: W0813 07:12:47.530224 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.530784 kubelet[2490]: E0813 07:12:47.530757 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.531162 kubelet[2490]: E0813 07:12:47.531145 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.531228 kubelet[2490]: W0813 07:12:47.531163 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.531441 kubelet[2490]: E0813 07:12:47.531421 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:47.531995 kubelet[2490]: E0813 07:12:47.531972 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.531995 kubelet[2490]: W0813 07:12:47.531991 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.532315 kubelet[2490]: E0813 07:12:47.532200 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.533438 kubelet[2490]: E0813 07:12:47.533421 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.533597 kubelet[2490]: W0813 07:12:47.533501 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.533849 kubelet[2490]: E0813 07:12:47.533791 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.534022 kubelet[2490]: E0813 07:12:47.533941 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.534022 kubelet[2490]: W0813 07:12:47.533952 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.534022 kubelet[2490]: E0813 07:12:47.533996 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.534702 kubelet[2490]: E0813 07:12:47.534628 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.534702 kubelet[2490]: W0813 07:12:47.534641 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.534855 kubelet[2490]: E0813 07:12:47.534775 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.535379 kubelet[2490]: E0813 07:12:47.535200 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.535379 kubelet[2490]: W0813 07:12:47.535214 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.535379 kubelet[2490]: E0813 07:12:47.535229 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:47.536253 kubelet[2490]: E0813 07:12:47.535823 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.536253 kubelet[2490]: W0813 07:12:47.535860 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.536253 kubelet[2490]: E0813 07:12:47.535897 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.537157 kubelet[2490]: E0813 07:12:47.536716 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.537157 kubelet[2490]: W0813 07:12:47.536805 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.537446 kubelet[2490]: E0813 07:12:47.537425 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.537801 kubelet[2490]: E0813 07:12:47.537680 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.537801 kubelet[2490]: W0813 07:12:47.537693 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.538435 kubelet[2490]: E0813 07:12:47.538072 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.539193 kubelet[2490]: E0813 07:12:47.539020 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.539193 kubelet[2490]: W0813 07:12:47.539037 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.540066 kubelet[2490]: E0813 07:12:47.539355 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:47.541421 kubelet[2490]: E0813 07:12:47.541025 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.541421 kubelet[2490]: W0813 07:12:47.541052 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.541421 kubelet[2490]: E0813 07:12:47.541344 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.541421 kubelet[2490]: W0813 07:12:47.541355 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.541585 kubelet[2490]: E0813 07:12:47.541514 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.541585 kubelet[2490]: W0813 07:12:47.541526 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.541585 kubelet[2490]: E0813 07:12:47.541538 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.542407 kubelet[2490]: E0813 07:12:47.541743 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.542497 kubelet[2490]: E0813 07:12:47.542464 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.542497 kubelet[2490]: W0813 07:12:47.542484 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.542559 kubelet[2490]: E0813 07:12:47.542502 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.543043 kubelet[2490]: E0813 07:12:47.542624 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:12:47.544813 kubelet[2490]: E0813 07:12:47.544771 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:12:47.544813 kubelet[2490]: W0813 07:12:47.544800 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:12:47.545236 kubelet[2490]: E0813 07:12:47.544836 2490 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:12:47.550133 containerd[1469]: time="2025-08-13T07:12:47.549706914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:47.554209 containerd[1469]: time="2025-08-13T07:12:47.553701627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 07:12:47.554209 containerd[1469]: time="2025-08-13T07:12:47.553828454Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:47.557418 containerd[1469]: time="2025-08-13T07:12:47.557336709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:47.557859 containerd[1469]: time="2025-08-13T07:12:47.557817697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.451711878s" Aug 13 07:12:47.557966 containerd[1469]: time="2025-08-13T07:12:47.557864040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:12:47.562589 containerd[1469]: time="2025-08-13T07:12:47.562550043Z" level=info msg="CreateContainer within sandbox \"a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:12:47.580203 containerd[1469]: time="2025-08-13T07:12:47.580053058Z" level=info msg="CreateContainer within sandbox \"a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe\"" Aug 13 07:12:47.581318 containerd[1469]: time="2025-08-13T07:12:47.581284533Z" level=info msg="StartContainer for \"712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe\"" Aug 13 07:12:47.624479 systemd[1]: Started cri-containerd-712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe.scope - libcontainer container 712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe. Aug 13 07:12:47.666370 containerd[1469]: time="2025-08-13T07:12:47.666295817Z" level=info msg="StartContainer for \"712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe\" returns successfully" Aug 13 07:12:47.681711 systemd[1]: cri-containerd-712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe.scope: Deactivated successfully. Aug 13 07:12:47.711332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe-rootfs.mount: Deactivated successfully. 
Aug 13 07:12:47.735238 containerd[1469]: time="2025-08-13T07:12:47.716113730Z" level=info msg="shim disconnected" id=712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe namespace=k8s.io Aug 13 07:12:47.735437 containerd[1469]: time="2025-08-13T07:12:47.735244559Z" level=warning msg="cleaning up after shim disconnected" id=712e12105ed974798def4bd3f735cec878b4dd915fc5fee6e2e56d35926c7ebe namespace=k8s.io Aug 13 07:12:47.735437 containerd[1469]: time="2025-08-13T07:12:47.735265505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:12:48.453662 containerd[1469]: time="2025-08-13T07:12:48.453584848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:12:48.484604 kubelet[2490]: I0813 07:12:48.483653 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-865799d9d7-759wc" podStartSLOduration=3.523531733 podStartE2EDuration="6.483628841s" podCreationTimestamp="2025-08-13 07:12:42 +0000 UTC" firstStartedPulling="2025-08-13 07:12:43.144747317 +0000 UTC m=+23.085206161" lastFinishedPulling="2025-08-13 07:12:46.104844437 +0000 UTC m=+26.045303269" observedRunningTime="2025-08-13 07:12:46.4929923 +0000 UTC m=+26.433451154" watchObservedRunningTime="2025-08-13 07:12:48.483628841 +0000 UTC m=+28.424087722" Aug 13 07:12:49.285759 kubelet[2490]: E0813 07:12:49.285535 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6lh" podUID="2c51012b-f93a-4143-8128-6925ff4126f6" Aug 13 07:12:51.285239 kubelet[2490]: E0813 07:12:51.284830 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6lh" podUID="2c51012b-f93a-4143-8128-6925ff4126f6" Aug 13 07:12:52.529981 containerd[1469]: time="2025-08-13T07:12:52.529891971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:52.530971 containerd[1469]: time="2025-08-13T07:12:52.530633613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:12:52.531774 containerd[1469]: time="2025-08-13T07:12:52.531478907Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:52.533805 containerd[1469]: time="2025-08-13T07:12:52.533766451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:12:52.534878 containerd[1469]: time="2025-08-13T07:12:52.534761119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.081093572s" Aug 13 07:12:52.535003 containerd[1469]: time="2025-08-13T07:12:52.534983991Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:12:52.539004 containerd[1469]: time="2025-08-13T07:12:52.538854277Z" level=info msg="CreateContainer within sandbox \"a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:12:52.568856 containerd[1469]: time="2025-08-13T07:12:52.568748653Z" level=info msg="CreateContainer within sandbox \"a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868\"" Aug 13 07:12:52.571276 containerd[1469]: time="2025-08-13T07:12:52.570168281Z" level=info msg="StartContainer for \"be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868\"" Aug 13 07:12:52.648523 systemd[1]: Started cri-containerd-be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868.scope - libcontainer container be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868. Aug 13 07:12:52.691656 containerd[1469]: time="2025-08-13T07:12:52.691597159Z" level=info msg="StartContainer for \"be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868\" returns successfully" Aug 13 07:12:53.285702 kubelet[2490]: E0813 07:12:53.285643 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5l6lh" podUID="2c51012b-f93a-4143-8128-6925ff4126f6" Aug 13 07:12:53.343314 systemd[1]: cri-containerd-be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868.scope: Deactivated successfully. Aug 13 07:12:53.384460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868-rootfs.mount: Deactivated successfully. Aug 13 07:12:53.388168 containerd[1469]: time="2025-08-13T07:12:53.387575815Z" level=info msg="shim disconnected" id=be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868 namespace=k8s.io Aug 13 07:12:53.388168 containerd[1469]: time="2025-08-13T07:12:53.387760104Z" level=warning msg="cleaning up after shim disconnected" id=be9d9726bd1249b2609d253239b1603ac2b0476ef989ceffe2954b26966fc868 namespace=k8s.io Aug 13 07:12:53.388168 containerd[1469]: time="2025-08-13T07:12:53.387775890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:12:53.404635 kubelet[2490]: I0813 07:12:53.404561 2490 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:12:53.463727 systemd[1]: Created slice kubepods-burstable-pod7e60244f_7e5f_41c3_a2a5_c9680ee5ed60.slice - libcontainer container kubepods-burstable-pod7e60244f_7e5f_41c3_a2a5_c9680ee5ed60.slice. 
Aug 13 07:12:53.480903 kubelet[2490]: I0813 07:12:53.479454 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqgpl\" (UniqueName: \"kubernetes.io/projected/7e60244f-7e5f-41c3-a2a5-c9680ee5ed60-kube-api-access-lqgpl\") pod \"coredns-7c65d6cfc9-gpfnk\" (UID: \"7e60244f-7e5f-41c3-a2a5-c9680ee5ed60\") " pod="kube-system/coredns-7c65d6cfc9-gpfnk" Aug 13 07:12:53.480903 kubelet[2490]: I0813 07:12:53.479528 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/76bc530d-8815-4ca7-9b4b-bf84496266e1-calico-apiserver-certs\") pod \"calico-apiserver-8bc87ff86-z99t9\" (UID: \"76bc530d-8815-4ca7-9b4b-bf84496266e1\") " pod="calico-apiserver/calico-apiserver-8bc87ff86-z99t9" Aug 13 07:12:53.480903 kubelet[2490]: I0813 07:12:53.479560 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-backend-key-pair\") pod \"whisker-8dd745b58-c25pn\" (UID: \"6c49a143-9edc-4cb7-9608-17443dbfdb59\") " pod="calico-system/whisker-8dd745b58-c25pn" Aug 13 07:12:53.480903 kubelet[2490]: I0813 07:12:53.479590 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-ca-bundle\") pod \"whisker-8dd745b58-c25pn\" (UID: \"6c49a143-9edc-4cb7-9608-17443dbfdb59\") " pod="calico-system/whisker-8dd745b58-c25pn" Aug 13 07:12:53.480903 kubelet[2490]: I0813 07:12:53.479619 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e60244f-7e5f-41c3-a2a5-c9680ee5ed60-config-volume\") pod \"coredns-7c65d6cfc9-gpfnk\" (UID: \"7e60244f-7e5f-41c3-a2a5-c9680ee5ed60\") " pod="kube-system/coredns-7c65d6cfc9-gpfnk" Aug 13 07:12:53.481317 kubelet[2490]: I0813 07:12:53.479648 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7kh\" (UniqueName: \"kubernetes.io/projected/76bc530d-8815-4ca7-9b4b-bf84496266e1-kube-api-access-wq7kh\") pod \"calico-apiserver-8bc87ff86-z99t9\" (UID: \"76bc530d-8815-4ca7-9b4b-bf84496266e1\") " pod="calico-apiserver/calico-apiserver-8bc87ff86-z99t9" Aug 13 07:12:53.481317 kubelet[2490]: I0813 07:12:53.479675 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bac3cbef-457c-46f3-9f40-8d0637f6d8c5-goldmane-key-pair\") pod \"goldmane-58fd7646b9-svvdd\" (UID: \"bac3cbef-457c-46f3-9f40-8d0637f6d8c5\") " pod="calico-system/goldmane-58fd7646b9-svvdd" Aug 13 07:12:53.481317 kubelet[2490]: I0813 07:12:53.479701 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q2gl\" (UniqueName: \"kubernetes.io/projected/6c49a143-9edc-4cb7-9608-17443dbfdb59-kube-api-access-2q2gl\") pod \"whisker-8dd745b58-c25pn\" (UID: \"6c49a143-9edc-4cb7-9608-17443dbfdb59\") " pod="calico-system/whisker-8dd745b58-c25pn" Aug 13 07:12:53.481317 kubelet[2490]: I0813 07:12:53.479726 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bac3cbef-457c-46f3-9f40-8d0637f6d8c5-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-svvdd\" (UID: \"bac3cbef-457c-46f3-9f40-8d0637f6d8c5\") " pod="calico-system/goldmane-58fd7646b9-svvdd" Aug 13 07:12:53.481317 kubelet[2490]: I0813 07:12:53.479752 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2d1571e3-4e3e-4f96-8193-1271f6b023c0-calico-apiserver-certs\") pod \"calico-apiserver-8bc87ff86-87hxl\" (UID: \"2d1571e3-4e3e-4f96-8193-1271f6b023c0\") " pod="calico-apiserver/calico-apiserver-8bc87ff86-87hxl" Aug 13 07:12:53.481562 kubelet[2490]: I0813 07:12:53.479777 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxprn\" (UniqueName: \"kubernetes.io/projected/2d1571e3-4e3e-4f96-8193-1271f6b023c0-kube-api-access-jxprn\") pod \"calico-apiserver-8bc87ff86-87hxl\" (UID: \"2d1571e3-4e3e-4f96-8193-1271f6b023c0\") " pod="calico-apiserver/calico-apiserver-8bc87ff86-87hxl" Aug 13 07:12:53.481562 kubelet[2490]: I0813 07:12:53.479808 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60505575-c00e-488e-b314-2e2a9dc9b111-tigera-ca-bundle\") pod \"calico-kube-controllers-6d686c46f5-79swt\" (UID: \"60505575-c00e-488e-b314-2e2a9dc9b111\") " pod="calico-system/calico-kube-controllers-6d686c46f5-79swt" Aug 13 07:12:53.481562 kubelet[2490]: I0813 07:12:53.479835 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx8bp\" (UniqueName: \"kubernetes.io/projected/60505575-c00e-488e-b314-2e2a9dc9b111-kube-api-access-rx8bp\") pod \"calico-kube-controllers-6d686c46f5-79swt\" (UID: \"60505575-c00e-488e-b314-2e2a9dc9b111\") " pod="calico-system/calico-kube-controllers-6d686c46f5-79swt" Aug 13 07:12:53.481562 kubelet[2490]: I0813 07:12:53.479867 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q662g\" (UniqueName: \"kubernetes.io/projected/bac3cbef-457c-46f3-9f40-8d0637f6d8c5-kube-api-access-q662g\") pod \"goldmane-58fd7646b9-svvdd\" (UID: \"bac3cbef-457c-46f3-9f40-8d0637f6d8c5\") " pod="calico-system/goldmane-58fd7646b9-svvdd" Aug 13 07:12:53.481562 kubelet[2490]: I0813 07:12:53.479896 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc80eeef-1a82-4810-afd6-bf52e91a865f-config-volume\") pod \"coredns-7c65d6cfc9-rjt66\" (UID: \"bc80eeef-1a82-4810-afd6-bf52e91a865f\") " pod="kube-system/coredns-7c65d6cfc9-rjt66" Aug 13 07:12:53.481799 kubelet[2490]: I0813 07:12:53.479921 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tc96\" (UniqueName: \"kubernetes.io/projected/bc80eeef-1a82-4810-afd6-bf52e91a865f-kube-api-access-6tc96\") pod \"coredns-7c65d6cfc9-rjt66\" (UID: \"bc80eeef-1a82-4810-afd6-bf52e91a865f\") " pod="kube-system/coredns-7c65d6cfc9-rjt66" Aug 13 07:12:53.481799 kubelet[2490]: I0813 07:12:53.479957 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac3cbef-457c-46f3-9f40-8d0637f6d8c5-config\") pod \"goldmane-58fd7646b9-svvdd\" (UID: \"bac3cbef-457c-46f3-9f40-8d0637f6d8c5\") " 
pod="calico-system/goldmane-58fd7646b9-svvdd" Aug 13 07:12:53.487722 kubelet[2490]: W0813 07:12:53.483827 2490 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081.3.5-a-2a2ab8bcea" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-a-2a2ab8bcea' and this object Aug 13 07:12:53.487722 kubelet[2490]: E0813 07:12:53.483868 2490 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081.3.5-a-2a2ab8bcea\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-a-2a2ab8bcea' and this object" logger="UnhandledError" Aug 13 07:12:53.487722 kubelet[2490]: W0813 07:12:53.483956 2490 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4081.3.5-a-2a2ab8bcea" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.5-a-2a2ab8bcea' and this object Aug 13 07:12:53.487722 kubelet[2490]: E0813 07:12:53.483969 2490 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.5-a-2a2ab8bcea\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.5-a-2a2ab8bcea' and this object" logger="UnhandledError" Aug 13 07:12:53.489809 systemd[1]: Created slice kubepods-burstable-podbc80eeef_1a82_4810_afd6_bf52e91a865f.slice - libcontainer container kubepods-burstable-podbc80eeef_1a82_4810_afd6_bf52e91a865f.slice. Aug 13 07:12:53.514114 systemd[1]: Created slice kubepods-besteffort-pod60505575_c00e_488e_b314_2e2a9dc9b111.slice - libcontainer container kubepods-besteffort-pod60505575_c00e_488e_b314_2e2a9dc9b111.slice. Aug 13 07:12:53.515593 containerd[1469]: time="2025-08-13T07:12:53.515492514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:12:53.530495 systemd[1]: Created slice kubepods-besteffort-pod2d1571e3_4e3e_4f96_8193_1271f6b023c0.slice - libcontainer container kubepods-besteffort-pod2d1571e3_4e3e_4f96_8193_1271f6b023c0.slice. Aug 13 07:12:53.543455 systemd[1]: Created slice kubepods-besteffort-pod6c49a143_9edc_4cb7_9608_17443dbfdb59.slice - libcontainer container kubepods-besteffort-pod6c49a143_9edc_4cb7_9608_17443dbfdb59.slice. Aug 13 07:12:53.558596 systemd[1]: Created slice kubepods-besteffort-pod76bc530d_8815_4ca7_9b4b_bf84496266e1.slice - libcontainer container kubepods-besteffort-pod76bc530d_8815_4ca7_9b4b_bf84496266e1.slice. Aug 13 07:12:53.570272 systemd[1]: Created slice kubepods-besteffort-podbac3cbef_457c_46f3_9f40_8d0637f6d8c5.slice - libcontainer container kubepods-besteffort-podbac3cbef_457c_46f3_9f40_8d0637f6d8c5.slice. 
Aug 13 07:12:53.774596 kubelet[2490]: E0813 07:12:53.774537 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:53.775861 containerd[1469]: time="2025-08-13T07:12:53.775418500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gpfnk,Uid:7e60244f-7e5f-41c3-a2a5-c9680ee5ed60,Namespace:kube-system,Attempt:0,}" Aug 13 07:12:53.804431 kubelet[2490]: E0813 07:12:53.804310 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:12:53.807581 containerd[1469]: time="2025-08-13T07:12:53.805591668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rjt66,Uid:bc80eeef-1a82-4810-afd6-bf52e91a865f,Namespace:kube-system,Attempt:0,}" Aug 13 07:12:53.829162 containerd[1469]: time="2025-08-13T07:12:53.829110591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d686c46f5-79swt,Uid:60505575-c00e-488e-b314-2e2a9dc9b111,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:53.844417 containerd[1469]: time="2025-08-13T07:12:53.844137128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bc87ff86-87hxl,Uid:2d1571e3-4e3e-4f96-8193-1271f6b023c0,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:12:53.889899 containerd[1469]: time="2025-08-13T07:12:53.889375323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-svvdd,Uid:bac3cbef-457c-46f3-9f40-8d0637f6d8c5,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:53.897734 containerd[1469]: time="2025-08-13T07:12:53.897336237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bc87ff86-z99t9,Uid:76bc530d-8815-4ca7-9b4b-bf84496266e1,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:12:54.186635 containerd[1469]: time="2025-08-13T07:12:54.186572092Z" level=error msg="Failed to destroy network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.191019 containerd[1469]: time="2025-08-13T07:12:54.190957540Z" level=error msg="encountered an error cleaning up failed sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.191364 containerd[1469]: time="2025-08-13T07:12:54.191335020Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bc87ff86-z99t9,Uid:76bc530d-8815-4ca7-9b4b-bf84496266e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.193919 containerd[1469]: time="2025-08-13T07:12:54.191277753Z" level=error msg="Failed to destroy network for sandbox 
\"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.193919 containerd[1469]: time="2025-08-13T07:12:54.193863347Z" level=error msg="encountered an error cleaning up failed sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.194074 containerd[1469]: time="2025-08-13T07:12:54.193937379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-svvdd,Uid:bac3cbef-457c-46f3-9f40-8d0637f6d8c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.201084 kubelet[2490]: E0813 07:12:54.200643 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.201084 kubelet[2490]: E0813 07:12:54.200728 2490 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-svvdd" Aug 13 07:12:54.201084 kubelet[2490]: E0813 07:12:54.200755 2490 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-svvdd" Aug 13 07:12:54.201373 kubelet[2490]: E0813 07:12:54.200797 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-svvdd_calico-system(bac3cbef-457c-46f3-9f40-8d0637f6d8c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-svvdd_calico-system(bac3cbef-457c-46f3-9f40-8d0637f6d8c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-svvdd" podUID="bac3cbef-457c-46f3-9f40-8d0637f6d8c5" Aug 13 07:12:54.202192 kubelet[2490]: E0813 07:12:54.201746 2490 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.202192 kubelet[2490]: E0813 07:12:54.201812 2490 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bc87ff86-z99t9" Aug 13 07:12:54.202192 kubelet[2490]: E0813 07:12:54.201832 2490 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bc87ff86-z99t9" Aug 13 07:12:54.202367 kubelet[2490]: E0813 07:12:54.201871 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bc87ff86-z99t9_calico-apiserver(76bc530d-8815-4ca7-9b4b-bf84496266e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bc87ff86-z99t9_calico-apiserver(76bc530d-8815-4ca7-9b4b-bf84496266e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bc87ff86-z99t9" podUID="76bc530d-8815-4ca7-9b4b-bf84496266e1" Aug 13 07:12:54.222762 containerd[1469]: time="2025-08-13T07:12:54.222593582Z" level=error msg="Failed to destroy network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.224714 containerd[1469]: time="2025-08-13T07:12:54.224576709Z" level=error msg="encountered an error cleaning up failed sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.224714 containerd[1469]: time="2025-08-13T07:12:54.224664206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rjt66,Uid:bc80eeef-1a82-4810-afd6-bf52e91a865f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Aug 13 07:12:54.225360 kubelet[2490]: E0813 07:12:54.225259 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.225360 kubelet[2490]: E0813 07:12:54.225338 2490 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rjt66" Aug 13 07:12:54.225360 kubelet[2490]: E0813 07:12:54.225365 2490 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rjt66" Aug 13 07:12:54.225700 kubelet[2490]: E0813 07:12:54.225645 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rjt66_kube-system(bc80eeef-1a82-4810-afd6-bf52e91a865f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rjt66_kube-system(bc80eeef-1a82-4810-afd6-bf52e91a865f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rjt66" podUID="bc80eeef-1a82-4810-afd6-bf52e91a865f" Aug 13 07:12:54.226717 containerd[1469]: time="2025-08-13T07:12:54.226639047Z" level=error msg="Failed to destroy network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.227541 containerd[1469]: time="2025-08-13T07:12:54.227430331Z" level=error msg="encountered an error cleaning up failed sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.228133 containerd[1469]: time="2025-08-13T07:12:54.228068794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bc87ff86-87hxl,Uid:2d1571e3-4e3e-4f96-8193-1271f6b023c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.228560 kubelet[2490]: E0813 07:12:54.228476 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.228560 kubelet[2490]: E0813 07:12:54.228533 2490 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bc87ff86-87hxl" Aug 13 07:12:54.228560 kubelet[2490]: E0813 07:12:54.228553 2490 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bc87ff86-87hxl" Aug 13 07:12:54.228680 kubelet[2490]: E0813 07:12:54.228598 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bc87ff86-87hxl_calico-apiserver(2d1571e3-4e3e-4f96-8193-1271f6b023c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bc87ff86-87hxl_calico-apiserver(2d1571e3-4e3e-4f96-8193-1271f6b023c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bc87ff86-87hxl" podUID="2d1571e3-4e3e-4f96-8193-1271f6b023c0" Aug 13 07:12:54.232820 containerd[1469]: time="2025-08-13T07:12:54.232670666Z" level=error msg="Failed to destroy network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.235114 containerd[1469]: time="2025-08-13T07:12:54.234866806Z" level=error msg="encountered an error cleaning up failed sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.235114 containerd[1469]: time="2025-08-13T07:12:54.234961813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gpfnk,Uid:7e60244f-7e5f-41c3-a2a5-c9680ee5ed60,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.236231 kubelet[2490]: E0813 07:12:54.235494 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.236231 kubelet[2490]: E0813 07:12:54.235555 2490 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gpfnk" Aug 13 07:12:54.236231 kubelet[2490]: E0813 07:12:54.235575 2490 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gpfnk" Aug 13 07:12:54.236407 kubelet[2490]: E0813 07:12:54.235620 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gpfnk_kube-system(7e60244f-7e5f-41c3-a2a5-c9680ee5ed60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gpfnk_kube-system(7e60244f-7e5f-41c3-a2a5-c9680ee5ed60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gpfnk" podUID="7e60244f-7e5f-41c3-a2a5-c9680ee5ed60" Aug 13 07:12:54.241553 containerd[1469]: time="2025-08-13T07:12:54.241501336Z" level=error msg="Failed to destroy network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.242344 containerd[1469]: time="2025-08-13T07:12:54.242299292Z" level=error msg="encountered an error cleaning up failed sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.242447 containerd[1469]: time="2025-08-13T07:12:54.242383184Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6d686c46f5-79swt,Uid:60505575-c00e-488e-b314-2e2a9dc9b111,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.242637 kubelet[2490]: E0813 07:12:54.242598 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.242698 kubelet[2490]: E0813 07:12:54.242663 2490 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d686c46f5-79swt" Aug 13 07:12:54.242698 kubelet[2490]: E0813 07:12:54.242683 2490 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d686c46f5-79swt" Aug 13 07:12:54.242832 kubelet[2490]: E0813 07:12:54.242730 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d686c46f5-79swt_calico-system(60505575-c00e-488e-b314-2e2a9dc9b111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d686c46f5-79swt_calico-system(60505575-c00e-488e-b314-2e2a9dc9b111)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d686c46f5-79swt" podUID="60505575-c00e-488e-b314-2e2a9dc9b111" Aug 13 07:12:54.513439 kubelet[2490]: I0813 07:12:54.512057 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:12:54.517031 kubelet[2490]: I0813 07:12:54.516083 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:12:54.520751 kubelet[2490]: I0813 07:12:54.520030 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:12:54.524499 containerd[1469]: time="2025-08-13T07:12:54.523381907Z" level=info msg="StopPodSandbox for 
\"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\"" Aug 13 07:12:54.527027 containerd[1469]: time="2025-08-13T07:12:54.525168684Z" level=info msg="StopPodSandbox for \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\"" Aug 13 07:12:54.527027 containerd[1469]: time="2025-08-13T07:12:54.525741468Z" level=info msg="Ensure that sandbox 7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055 in task-service has been cleanup successfully" Aug 13 07:12:54.527402 containerd[1469]: time="2025-08-13T07:12:54.527363526Z" level=info msg="Ensure that sandbox c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b in task-service has been cleanup successfully" Aug 13 07:12:54.528021 containerd[1469]: time="2025-08-13T07:12:54.527961823Z" level=info msg="StopPodSandbox for \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\"" Aug 13 07:12:54.528517 containerd[1469]: time="2025-08-13T07:12:54.528206241Z" level=info msg="Ensure that sandbox b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f in task-service has been cleanup successfully" Aug 13 07:12:54.533269 kubelet[2490]: I0813 07:12:54.533016 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:12:54.535977 kubelet[2490]: I0813 07:12:54.535391 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:12:54.536575 containerd[1469]: time="2025-08-13T07:12:54.536514741Z" level=info msg="StopPodSandbox for \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\"" Aug 13 07:12:54.536808 containerd[1469]: time="2025-08-13T07:12:54.536781213Z" level=info msg="StopPodSandbox for \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\"" Aug 13 07:12:54.537298 containerd[1469]: time="2025-08-13T07:12:54.537248970Z" level=info msg="Ensure that sandbox 9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7 in task-service has been cleanup successfully" Aug 13 07:12:54.538022 containerd[1469]: time="2025-08-13T07:12:54.537991941Z" level=info msg="Ensure that sandbox 9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277 in task-service has been cleanup successfully" Aug 13 07:12:54.546906 kubelet[2490]: I0813 07:12:54.546721 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:12:54.548524 containerd[1469]: time="2025-08-13T07:12:54.547558368Z" level=info msg="StopPodSandbox for \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\"" Aug 13 07:12:54.548662 containerd[1469]: time="2025-08-13T07:12:54.548616950Z" level=info msg="Ensure that sandbox 9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e in task-service has been cleanup successfully" Aug 13 07:12:54.586243 kubelet[2490]: E0813 07:12:54.586195 2490 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Aug 13 07:12:54.586469 kubelet[2490]: E0813 07:12:54.586314 2490 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-ca-bundle podName:6c49a143-9edc-4cb7-9608-17443dbfdb59 nodeName:}" failed. 
No retries permitted until 2025-08-13 07:12:55.086291544 +0000 UTC m=+35.026750375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-ca-bundle") pod "whisker-8dd745b58-c25pn" (UID: "6c49a143-9edc-4cb7-9608-17443dbfdb59") : failed to sync configmap cache: timed out waiting for the condition Aug 13 07:12:54.638413 containerd[1469]: time="2025-08-13T07:12:54.638358046Z" level=error msg="StopPodSandbox for \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\" failed" error="failed to destroy network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.638952 kubelet[2490]: E0813 07:12:54.638898 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:12:54.639074 kubelet[2490]: E0813 07:12:54.638968 2490 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277"} Aug 13 07:12:54.639074 kubelet[2490]: E0813 07:12:54.639048 2490 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc80eeef-1a82-4810-afd6-bf52e91a865f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:54.639211 kubelet[2490]: E0813 07:12:54.639073 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc80eeef-1a82-4810-afd6-bf52e91a865f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rjt66" podUID="bc80eeef-1a82-4810-afd6-bf52e91a865f" Aug 13 07:12:54.677080 containerd[1469]: time="2025-08-13T07:12:54.677026968Z" level=error msg="StopPodSandbox for \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\" failed" error="failed to destroy network for sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.677575 kubelet[2490]: E0813 07:12:54.677518 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:12:54.677683 kubelet[2490]: E0813 07:12:54.677598 2490 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b"} Aug 13 07:12:54.677683 kubelet[2490]: E0813 07:12:54.677645 2490 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bac3cbef-457c-46f3-9f40-8d0637f6d8c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:54.677781 kubelet[2490]: E0813 07:12:54.677677 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bac3cbef-457c-46f3-9f40-8d0637f6d8c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-svvdd" podUID="bac3cbef-457c-46f3-9f40-8d0637f6d8c5" Aug 13 07:12:54.681164 containerd[1469]: time="2025-08-13T07:12:54.681103638Z" level=error msg="StopPodSandbox for \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\" failed" error="failed to destroy network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.681671 kubelet[2490]: E0813 07:12:54.681628 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:12:54.681768 kubelet[2490]: E0813 07:12:54.681688 2490 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055"} Aug 13 07:12:54.681768 kubelet[2490]: E0813 07:12:54.681725 2490 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76bc530d-8815-4ca7-9b4b-bf84496266e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Aug 13 07:12:54.681768 kubelet[2490]: E0813 07:12:54.681748 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76bc530d-8815-4ca7-9b4b-bf84496266e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bc87ff86-z99t9" podUID="76bc530d-8815-4ca7-9b4b-bf84496266e1" Aug 13 07:12:54.685362 containerd[1469]: time="2025-08-13T07:12:54.685317613Z" level=error msg="StopPodSandbox for \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\" failed" error="failed to destroy network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.685727 containerd[1469]: time="2025-08-13T07:12:54.685334958Z" level=error msg="StopPodSandbox for \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\" failed" error="failed to destroy network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.685919 kubelet[2490]: E0813 07:12:54.685741 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:12:54.685919 kubelet[2490]: E0813 07:12:54.685794 2490 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e"} Aug 13 07:12:54.685919 kubelet[2490]: E0813 07:12:54.685840 2490 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e60244f-7e5f-41c3-a2a5-c9680ee5ed60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:54.685919 kubelet[2490]: E0813 07:12:54.685865 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e60244f-7e5f-41c3-a2a5-c9680ee5ed60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-gpfnk" podUID="7e60244f-7e5f-41c3-a2a5-c9680ee5ed60" Aug 13 07:12:54.687022 kubelet[2490]: E0813 07:12:54.686937 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:12:54.687022 kubelet[2490]: E0813 07:12:54.686975 2490 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7"} Aug 13 07:12:54.687022 kubelet[2490]: E0813 07:12:54.686999 2490 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60505575-c00e-488e-b314-2e2a9dc9b111\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:54.687266 kubelet[2490]: E0813 07:12:54.687241 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60505575-c00e-488e-b314-2e2a9dc9b111\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d686c46f5-79swt" podUID="60505575-c00e-488e-b314-2e2a9dc9b111" Aug 13 07:12:54.687563 containerd[1469]: time="2025-08-13T07:12:54.687108197Z" level=error msg="StopPodSandbox for \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\" failed" error="failed to destroy network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:54.687655 kubelet[2490]: E0813 07:12:54.687367 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:12:54.687655 kubelet[2490]: E0813 07:12:54.687404 2490 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f"} Aug 13 07:12:54.687655 kubelet[2490]: E0813 07:12:54.687433 2490 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d1571e3-4e3e-4f96-8193-1271f6b023c0\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:54.687655 kubelet[2490]: E0813 07:12:54.687453 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d1571e3-4e3e-4f96-8193-1271f6b023c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bc87ff86-87hxl" podUID="2d1571e3-4e3e-4f96-8193-1271f6b023c0" Aug 13 07:12:55.297423 systemd[1]: Created slice kubepods-besteffort-pod2c51012b_f93a_4143_8128_6925ff4126f6.slice - libcontainer container kubepods-besteffort-pod2c51012b_f93a_4143_8128_6925ff4126f6.slice. Aug 13 07:12:55.302763 containerd[1469]: time="2025-08-13T07:12:55.302715172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5l6lh,Uid:2c51012b-f93a-4143-8128-6925ff4126f6,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:55.355337 containerd[1469]: time="2025-08-13T07:12:55.355171605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8dd745b58-c25pn,Uid:6c49a143-9edc-4cb7-9608-17443dbfdb59,Namespace:calico-system,Attempt:0,}" Aug 13 07:12:55.560211 containerd[1469]: time="2025-08-13T07:12:55.559986767Z" level=error msg="Failed to destroy network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:55.561385 containerd[1469]: time="2025-08-13T07:12:55.561127299Z" level=error msg="encountered an error cleaning up failed sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:55.561385 containerd[1469]: time="2025-08-13T07:12:55.561232247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5l6lh,Uid:2c51012b-f93a-4143-8128-6925ff4126f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:55.562942 kubelet[2490]: E0813 07:12:55.562882 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:55.565491 kubelet[2490]: E0813 07:12:55.563641 2490 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5l6lh" Aug 13 07:12:55.565491 kubelet[2490]: E0813 07:12:55.563690 2490 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5l6lh" Aug 13 07:12:55.565491 kubelet[2490]: E0813 07:12:55.564169 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5l6lh_calico-system(2c51012b-f93a-4143-8128-6925ff4126f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5l6lh_calico-system(2c51012b-f93a-4143-8128-6925ff4126f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5l6lh" podUID="2c51012b-f93a-4143-8128-6925ff4126f6" Aug 13 07:12:55.564777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87-shm.mount: Deactivated successfully. 
Aug 13 07:12:55.585538 containerd[1469]: time="2025-08-13T07:12:55.585469563Z" level=error msg="Failed to destroy network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:55.586211 containerd[1469]: time="2025-08-13T07:12:55.586146515Z" level=error msg="encountered an error cleaning up failed sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:55.586457 containerd[1469]: time="2025-08-13T07:12:55.586410041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8dd745b58-c25pn,Uid:6c49a143-9edc-4cb7-9608-17443dbfdb59,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:55.587259 kubelet[2490]: E0813 07:12:55.586726 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:55.587259 kubelet[2490]: E0813 07:12:55.586813 2490 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8dd745b58-c25pn" Aug 13 07:12:55.587259 kubelet[2490]: E0813 07:12:55.586863 2490 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8dd745b58-c25pn" Aug 13 07:12:55.587527 kubelet[2490]: E0813 07:12:55.586936 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8dd745b58-c25pn_calico-system(6c49a143-9edc-4cb7-9608-17443dbfdb59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8dd745b58-c25pn_calico-system(6c49a143-9edc-4cb7-9608-17443dbfdb59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-8dd745b58-c25pn" podUID="6c49a143-9edc-4cb7-9608-17443dbfdb59" Aug 13 07:12:55.614630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca-shm.mount: Deactivated successfully. Aug 13 07:12:56.553547 kubelet[2490]: I0813 07:12:56.553501 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:12:56.554427 containerd[1469]: time="2025-08-13T07:12:56.554383739Z" level=info msg="StopPodSandbox for \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\"" Aug 13 07:12:56.557222 containerd[1469]: time="2025-08-13T07:12:56.555530693Z" level=info msg="Ensure that sandbox 2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87 in task-service has been cleanup successfully" Aug 13 07:12:56.557858 kubelet[2490]: I0813 07:12:56.557823 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:12:56.559251 containerd[1469]: time="2025-08-13T07:12:56.559223815Z" level=info msg="StopPodSandbox for \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\"" Aug 13 07:12:56.559634 containerd[1469]: time="2025-08-13T07:12:56.559601313Z" level=info msg="Ensure that sandbox a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca in task-service has been cleanup successfully" Aug 13 07:12:56.608266 containerd[1469]: time="2025-08-13T07:12:56.608215450Z" level=error msg="StopPodSandbox for \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\" failed" error="failed to destroy network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:56.608757 kubelet[2490]: E0813 07:12:56.608708 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:12:56.609351 kubelet[2490]: E0813 07:12:56.608778 2490 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87"} Aug 13 07:12:56.609351 kubelet[2490]: E0813 07:12:56.608897 2490 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c51012b-f93a-4143-8128-6925ff4126f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:56.609351 kubelet[2490]: E0813 07:12:56.608933 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c51012b-f93a-4143-8128-6925ff4126f6\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5l6lh" podUID="2c51012b-f93a-4143-8128-6925ff4126f6" Aug 13 07:12:56.628910 containerd[1469]: time="2025-08-13T07:12:56.628846508Z" level=error msg="StopPodSandbox for \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\" failed" error="failed to destroy network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:12:56.629246 kubelet[2490]: E0813 07:12:56.629155 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:12:56.629319 kubelet[2490]: E0813 07:12:56.629263 2490 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca"} Aug 13 07:12:56.629356 kubelet[2490]: E0813 07:12:56.629320 2490 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c49a143-9edc-4cb7-9608-17443dbfdb59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:12:56.629450 kubelet[2490]: E0813 07:12:56.629360 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c49a143-9edc-4cb7-9608-17443dbfdb59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8dd745b58-c25pn" podUID="6c49a143-9edc-4cb7-9608-17443dbfdb59" Aug 13 07:13:02.081061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146732590.mount: Deactivated successfully. 
Aug 13 07:13:02.205206 containerd[1469]: time="2025-08-13T07:13:02.158383711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:13:02.244464 containerd[1469]: time="2025-08-13T07:13:02.244367902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:02.277093 containerd[1469]: time="2025-08-13T07:13:02.276951891Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:02.279532 containerd[1469]: time="2025-08-13T07:13:02.279426235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:02.291107 containerd[1469]: time="2025-08-13T07:13:02.290693371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.763498239s" Aug 13 07:13:02.291107 containerd[1469]: time="2025-08-13T07:13:02.290891066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:13:02.320643 containerd[1469]: time="2025-08-13T07:13:02.320565186Z" level=info msg="CreateContainer within sandbox \"a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:13:02.397552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017082933.mount: Deactivated successfully. Aug 13 07:13:02.432106 containerd[1469]: time="2025-08-13T07:13:02.432048553Z" level=info msg="CreateContainer within sandbox \"a0f76640e3ab42d89ab46a7b832dfda0dbe326d5198f1d80d4d1c1351bae0a9e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2a0f2ca945260b59987bf6671873abb9181e1a83be1e534b7dc7201acb78b29d\"" Aug 13 07:13:02.433047 containerd[1469]: time="2025-08-13T07:13:02.432965946Z" level=info msg="StartContainer for \"2a0f2ca945260b59987bf6671873abb9181e1a83be1e534b7dc7201acb78b29d\"" Aug 13 07:13:02.608539 systemd[1]: Started cri-containerd-2a0f2ca945260b59987bf6671873abb9181e1a83be1e534b7dc7201acb78b29d.scope - libcontainer container 2a0f2ca945260b59987bf6671873abb9181e1a83be1e534b7dc7201acb78b29d. Aug 13 07:13:02.679249 containerd[1469]: time="2025-08-13T07:13:02.678541261Z" level=info msg="StartContainer for \"2a0f2ca945260b59987bf6671873abb9181e1a83be1e534b7dc7201acb78b29d\" returns successfully" Aug 13 07:13:02.848773 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:13:02.848967 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Aug 13 07:13:03.115067 containerd[1469]: time="2025-08-13T07:13:03.114916707Z" level=info msg="StopPodSandbox for \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\"" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.228 [INFO][3729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.229 [INFO][3729] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" iface="eth0" netns="/var/run/netns/cni-4cc4c500-e95b-593c-f82b-b1628a0da1aa" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.229 [INFO][3729] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" iface="eth0" netns="/var/run/netns/cni-4cc4c500-e95b-593c-f82b-b1628a0da1aa" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.231 [INFO][3729] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" iface="eth0" netns="/var/run/netns/cni-4cc4c500-e95b-593c-f82b-b1628a0da1aa" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.231 [INFO][3729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.231 [INFO][3729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.450 [INFO][3736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.452 [INFO][3736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.453 [INFO][3736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.462 [WARNING][3736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.462 [INFO][3736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.464 [INFO][3736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:03.470471 containerd[1469]: 2025-08-13 07:13:03.467 [INFO][3729] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:03.474591 containerd[1469]: time="2025-08-13T07:13:03.473268826Z" level=info msg="TearDown network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\" successfully" Aug 13 07:13:03.474591 containerd[1469]: time="2025-08-13T07:13:03.473307933Z" level=info msg="StopPodSandbox for \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\" returns successfully" Aug 13 07:13:03.476516 systemd[1]: run-netns-cni\x2d4cc4c500\x2de95b\x2d593c\x2df82b\x2db1628a0da1aa.mount: Deactivated successfully. Aug 13 07:13:03.505679 kubelet[2490]: I0813 07:13:03.505512 2490 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-backend-key-pair\") pod \"6c49a143-9edc-4cb7-9608-17443dbfdb59\" (UID: \"6c49a143-9edc-4cb7-9608-17443dbfdb59\") " Aug 13 07:13:03.505679 kubelet[2490]: I0813 07:13:03.505618 2490 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q2gl\" (UniqueName: \"kubernetes.io/projected/6c49a143-9edc-4cb7-9608-17443dbfdb59-kube-api-access-2q2gl\") pod \"6c49a143-9edc-4cb7-9608-17443dbfdb59\" (UID: \"6c49a143-9edc-4cb7-9608-17443dbfdb59\") " Aug 13 07:13:03.511477 kubelet[2490]: I0813 07:13:03.511433 2490 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-ca-bundle\") pod \"6c49a143-9edc-4cb7-9608-17443dbfdb59\" (UID: \"6c49a143-9edc-4cb7-9608-17443dbfdb59\") " Aug 13 07:13:03.530325 systemd[1]: var-lib-kubelet-pods-6c49a143\x2d9edc\x2d4cb7\x2d9608\x2d17443dbfdb59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2q2gl.mount: Deactivated successfully. Aug 13 07:13:03.535131 systemd[1]: var-lib-kubelet-pods-6c49a143\x2d9edc\x2d4cb7\x2d9608\x2d17443dbfdb59-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:13:03.540928 kubelet[2490]: I0813 07:13:03.538743 2490 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c49a143-9edc-4cb7-9608-17443dbfdb59-kube-api-access-2q2gl" (OuterVolumeSpecName: "kube-api-access-2q2gl") pod "6c49a143-9edc-4cb7-9608-17443dbfdb59" (UID: "6c49a143-9edc-4cb7-9608-17443dbfdb59"). InnerVolumeSpecName "kube-api-access-2q2gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:13:03.540928 kubelet[2490]: I0813 07:13:03.540877 2490 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6c49a143-9edc-4cb7-9608-17443dbfdb59" (UID: "6c49a143-9edc-4cb7-9608-17443dbfdb59"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:13:03.542817 kubelet[2490]: I0813 07:13:03.542740 2490 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6c49a143-9edc-4cb7-9608-17443dbfdb59" (UID: "6c49a143-9edc-4cb7-9608-17443dbfdb59"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:13:03.613567 kubelet[2490]: I0813 07:13:03.613504 2490 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-backend-key-pair\") on node \"ci-4081.3.5-a-2a2ab8bcea\" DevicePath \"\"" Aug 13 07:13:03.613567 kubelet[2490]: I0813 07:13:03.613555 2490 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q2gl\" (UniqueName: \"kubernetes.io/projected/6c49a143-9edc-4cb7-9608-17443dbfdb59-kube-api-access-2q2gl\") on node \"ci-4081.3.5-a-2a2ab8bcea\" DevicePath \"\"" Aug 13 07:13:03.613567 kubelet[2490]: I0813 07:13:03.613570 2490 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c49a143-9edc-4cb7-9608-17443dbfdb59-whisker-ca-bundle\") on node \"ci-4081.3.5-a-2a2ab8bcea\" DevicePath \"\"" Aug 13 07:13:03.646450 systemd[1]: Removed slice kubepods-besteffort-pod6c49a143_9edc_4cb7_9608_17443dbfdb59.slice - libcontainer container kubepods-besteffort-pod6c49a143_9edc_4cb7_9608_17443dbfdb59.slice. Aug 13 07:13:03.702070 kubelet[2490]: I0813 07:13:03.694383 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xbdpl" podStartSLOduration=2.654616337 podStartE2EDuration="21.685582521s" podCreationTimestamp="2025-08-13 07:12:42 +0000 UTC" firstStartedPulling="2025-08-13 07:12:43.260694589 +0000 UTC m=+23.201153432" lastFinishedPulling="2025-08-13 07:13:02.291660765 +0000 UTC m=+42.232119616" observedRunningTime="2025-08-13 07:13:03.670109304 +0000 UTC m=+43.610568175" watchObservedRunningTime="2025-08-13 07:13:03.685582521 +0000 UTC m=+43.626041373" Aug 13 07:13:03.749398 systemd[1]: Created slice kubepods-besteffort-pod8cb52bbd_d2b4_4a4a_9cbd_3fd9977c2994.slice - libcontainer container kubepods-besteffort-pod8cb52bbd_d2b4_4a4a_9cbd_3fd9977c2994.slice. 
Aug 13 07:13:03.815699 kubelet[2490]: I0813 07:13:03.815640 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994-whisker-backend-key-pair\") pod \"whisker-7c665b6f4-jwqhs\" (UID: \"8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994\") " pod="calico-system/whisker-7c665b6f4-jwqhs" Aug 13 07:13:03.815699 kubelet[2490]: I0813 07:13:03.815709 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994-whisker-ca-bundle\") pod \"whisker-7c665b6f4-jwqhs\" (UID: \"8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994\") " pod="calico-system/whisker-7c665b6f4-jwqhs" Aug 13 07:13:03.816025 kubelet[2490]: I0813 07:13:03.815739 2490 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrwp\" (UniqueName: \"kubernetes.io/projected/8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994-kube-api-access-vfrwp\") pod \"whisker-7c665b6f4-jwqhs\" (UID: \"8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994\") " pod="calico-system/whisker-7c665b6f4-jwqhs" Aug 13 07:13:04.054922 containerd[1469]: time="2025-08-13T07:13:04.054703391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c665b6f4-jwqhs,Uid:8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994,Namespace:calico-system,Attempt:0,}" Aug 13 07:13:04.242012 systemd-networkd[1362]: cali2d08778883d: Link UP Aug 13 07:13:04.242764 systemd-networkd[1362]: cali2d08778883d: Gained carrier Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.103 [INFO][3757] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.117 [INFO][3757] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0 whisker-7c665b6f4- calico-system 8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994 942 0 2025-08-13 07:13:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c665b6f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-a-2a2ab8bcea whisker-7c665b6f4-jwqhs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2d08778883d [] [] }} ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Namespace="calico-system" Pod="whisker-7c665b6f4-jwqhs" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.117 [INFO][3757] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Namespace="calico-system" Pod="whisker-7c665b6f4-jwqhs" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.161 [INFO][3769] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" HandleID="k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.161 [INFO][3769] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" HandleID="k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-2a2ab8bcea", "pod":"whisker-7c665b6f4-jwqhs", "timestamp":"2025-08-13 07:13:04.161064061 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2a2ab8bcea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.161 [INFO][3769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.161 [INFO][3769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.161 [INFO][3769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2a2ab8bcea' Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.171 [INFO][3769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.184 [INFO][3769] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.192 [INFO][3769] ipam/ipam.go 511: Trying affinity for 192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.195 [INFO][3769] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.198 [INFO][3769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.198 [INFO][3769] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.201 [INFO][3769] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721 Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.206 [INFO][3769] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.218 [INFO][3769] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.89.65/26] block=192.168.89.64/26 handle="k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.218 [INFO][3769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.65/26] handle="k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.218 [INFO][3769] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:04.282216 containerd[1469]: 2025-08-13 07:13:04.218 [INFO][3769] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.65/26] IPv6=[] ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" HandleID="k8s-pod-network.92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" Aug 13 07:13:04.283439 containerd[1469]: 2025-08-13 07:13:04.223 [INFO][3757] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Namespace="calico-system" Pod="whisker-7c665b6f4-jwqhs" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0", GenerateName:"whisker-7c665b6f4-", Namespace:"calico-system", SelfLink:"", UID:"8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 13, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c665b6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"", Pod:"whisker-7c665b6f4-jwqhs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d08778883d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:04.283439 containerd[1469]: 2025-08-13 07:13:04.224 [INFO][3757] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.65/32] ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Namespace="calico-system" Pod="whisker-7c665b6f4-jwqhs" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" Aug 13 07:13:04.283439 containerd[1469]: 2025-08-13 07:13:04.224 [INFO][3757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d08778883d ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Namespace="calico-system" Pod="whisker-7c665b6f4-jwqhs" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" Aug 13 07:13:04.283439 containerd[1469]: 2025-08-13 07:13:04.240 [INFO][3757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Namespace="calico-system" Pod="whisker-7c665b6f4-jwqhs" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" Aug 13 07:13:04.283439 containerd[1469]: 2025-08-13 07:13:04.241 [INFO][3757] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Namespace="calico-system" Pod="whisker-7c665b6f4-jwqhs" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0", GenerateName:"whisker-7c665b6f4-", Namespace:"calico-system", SelfLink:"", UID:"8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 13, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c665b6f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721", Pod:"whisker-7c665b6f4-jwqhs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d08778883d", MAC:"36:38:9d:71:fd:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:04.283439 containerd[1469]: 2025-08-13 07:13:04.267 [INFO][3757] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721" Namespace="calico-system" Pod="whisker-7c665b6f4-jwqhs" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--7c665b6f4--jwqhs-eth0" Aug 13 07:13:04.303394 kubelet[2490]: I0813 07:13:04.299230 2490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c49a143-9edc-4cb7-9608-17443dbfdb59" path="/var/lib/kubelet/pods/6c49a143-9edc-4cb7-9608-17443dbfdb59/volumes" Aug 13 07:13:04.352223 containerd[1469]: time="2025-08-13T07:13:04.351892052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:13:04.353605 containerd[1469]: time="2025-08-13T07:13:04.353435459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:13:04.353605 containerd[1469]: time="2025-08-13T07:13:04.353508482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:04.355411 containerd[1469]: time="2025-08-13T07:13:04.355126609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:04.399171 systemd[1]: Started cri-containerd-92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721.scope - libcontainer container 92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721. 
Aug 13 07:13:04.484906 containerd[1469]: time="2025-08-13T07:13:04.484772508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c665b6f4-jwqhs,Uid:8cb52bbd-d2b4-4a4a-9cbd-3fd9977c2994,Namespace:calico-system,Attempt:0,} returns sandbox id \"92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721\"" Aug 13 07:13:04.490209 containerd[1469]: time="2025-08-13T07:13:04.489648632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:13:04.669387 kubelet[2490]: I0813 07:13:04.669307 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:13:06.142411 containerd[1469]: time="2025-08-13T07:13:06.142354363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:06.143917 containerd[1469]: time="2025-08-13T07:13:06.143843498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:13:06.144597 containerd[1469]: time="2025-08-13T07:13:06.144563449Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:06.148101 containerd[1469]: time="2025-08-13T07:13:06.148042398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:06.152493 containerd[1469]: time="2025-08-13T07:13:06.152438342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.662731155s" Aug 13 07:13:06.152493 containerd[1469]: time="2025-08-13T07:13:06.152486497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:13:06.158872 containerd[1469]: time="2025-08-13T07:13:06.158807477Z" level=info msg="CreateContainer within sandbox \"92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:13:06.169731 systemd-networkd[1362]: cali2d08778883d: Gained IPv6LL Aug 13 07:13:06.189540 containerd[1469]: time="2025-08-13T07:13:06.189475055Z" level=info msg="CreateContainer within sandbox \"92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8b88b0d45d755f34266e0758b8d0c2296e9cf0760e59e67b7f70669737757cfd\"" Aug 13 07:13:06.193343 containerd[1469]: time="2025-08-13T07:13:06.191997324Z" level=info msg="StartContainer for \"8b88b0d45d755f34266e0758b8d0c2296e9cf0760e59e67b7f70669737757cfd\"" Aug 13 07:13:06.196061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621190524.mount: Deactivated successfully. Aug 13 07:13:06.278443 systemd[1]: Started cri-containerd-8b88b0d45d755f34266e0758b8d0c2296e9cf0760e59e67b7f70669737757cfd.scope - libcontainer container 8b88b0d45d755f34266e0758b8d0c2296e9cf0760e59e67b7f70669737757cfd. 
Aug 13 07:13:06.289820 containerd[1469]: time="2025-08-13T07:13:06.289717494Z" level=info msg="StopPodSandbox for \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\"" Aug 13 07:13:06.411390 containerd[1469]: time="2025-08-13T07:13:06.411031749Z" level=info msg="StartContainer for \"8b88b0d45d755f34266e0758b8d0c2296e9cf0760e59e67b7f70669737757cfd\" returns successfully" Aug 13 07:13:06.414926 containerd[1469]: time="2025-08-13T07:13:06.414765080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.386 [INFO][3972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.387 [INFO][3972] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" iface="eth0" netns="/var/run/netns/cni-aabadaeb-2054-5d8c-64c5-1e928b7a769e" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.387 [INFO][3972] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" iface="eth0" netns="/var/run/netns/cni-aabadaeb-2054-5d8c-64c5-1e928b7a769e" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.389 [INFO][3972] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" iface="eth0" netns="/var/run/netns/cni-aabadaeb-2054-5d8c-64c5-1e928b7a769e" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.389 [INFO][3972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.389 [INFO][3972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.448 [INFO][3980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.448 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.448 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.464 [WARNING][3980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.464 [INFO][3980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.467 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:06.473243 containerd[1469]: 2025-08-13 07:13:06.470 [INFO][3972] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:06.477202 containerd[1469]: time="2025-08-13T07:13:06.475329432Z" level=info msg="TearDown network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\" successfully" Aug 13 07:13:06.477202 containerd[1469]: time="2025-08-13T07:13:06.475382183Z" level=info msg="StopPodSandbox for \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\" returns successfully" Aug 13 07:13:06.478240 kubelet[2490]: E0813 07:13:06.477631 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:06.478755 containerd[1469]: time="2025-08-13T07:13:06.478503481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gpfnk,Uid:7e60244f-7e5f-41c3-a2a5-c9680ee5ed60,Namespace:kube-system,Attempt:1,}" Aug 13 07:13:06.482676 systemd[1]: run-netns-cni\x2daabadaeb\x2d2054\x2d5d8c\x2d64c5\x2d1e928b7a769e.mount: Deactivated successfully. 
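The mount unit cleaned up above, run-netns-cni\x2daabadaeb\x2d2054\x2d5d8c\x2d64c5\x2d1e928b7a769e.mount, is the systemd-escaped form of the netns path /run/netns/cni-aabadaeb-2054-5d8c-64c5-1e928b7a769e: slashes become dashes, and bytes outside [A-Za-z0-9:_.] are hex-escaped, which is why every literal dash appears as \x2d. A minimal Go sketch approximating `systemd-escape --path` (not systemd's canonical implementation; it omits corner cases such as a leading dot):

    // escape.go: approximates systemd path escaping for unit names.
    package main

    import (
        "fmt"
        "strings"
    )

    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-') // path separators become dashes
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // e.g. '-' becomes \x2d
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/run/netns/cni-aabadaeb-2054-5d8c-64c5-1e928b7a769e") + ".mount")
        // run-netns-cni\x2daabadaeb\x2d2054\x2d5d8c\x2d64c5\x2d1e928b7a769e.mount
    }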
Aug 13 07:13:06.667766 systemd-networkd[1362]: cali523df21e401: Link UP Aug 13 07:13:06.672101 systemd-networkd[1362]: cali523df21e401: Gained carrier Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.532 [INFO][4001] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.548 [INFO][4001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0 coredns-7c65d6cfc9- kube-system 7e60244f-7e5f-41c3-a2a5-c9680ee5ed60 956 0 2025-08-13 07:12:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-2a2ab8bcea coredns-7c65d6cfc9-gpfnk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali523df21e401 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpfnk" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.548 [INFO][4001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpfnk" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.583 [INFO][4012] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" HandleID="k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.584 [INFO][4012] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" HandleID="k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-2a2ab8bcea", "pod":"coredns-7c65d6cfc9-gpfnk", "timestamp":"2025-08-13 07:13:06.583820284 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2a2ab8bcea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.584 [INFO][4012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.584 [INFO][4012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.584 [INFO][4012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2a2ab8bcea' Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.594 [INFO][4012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.601 [INFO][4012] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.636 [INFO][4012] ipam/ipam.go 511: Trying affinity for 192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.640 [INFO][4012] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.644 [INFO][4012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.644 [INFO][4012] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.646 [INFO][4012] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.652 [INFO][4012] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.660 [INFO][4012] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.89.66/26] block=192.168.89.64/26 handle="k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.660 [INFO][4012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.66/26] handle="k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.660 [INFO][4012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:13:06.710577 containerd[1469]: 2025-08-13 07:13:06.660 [INFO][4012] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.66/26] IPv6=[] ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" HandleID="k8s-pod-network.5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.713063 containerd[1469]: 2025-08-13 07:13:06.663 [INFO][4001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpfnk" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e60244f-7e5f-41c3-a2a5-c9680ee5ed60", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"", Pod:"coredns-7c65d6cfc9-gpfnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523df21e401", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:06.713063 containerd[1469]: 2025-08-13 07:13:06.663 [INFO][4001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.66/32] ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpfnk" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.713063 containerd[1469]: 2025-08-13 07:13:06.664 [INFO][4001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali523df21e401 ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpfnk" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.713063 containerd[1469]: 2025-08-13 07:13:06.672 [INFO][4001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-gpfnk" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.713063 containerd[1469]: 2025-08-13 07:13:06.676 [INFO][4001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpfnk" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e60244f-7e5f-41c3-a2a5-c9680ee5ed60", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc", Pod:"coredns-7c65d6cfc9-gpfnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523df21e401", MAC:"32:90:6b:75:36:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:06.713063 containerd[1469]: 2025-08-13 07:13:06.706 [INFO][4001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gpfnk" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:06.739916 containerd[1469]: time="2025-08-13T07:13:06.739743323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:13:06.740396 containerd[1469]: time="2025-08-13T07:13:06.740337468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:13:06.740589 containerd[1469]: time="2025-08-13T07:13:06.740515428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:06.740886 containerd[1469]: time="2025-08-13T07:13:06.740755034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:06.762499 systemd[1]: Started cri-containerd-5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc.scope - libcontainer container 5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc. Aug 13 07:13:06.830856 kubelet[2490]: I0813 07:13:06.830721 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:13:06.835405 containerd[1469]: time="2025-08-13T07:13:06.835342137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gpfnk,Uid:7e60244f-7e5f-41c3-a2a5-c9680ee5ed60,Namespace:kube-system,Attempt:1,} returns sandbox id \"5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc\"" Aug 13 07:13:06.837165 kubelet[2490]: E0813 07:13:06.836470 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:06.842683 containerd[1469]: time="2025-08-13T07:13:06.842469873Z" level=info msg="CreateContainer within sandbox \"5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:13:06.874636 containerd[1469]: time="2025-08-13T07:13:06.874575840Z" level=info msg="CreateContainer within sandbox \"5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0733457af164409581af58a965569802943d57348ad5b1c992565b682c34207a\"" Aug 13 07:13:06.876307 containerd[1469]: time="2025-08-13T07:13:06.875609985Z" level=info msg="StartContainer for \"0733457af164409581af58a965569802943d57348ad5b1c992565b682c34207a\"" Aug 13 07:13:06.922457 systemd[1]: Started cri-containerd-0733457af164409581af58a965569802943d57348ad5b1c992565b682c34207a.scope - libcontainer container 0733457af164409581af58a965569802943d57348ad5b1c992565b682c34207a. Aug 13 07:13:06.992298 containerd[1469]: time="2025-08-13T07:13:06.991893999Z" level=info msg="StartContainer for \"0733457af164409581af58a965569802943d57348ad5b1c992565b682c34207a\" returns successfully" Aug 13 07:13:07.287531 containerd[1469]: time="2025-08-13T07:13:07.286018663Z" level=info msg="StopPodSandbox for \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\"" Aug 13 07:13:07.287531 containerd[1469]: time="2025-08-13T07:13:07.286086056Z" level=info msg="StopPodSandbox for \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\"" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.419 [INFO][4178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.419 [INFO][4178] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" iface="eth0" netns="/var/run/netns/cni-09a048a9-3f09-5e14-d3d9-c0914fe1eeb8" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.420 [INFO][4178] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" iface="eth0" netns="/var/run/netns/cni-09a048a9-3f09-5e14-d3d9-c0914fe1eeb8" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.422 [INFO][4178] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" iface="eth0" netns="/var/run/netns/cni-09a048a9-3f09-5e14-d3d9-c0914fe1eeb8" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.422 [INFO][4178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.422 [INFO][4178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.512 [INFO][4198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.514 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.514 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.524 [WARNING][4198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.524 [INFO][4198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.527 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:07.539325 containerd[1469]: 2025-08-13 07:13:07.530 [INFO][4178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:07.539325 containerd[1469]: time="2025-08-13T07:13:07.538933543Z" level=info msg="TearDown network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\" successfully" Aug 13 07:13:07.539325 containerd[1469]: time="2025-08-13T07:13:07.538977296Z" level=info msg="StopPodSandbox for \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\" returns successfully" Aug 13 07:13:07.543830 containerd[1469]: time="2025-08-13T07:13:07.543532449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bc87ff86-z99t9,Uid:76bc530d-8815-4ca7-9b4b-bf84496266e1,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:13:07.547584 systemd[1]: run-netns-cni\x2d09a048a9\x2d3f09\x2d5e14\x2dd3d9\x2dc0914fe1eeb8.mount: Deactivated successfully. Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.418 [INFO][4171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.419 [INFO][4171] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" iface="eth0" netns="/var/run/netns/cni-d46c62b1-dce2-b7bc-f2e7-dc4887a08276" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.420 [INFO][4171] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" iface="eth0" netns="/var/run/netns/cni-d46c62b1-dce2-b7bc-f2e7-dc4887a08276" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.423 [INFO][4171] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" iface="eth0" netns="/var/run/netns/cni-d46c62b1-dce2-b7bc-f2e7-dc4887a08276" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.424 [INFO][4171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.425 [INFO][4171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.513 [INFO][4200] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.514 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.527 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.550 [WARNING][4200] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.550 [INFO][4200] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.554 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:07.568550 containerd[1469]: 2025-08-13 07:13:07.560 [INFO][4171] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:07.569707 containerd[1469]: time="2025-08-13T07:13:07.568736673Z" level=info msg="TearDown network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\" successfully" Aug 13 07:13:07.569707 containerd[1469]: time="2025-08-13T07:13:07.569219856Z" level=info msg="StopPodSandbox for \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\" returns successfully" Aug 13 07:13:07.573031 containerd[1469]: time="2025-08-13T07:13:07.572843452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bc87ff86-87hxl,Uid:2d1571e3-4e3e-4f96-8193-1271f6b023c0,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:13:07.589978 systemd[1]: run-netns-cni\x2dd46c62b1\x2ddce2\x2db7bc\x2df2e7\x2ddc4887a08276.mount: Deactivated successfully. Aug 13 07:13:07.708312 kubelet[2490]: E0813 07:13:07.706134 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:07.773378 kubelet[2490]: I0813 07:13:07.771120 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gpfnk" podStartSLOduration=42.771097267 podStartE2EDuration="42.771097267s" podCreationTimestamp="2025-08-13 07:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:13:07.749608223 +0000 UTC m=+47.690067070" watchObservedRunningTime="2025-08-13 07:13:07.771097267 +0000 UTC m=+47.711556120" Aug 13 07:13:07.956367 systemd-networkd[1362]: cali523df21e401: Gained IPv6LL Aug 13 07:13:07.984417 systemd-networkd[1362]: calif426492695e: Link UP Aug 13 07:13:07.986386 systemd-networkd[1362]: calif426492695e: Gained carrier Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.655 [INFO][4213] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.698 [INFO][4213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0 calico-apiserver-8bc87ff86- calico-apiserver 76bc530d-8815-4ca7-9b4b-bf84496266e1 972 0 2025-08-13 07:12:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8bc87ff86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-2a2ab8bcea calico-apiserver-8bc87ff86-z99t9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif426492695e [] [] }} ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-z99t9" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.698 [INFO][4213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-z99t9" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.822 
[INFO][4238] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" HandleID="k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.824 [INFO][4238] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" HandleID="k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001256c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-2a2ab8bcea", "pod":"calico-apiserver-8bc87ff86-z99t9", "timestamp":"2025-08-13 07:13:07.822615694 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2a2ab8bcea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.824 [INFO][4238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.824 [INFO][4238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.824 [INFO][4238] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2a2ab8bcea' Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.835 [INFO][4238] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.858 [INFO][4238] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.883 [INFO][4238] ipam/ipam.go 511: Trying affinity for 192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.894 [INFO][4238] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.907 [INFO][4238] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.907 [INFO][4238] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.918 [INFO][4238] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62 Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.935 [INFO][4238] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.960 [INFO][4238] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.89.67/26] block=192.168.89.64/26 
handle="k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.960 [INFO][4238] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.67/26] handle="k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.962 [INFO][4238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:08.036234 containerd[1469]: 2025-08-13 07:13:07.962 [INFO][4238] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.67/26] IPv6=[] ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" HandleID="k8s-pod-network.94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:08.037414 containerd[1469]: 2025-08-13 07:13:07.969 [INFO][4213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-z99t9" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0", GenerateName:"calico-apiserver-8bc87ff86-", Namespace:"calico-apiserver", SelfLink:"", UID:"76bc530d-8815-4ca7-9b4b-bf84496266e1", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bc87ff86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"", Pod:"calico-apiserver-8bc87ff86-z99t9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif426492695e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:08.037414 containerd[1469]: 2025-08-13 07:13:07.969 [INFO][4213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.67/32] ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-z99t9" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:08.037414 containerd[1469]: 2025-08-13 07:13:07.969 [INFO][4213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif426492695e ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-z99t9" 
WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:08.037414 containerd[1469]: 2025-08-13 07:13:07.984 [INFO][4213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-z99t9" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:08.037414 containerd[1469]: 2025-08-13 07:13:07.992 [INFO][4213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-z99t9" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0", GenerateName:"calico-apiserver-8bc87ff86-", Namespace:"calico-apiserver", SelfLink:"", UID:"76bc530d-8815-4ca7-9b4b-bf84496266e1", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bc87ff86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62", Pod:"calico-apiserver-8bc87ff86-z99t9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif426492695e", MAC:"02:65:43:56:8f:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:08.037414 containerd[1469]: 2025-08-13 07:13:08.025 [INFO][4213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-z99t9" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:08.087210 containerd[1469]: time="2025-08-13T07:13:08.086644945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:13:08.087210 containerd[1469]: time="2025-08-13T07:13:08.086824325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:13:08.087210 containerd[1469]: time="2025-08-13T07:13:08.086868703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:08.087210 containerd[1469]: time="2025-08-13T07:13:08.087162649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:08.105143 systemd-networkd[1362]: calia9b8398d2af: Link UP Aug 13 07:13:08.109475 systemd-networkd[1362]: calia9b8398d2af: Gained carrier Aug 13 07:13:08.140541 systemd[1]: Started cri-containerd-94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62.scope - libcontainer container 94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62. Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.784 [INFO][4226] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.817 [INFO][4226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0 calico-apiserver-8bc87ff86- calico-apiserver 2d1571e3-4e3e-4f96-8193-1271f6b023c0 971 0 2025-08-13 07:12:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8bc87ff86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-2a2ab8bcea calico-apiserver-8bc87ff86-87hxl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia9b8398d2af [] [] }} ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-87hxl" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.817 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-87hxl" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.921 [INFO][4248] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" HandleID="k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.928 [INFO][4248] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" HandleID="k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-2a2ab8bcea", "pod":"calico-apiserver-8bc87ff86-87hxl", "timestamp":"2025-08-13 07:13:07.921097672 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2a2ab8bcea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.928 
[INFO][4248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.961 [INFO][4248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.961 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2a2ab8bcea' Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:07.971 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.009 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.032 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.039 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.051 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.051 [INFO][4248] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.055 [INFO][4248] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53 Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.067 [INFO][4248] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.094 [INFO][4248] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.89.68/26] block=192.168.89.64/26 handle="k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.094 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.68/26] handle="k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.094 [INFO][4248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:13:08.146376 containerd[1469]: 2025-08-13 07:13:08.094 [INFO][4248] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.68/26] IPv6=[] ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" HandleID="k8s-pod-network.3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:08.147866 containerd[1469]: 2025-08-13 07:13:08.098 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-87hxl" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0", GenerateName:"calico-apiserver-8bc87ff86-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d1571e3-4e3e-4f96-8193-1271f6b023c0", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bc87ff86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"", Pod:"calico-apiserver-8bc87ff86-87hxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9b8398d2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:08.147866 containerd[1469]: 2025-08-13 07:13:08.099 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.68/32] ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-87hxl" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:08.147866 containerd[1469]: 2025-08-13 07:13:08.099 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9b8398d2af ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-87hxl" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:08.147866 containerd[1469]: 2025-08-13 07:13:08.112 [INFO][4226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-87hxl" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:08.147866 containerd[1469]: 2025-08-13 07:13:08.115 [INFO][4226] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-87hxl" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0", GenerateName:"calico-apiserver-8bc87ff86-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d1571e3-4e3e-4f96-8193-1271f6b023c0", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bc87ff86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53", Pod:"calico-apiserver-8bc87ff86-87hxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9b8398d2af", MAC:"ca:b8:87:9c:0c:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:08.147866 containerd[1469]: 2025-08-13 07:13:08.139 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53" Namespace="calico-apiserver" Pod="calico-apiserver-8bc87ff86-87hxl" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:08.192403 containerd[1469]: time="2025-08-13T07:13:08.189934937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:13:08.192403 containerd[1469]: time="2025-08-13T07:13:08.190003476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:13:08.192403 containerd[1469]: time="2025-08-13T07:13:08.190031788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:08.192403 containerd[1469]: time="2025-08-13T07:13:08.190135300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:08.240742 systemd[1]: run-containerd-runc-k8s.io-3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53-runc.PNOVh3.mount: Deactivated successfully. Aug 13 07:13:08.249530 systemd[1]: Started cri-containerd-3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53.scope - libcontainer container 3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53. 
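The kubelet pod_startup_latency_tracker entry earlier in this log reports podStartSLOduration=42.771097267 for coredns-7c65d6cfc9-gpfnk; with no image pull on the critical path (both pull timestamps in that entry are the zero time), the figure is simply observedRunningTime minus podCreationTimestamp. A minimal Go sketch of the subtraction, with both timestamps copied from that line:

    // slo.go: reproduces the podStartSLOduration arithmetic. Illustrative only.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(v string) time.Time {
        // Go's reference-time layout; fractional seconds in the input
        // are accepted even though the layout omits them.
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-08-13 07:12:25 +0000 UTC")
        observed := mustParse("2025-08-13 07:13:07.771097267 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 42.771097267s
    }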
Aug 13 07:13:08.296195 containerd[1469]: time="2025-08-13T07:13:08.295845692Z" level=info msg="StopPodSandbox for \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\"" Aug 13 07:13:08.336086 containerd[1469]: time="2025-08-13T07:13:08.336042006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bc87ff86-z99t9,Uid:76bc530d-8815-4ca7-9b4b-bf84496266e1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62\"" Aug 13 07:13:08.394449 containerd[1469]: time="2025-08-13T07:13:08.394301347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bc87ff86-87hxl,Uid:2d1571e3-4e3e-4f96-8193-1271f6b023c0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53\"" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.434 [INFO][4359] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.436 [INFO][4359] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" iface="eth0" netns="/var/run/netns/cni-7ba1dd6b-4178-1332-0ea1-5df179152a96" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.437 [INFO][4359] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" iface="eth0" netns="/var/run/netns/cni-7ba1dd6b-4178-1332-0ea1-5df179152a96" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.437 [INFO][4359] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" iface="eth0" netns="/var/run/netns/cni-7ba1dd6b-4178-1332-0ea1-5df179152a96" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.437 [INFO][4359] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.437 [INFO][4359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.474 [INFO][4374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.475 [INFO][4374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.475 [INFO][4374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.484 [WARNING][4374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.484 [INFO][4374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.487 [INFO][4374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:08.492321 containerd[1469]: 2025-08-13 07:13:08.489 [INFO][4359] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:08.500950 containerd[1469]: time="2025-08-13T07:13:08.494140403Z" level=info msg="TearDown network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\" successfully" Aug 13 07:13:08.500950 containerd[1469]: time="2025-08-13T07:13:08.494214943Z" level=info msg="StopPodSandbox for \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\" returns successfully" Aug 13 07:13:08.504808 systemd[1]: run-netns-cni\x2d7ba1dd6b\x2d4178\x2d1332\x2d0ea1\x2d5df179152a96.mount: Deactivated successfully. Aug 13 07:13:08.515253 containerd[1469]: time="2025-08-13T07:13:08.514462100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d686c46f5-79swt,Uid:60505575-c00e-488e-b314-2e2a9dc9b111,Namespace:calico-system,Attempt:1,}" Aug 13 07:13:08.778019 kubelet[2490]: E0813 07:13:08.776941 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:08.915548 systemd-networkd[1362]: cali0943ae18732: Link UP Aug 13 07:13:08.922306 systemd-networkd[1362]: cali0943ae18732: Gained carrier Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.615 [INFO][4387] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.643 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0 calico-kube-controllers-6d686c46f5- calico-system 60505575-c00e-488e-b314-2e2a9dc9b111 995 0 2025-08-13 07:12:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d686c46f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-a-2a2ab8bcea calico-kube-controllers-6d686c46f5-79swt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0943ae18732 [] [] }} ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Namespace="calico-system" Pod="calico-kube-controllers-6d686c46f5-79swt" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.643 [INFO][4387] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Namespace="calico-system" Pod="calico-kube-controllers-6d686c46f5-79swt" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.758 [INFO][4402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" HandleID="k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.758 [INFO][4402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" HandleID="k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a46e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-2a2ab8bcea", "pod":"calico-kube-controllers-6d686c46f5-79swt", "timestamp":"2025-08-13 07:13:08.758329677 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2a2ab8bcea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.758 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.758 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.759 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2a2ab8bcea' Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.783 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.802 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.839 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.847 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.857 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.857 [INFO][4402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.866 [INFO][4402] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7 Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.874 [INFO][4402] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.890 [INFO][4402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.89.69/26] block=192.168.89.64/26 handle="k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.890 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.69/26] handle="k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.890 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
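
The assignment walk just logged — look up the host's affinities, confirm affinity for 192.168.89.64/26, load the block, claim the next free address (.69), and write the block back — can be approximated with stdlib net types. A sketch under simplified assumptions (first-free linear scan, no reserved addresses; not Calico's implementation):

package main

import (
	"fmt"
	"net"
)

// nextFree returns the lowest address in the block not yet assigned.
func nextFree(block *net.IPNet, used map[string]bool) net.IP {
	ip := block.IP.Mask(block.Mask)
	for ; block.Contains(ip); ip = next(ip) {
		if !used[ip.String()] {
			return ip
		}
	}
	return nil // block exhausted
}

// next increments an IP address, carrying across octets.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.89.64/26")
	// Pretend .64 through .68 were claimed by earlier sandboxes in this log.
	used := map[string]bool{}
	for _, last := range []int{64, 65, 66, 67, 68} {
		used[fmt.Sprintf("192.168.89.%d", last)] = true
	}
	fmt.Println(nextFree(block, used)) // 192.168.89.69, matching the claim above
}
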
Aug 13 07:13:08.965555 containerd[1469]: 2025-08-13 07:13:08.890 [INFO][4402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.69/26] IPv6=[] ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" HandleID="k8s-pod-network.ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.967891 containerd[1469]: 2025-08-13 07:13:08.898 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Namespace="calico-system" Pod="calico-kube-controllers-6d686c46f5-79swt" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0", GenerateName:"calico-kube-controllers-6d686c46f5-", Namespace:"calico-system", SelfLink:"", UID:"60505575-c00e-488e-b314-2e2a9dc9b111", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d686c46f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"", Pod:"calico-kube-controllers-6d686c46f5-79swt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0943ae18732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:08.967891 containerd[1469]: 2025-08-13 07:13:08.900 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.69/32] ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Namespace="calico-system" Pod="calico-kube-controllers-6d686c46f5-79swt" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.967891 containerd[1469]: 2025-08-13 07:13:08.900 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0943ae18732 ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Namespace="calico-system" Pod="calico-kube-controllers-6d686c46f5-79swt" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.967891 containerd[1469]: 2025-08-13 07:13:08.928 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Namespace="calico-system" Pod="calico-kube-controllers-6d686c46f5-79swt" 
WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:08.967891 containerd[1469]: 2025-08-13 07:13:08.931 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Namespace="calico-system" Pod="calico-kube-controllers-6d686c46f5-79swt" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0", GenerateName:"calico-kube-controllers-6d686c46f5-", Namespace:"calico-system", SelfLink:"", UID:"60505575-c00e-488e-b314-2e2a9dc9b111", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d686c46f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7", Pod:"calico-kube-controllers-6d686c46f5-79swt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0943ae18732", MAC:"76:d2:0c:48:05:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:08.967891 containerd[1469]: 2025-08-13 07:13:08.954 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7" Namespace="calico-system" Pod="calico-kube-controllers-6d686c46f5-79swt" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:09.043105 containerd[1469]: time="2025-08-13T07:13:09.042840246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:13:09.043105 containerd[1469]: time="2025-08-13T07:13:09.042978093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:13:09.043541 containerd[1469]: time="2025-08-13T07:13:09.042998265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:09.045733 containerd[1469]: time="2025-08-13T07:13:09.045218454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:09.120501 systemd[1]: Started cri-containerd-ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7.scope - libcontainer container ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7. Aug 13 07:13:09.288232 containerd[1469]: time="2025-08-13T07:13:09.287564603Z" level=info msg="StopPodSandbox for \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\"" Aug 13 07:13:09.304537 containerd[1469]: time="2025-08-13T07:13:09.294695181Z" level=info msg="StopPodSandbox for \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\"" Aug 13 07:13:09.427838 systemd-networkd[1362]: calif426492695e: Gained IPv6LL Aug 13 07:13:09.512037 containerd[1469]: time="2025-08-13T07:13:09.510987404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d686c46f5-79swt,Uid:60505575-c00e-488e-b314-2e2a9dc9b111,Namespace:calico-system,Attempt:1,} returns sandbox id \"ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7\"" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.504 [INFO][4489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.507 [INFO][4489] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" iface="eth0" netns="/var/run/netns/cni-b571ec2b-8ba0-6d04-7e05-489a1d7e21e4" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.514 [INFO][4489] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" iface="eth0" netns="/var/run/netns/cni-b571ec2b-8ba0-6d04-7e05-489a1d7e21e4" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.516 [INFO][4489] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" iface="eth0" netns="/var/run/netns/cni-b571ec2b-8ba0-6d04-7e05-489a1d7e21e4" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.516 [INFO][4489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.516 [INFO][4489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.648 [INFO][4510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.648 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.648 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.669 [WARNING][4510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.669 [INFO][4510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.681 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:09.704264 containerd[1469]: 2025-08-13 07:13:09.692 [INFO][4489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:09.708634 containerd[1469]: time="2025-08-13T07:13:09.706273732Z" level=info msg="TearDown network for sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\" successfully" Aug 13 07:13:09.708634 containerd[1469]: time="2025-08-13T07:13:09.706319793Z" level=info msg="StopPodSandbox for \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\" returns successfully" Aug 13 07:13:09.712211 containerd[1469]: time="2025-08-13T07:13:09.711868477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-svvdd,Uid:bac3cbef-457c-46f3-9f40-8d0637f6d8c5,Namespace:calico-system,Attempt:1,}" Aug 13 07:13:09.715591 systemd[1]: run-netns-cni\x2db571ec2b\x2d8ba0\x2d6d04\x2d7e05\x2d489a1d7e21e4.mount: Deactivated successfully. Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.542 [INFO][4488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.543 [INFO][4488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" iface="eth0" netns="/var/run/netns/cni-3e0e5c0b-0026-8e70-8ddd-6bd2dd346a4c" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.544 [INFO][4488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" iface="eth0" netns="/var/run/netns/cni-3e0e5c0b-0026-8e70-8ddd-6bd2dd346a4c" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.546 [INFO][4488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" iface="eth0" netns="/var/run/netns/cni-3e0e5c0b-0026-8e70-8ddd-6bd2dd346a4c" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.546 [INFO][4488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.546 [INFO][4488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.716 [INFO][4515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.716 [INFO][4515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.716 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.742 [WARNING][4515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.746 [INFO][4515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.761 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:09.790225 containerd[1469]: 2025-08-13 07:13:09.772 [INFO][4488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:09.794232 containerd[1469]: time="2025-08-13T07:13:09.793931971Z" level=info msg="TearDown network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\" successfully" Aug 13 07:13:09.794232 containerd[1469]: time="2025-08-13T07:13:09.793979441Z" level=info msg="StopPodSandbox for \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\" returns successfully" Aug 13 07:13:09.795209 kubelet[2490]: E0813 07:13:09.795147 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:09.805395 systemd[1]: run-netns-cni\x2d3e0e5c0b\x2d0026\x2d8e70\x2d8ddd\x2d6bd2dd346a4c.mount: Deactivated successfully. 
Aug 13 07:13:09.806099 containerd[1469]: time="2025-08-13T07:13:09.805150999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rjt66,Uid:bc80eeef-1a82-4810-afd6-bf52e91a865f,Namespace:kube-system,Attempt:1,}" Aug 13 07:13:10.132479 systemd-networkd[1362]: calia9b8398d2af: Gained IPv6LL Aug 13 07:13:10.261248 kubelet[2490]: I0813 07:13:10.260502 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:13:10.261248 kubelet[2490]: E0813 07:13:10.261056 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:10.267225 systemd-networkd[1362]: calide06c13e58e: Link UP Aug 13 07:13:10.270551 systemd-networkd[1362]: calide06c13e58e: Gained carrier Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:09.919 [INFO][4526] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:09.960 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0 goldmane-58fd7646b9- calico-system bac3cbef-457c-46f3-9f40-8d0637f6d8c5 1003 0 2025-08-13 07:12:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-a-2a2ab8bcea goldmane-58fd7646b9-svvdd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calide06c13e58e [] [] }} ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Namespace="calico-system" Pod="goldmane-58fd7646b9-svvdd" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:09.960 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Namespace="calico-system" Pod="goldmane-58fd7646b9-svvdd" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.112 [INFO][4556] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" HandleID="k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.112 [INFO][4556] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" HandleID="k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003276c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-2a2ab8bcea", "pod":"goldmane-58fd7646b9-svvdd", "timestamp":"2025-08-13 07:13:10.112354281 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2a2ab8bcea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
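
The kubelet dns.go:153 error that keeps repeating is its resolv.conf cap: at most three nameservers are applied and the rest are dropped — and the applied line here even carries a duplicate (67.207.67.3 twice). A toy reproduction of the truncation, with a made-up resolv.conf (this is an illustration, not kubelet's code):

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the classic resolv.conf limit kubelet enforces

// applyLimit parses nameserver lines and keeps only the first three,
// logging the applied set the way the kubelet entries above do.
func applyLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(servers, " "))
	}
	return servers
}

func main() {
	// Hypothetical node resolv.conf with four entries, one duplicated.
	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 10.0.0.2\n"
	fmt.Println(applyLimit(conf))
}
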
Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.112 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.112 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.112 [INFO][4556] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2a2ab8bcea' Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.127 [INFO][4556] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.138 [INFO][4556] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.155 [INFO][4556] ipam/ipam.go 511: Trying affinity for 192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.165 [INFO][4556] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.177 [INFO][4556] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.182 [INFO][4556] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.194 [INFO][4556] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9 Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.208 [INFO][4556] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.234 [INFO][4556] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.89.70/26] block=192.168.89.64/26 handle="k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.234 [INFO][4556] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.70/26] handle="k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.234 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
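
Goldmane's .70 comes from the same affine block that already served .69, and every pod in this section fits inside that one /26. A quick bounds check of the block with the stdlib confirms why:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, _ := net.ParseCIDR("192.168.89.64/26")
	ones, bits := block.Mask.Size()
	// Broadcast-style upper bound: OR the network address with the inverted mask.
	last := make(net.IP, len(block.IP))
	for i := range block.IP {
		last[i] = block.IP[i] | ^block.Mask[i]
	}
	fmt.Printf("%s: %d addresses, %s through %s\n",
		block, 1<<(bits-ones), block.IP, last)
	// 192.168.89.64/26: 64 addresses, 192.168.89.64 through 192.168.89.127
}
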
Aug 13 07:13:10.324743 containerd[1469]: 2025-08-13 07:13:10.238 [INFO][4556] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.70/26] IPv6=[] ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" HandleID="k8s-pod-network.da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:10.330786 containerd[1469]: 2025-08-13 07:13:10.248 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Namespace="calico-system" Pod="goldmane-58fd7646b9-svvdd" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"bac3cbef-457c-46f3-9f40-8d0637f6d8c5", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"", Pod:"goldmane-58fd7646b9-svvdd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide06c13e58e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:10.330786 containerd[1469]: 2025-08-13 07:13:10.249 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.70/32] ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Namespace="calico-system" Pod="goldmane-58fd7646b9-svvdd" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:10.330786 containerd[1469]: 2025-08-13 07:13:10.249 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide06c13e58e ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Namespace="calico-system" Pod="goldmane-58fd7646b9-svvdd" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:10.330786 containerd[1469]: 2025-08-13 07:13:10.269 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Namespace="calico-system" Pod="goldmane-58fd7646b9-svvdd" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:10.330786 containerd[1469]: 2025-08-13 07:13:10.279 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-svvdd" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"bac3cbef-457c-46f3-9f40-8d0637f6d8c5", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9", Pod:"goldmane-58fd7646b9-svvdd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide06c13e58e", MAC:"9a:72:86:96:c8:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:10.330786 containerd[1469]: 2025-08-13 07:13:10.318 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9" Namespace="calico-system" Pod="goldmane-58fd7646b9-svvdd" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:10.429508 systemd-networkd[1362]: cali7d88052877e: Link UP Aug 13 07:13:10.444314 systemd-networkd[1362]: cali7d88052877e: Gained carrier Aug 13 07:13:10.529751 containerd[1469]: time="2025-08-13T07:13:10.527042586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:13:10.529751 containerd[1469]: time="2025-08-13T07:13:10.527110953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:13:10.529751 containerd[1469]: time="2025-08-13T07:13:10.527126640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:10.540658 containerd[1469]: time="2025-08-13T07:13:10.532572350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.032 [INFO][4540] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.080 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0 coredns-7c65d6cfc9- kube-system bc80eeef-1a82-4810-afd6-bf52e91a865f 1005 0 2025-08-13 07:12:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-2a2ab8bcea coredns-7c65d6cfc9-rjt66 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d88052877e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rjt66" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.080 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rjt66" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.231 [INFO][4562] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" HandleID="k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.231 [INFO][4562] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" HandleID="k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-2a2ab8bcea", "pod":"coredns-7c65d6cfc9-rjt66", "timestamp":"2025-08-13 07:13:10.231029908 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2a2ab8bcea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.231 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.234 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.234 [INFO][4562] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2a2ab8bcea' Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.245 [INFO][4562] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.283 [INFO][4562] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.320 [INFO][4562] ipam/ipam.go 511: Trying affinity for 192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.327 [INFO][4562] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.333 [INFO][4562] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.333 [INFO][4562] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.337 [INFO][4562] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3 Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.349 [INFO][4562] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.367 [INFO][4562] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.89.71/26] block=192.168.89.64/26 handle="k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.367 [INFO][4562] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.71/26] handle="k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.367 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
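
Each "Gained IPv6LL" entry in this stretch is a cali* veth acquiring an IPv6 link-local address. Under the classic EUI-64 scheme that address is derived from the interface MAC recorded in the endpoint dumps; networkd may instead use stable-privacy tokens, so this is the conventional derivation rather than necessarily the address chosen here:

package main

import (
	"fmt"
	"net"
)

// linkLocal builds the EUI-64 link-local address fe80::… from a 48-bit MAC:
// flip the universal/local bit of the first octet and splice in ff:fe.
func linkLocal(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	// calide06c13e58e's MAC from the goldmane endpoint dump above.
	mac, _ := net.ParseMAC("9a:72:86:96:c8:22")
	fmt.Println(linkLocal(mac)) // fe80::9872:86ff:fe96:c822
}
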
Aug 13 07:13:10.545832 containerd[1469]: 2025-08-13 07:13:10.368 [INFO][4562] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.71/26] IPv6=[] ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" HandleID="k8s-pod-network.9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:10.549338 containerd[1469]: 2025-08-13 07:13:10.379 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rjt66" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bc80eeef-1a82-4810-afd6-bf52e91a865f", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"", Pod:"coredns-7c65d6cfc9-rjt66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d88052877e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:10.549338 containerd[1469]: 2025-08-13 07:13:10.379 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.71/32] ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rjt66" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:10.549338 containerd[1469]: 2025-08-13 07:13:10.379 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d88052877e ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rjt66" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:10.549338 containerd[1469]: 2025-08-13 07:13:10.461 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-rjt66" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:10.549338 containerd[1469]: 2025-08-13 07:13:10.464 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rjt66" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bc80eeef-1a82-4810-afd6-bf52e91a865f", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3", Pod:"coredns-7c65d6cfc9-rjt66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d88052877e", MAC:"22:5b:50:c2:6b:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:10.549338 containerd[1469]: 2025-08-13 07:13:10.524 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rjt66" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:10.650088 systemd[1]: run-containerd-runc-k8s.io-da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9-runc.on4KmS.mount: Deactivated successfully. Aug 13 07:13:10.672719 systemd[1]: Started cri-containerd-da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9.scope - libcontainer container da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9. Aug 13 07:13:10.700596 containerd[1469]: time="2025-08-13T07:13:10.700116838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:13:10.701904 containerd[1469]: time="2025-08-13T07:13:10.700925201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:13:10.701904 containerd[1469]: time="2025-08-13T07:13:10.700953361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:10.701904 containerd[1469]: time="2025-08-13T07:13:10.701062586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:10.707552 systemd-networkd[1362]: cali0943ae18732: Gained IPv6LL Aug 13 07:13:10.757472 systemd[1]: Started cri-containerd-9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3.scope - libcontainer container 9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3. Aug 13 07:13:10.794393 kubelet[2490]: E0813 07:13:10.794083 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:10.937663 containerd[1469]: time="2025-08-13T07:13:10.937604916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rjt66,Uid:bc80eeef-1a82-4810-afd6-bf52e91a865f,Namespace:kube-system,Attempt:1,} returns sandbox id \"9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3\"" Aug 13 07:13:10.939760 kubelet[2490]: E0813 07:13:10.939724 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:10.948743 containerd[1469]: time="2025-08-13T07:13:10.947881269Z" level=info msg="CreateContainer within sandbox \"9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:13:10.956521 containerd[1469]: time="2025-08-13T07:13:10.956399655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-svvdd,Uid:bac3cbef-457c-46f3-9f40-8d0637f6d8c5,Namespace:calico-system,Attempt:1,} returns sandbox id \"da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9\"" Aug 13 07:13:11.011873 containerd[1469]: time="2025-08-13T07:13:11.011726406Z" level=info msg="CreateContainer within sandbox \"9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59b2b42ea166a946cc7cf7a89e9f92b127ebe28bd7d4bccdfdf4b292a798d405\"" Aug 13 07:13:11.018156 containerd[1469]: time="2025-08-13T07:13:11.017822665Z" level=info msg="StartContainer for \"59b2b42ea166a946cc7cf7a89e9f92b127ebe28bd7d4bccdfdf4b292a798d405\"" Aug 13 07:13:11.146417 systemd[1]: Started cri-containerd-59b2b42ea166a946cc7cf7a89e9f92b127ebe28bd7d4bccdfdf4b292a798d405.scope - libcontainer container 59b2b42ea166a946cc7cf7a89e9f92b127ebe28bd7d4bccdfdf4b292a798d405. Aug 13 07:13:11.263911 containerd[1469]: time="2025-08-13T07:13:11.262802052Z" level=info msg="StartContainer for \"59b2b42ea166a946cc7cf7a89e9f92b127ebe28bd7d4bccdfdf4b292a798d405\" returns successfully" Aug 13 07:13:11.286993 containerd[1469]: time="2025-08-13T07:13:11.286514388Z" level=info msg="StopPodSandbox for \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\"" Aug 13 07:13:11.454486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3478338514.mount: Deactivated successfully. 
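
One readability note on the endpoint dumps: ports are printed as Go hex literals, so coredns's familiar 53 and 9153 appear as Port:0x35 and Port:0x23c1. Decoding them back:

package main

import "fmt"

func main() {
	// Port values exactly as dumped in the coredns WorkloadEndpoint above.
	ports := []struct {
		name string
		val  uint16
	}{
		{"dns (UDP)", 0x35},
		{"dns-tcp (TCP)", 0x35},
		{"metrics (TCP)", 0x23c1},
	}
	for _, p := range ports {
		fmt.Printf("%-14s 0x%-5x = %d\n", p.name, p.val, p.val)
	}
	// dns (UDP)      0x35    = 53
	// dns-tcp (TCP)  0x35    = 53
	// metrics (TCP)  0x23c1  = 9153
}
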
Aug 13 07:13:11.481929 containerd[1469]: time="2025-08-13T07:13:11.481417514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:11.484093 containerd[1469]: time="2025-08-13T07:13:11.483116950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:13:11.486323 containerd[1469]: time="2025-08-13T07:13:11.485117868Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:11.497585 containerd[1469]: time="2025-08-13T07:13:11.495432689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:11.500319 containerd[1469]: time="2025-08-13T07:13:11.496634456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.081711371s" Aug 13 07:13:11.500319 containerd[1469]: time="2025-08-13T07:13:11.500316231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:13:11.510171 containerd[1469]: time="2025-08-13T07:13:11.510111205Z" level=info msg="CreateContainer within sandbox \"92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:13:11.510627 containerd[1469]: time="2025-08-13T07:13:11.510375635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:13:11.540003 containerd[1469]: time="2025-08-13T07:13:11.538532254Z" level=info msg="CreateContainer within sandbox \"92148681e1bbb730c10f3692fdfc30ab04eab002da4904ed73e9baaca09c9721\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f1eb1bf1cfd04d8976b55da8f1b98a671d89581e0d547984a3e82d972c0a1f85\"" Aug 13 07:13:11.544412 containerd[1469]: time="2025-08-13T07:13:11.543376076Z" level=info msg="StartContainer for \"f1eb1bf1cfd04d8976b55da8f1b98a671d89581e0d547984a3e82d972c0a1f85\"" Aug 13 07:13:11.633916 systemd[1]: Started cri-containerd-f1eb1bf1cfd04d8976b55da8f1b98a671d89581e0d547984a3e82d972c0a1f85.scope - libcontainer container f1eb1bf1cfd04d8976b55da8f1b98a671d89581e0d547984a3e82d972c0a1f85. Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.492 [INFO][4736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.493 [INFO][4736] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" iface="eth0" netns="/var/run/netns/cni-ca8d7b94-34d7-f2e1-1809-4325c4f2417c" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.493 [INFO][4736] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" iface="eth0" netns="/var/run/netns/cni-ca8d7b94-34d7-f2e1-1809-4325c4f2417c" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.498 [INFO][4736] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" iface="eth0" netns="/var/run/netns/cni-ca8d7b94-34d7-f2e1-1809-4325c4f2417c" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.498 [INFO][4736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.498 [INFO][4736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.601 [INFO][4752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.601 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.601 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.629 [WARNING][4752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.629 [INFO][4752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.636 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:11.663966 containerd[1469]: 2025-08-13 07:13:11.647 [INFO][4736] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:11.664595 containerd[1469]: time="2025-08-13T07:13:11.664094530Z" level=info msg="TearDown network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\" successfully" Aug 13 07:13:11.664595 containerd[1469]: time="2025-08-13T07:13:11.664133552Z" level=info msg="StopPodSandbox for \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\" returns successfully" Aug 13 07:13:11.667330 containerd[1469]: time="2025-08-13T07:13:11.666804090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5l6lh,Uid:2c51012b-f93a-4143-8128-6925ff4126f6,Namespace:calico-system,Attempt:1,}" Aug 13 07:13:11.731374 systemd-networkd[1362]: cali7d88052877e: Gained IPv6LL Aug 13 07:13:11.824912 containerd[1469]: time="2025-08-13T07:13:11.824309302Z" level=info msg="StartContainer for \"f1eb1bf1cfd04d8976b55da8f1b98a671d89581e0d547984a3e82d972c0a1f85\" returns successfully" Aug 13 07:13:11.849927 kubelet[2490]: E0813 07:13:11.849874 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:11.903754 kubelet[2490]: I0813 07:13:11.903648 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rjt66" podStartSLOduration=46.903617051 podStartE2EDuration="46.903617051s" podCreationTimestamp="2025-08-13 07:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:13:11.901989475 +0000 UTC m=+51.842448336" watchObservedRunningTime="2025-08-13 07:13:11.903617051 +0000 UTC m=+51.844075904" Aug 13 07:13:11.923490 systemd-networkd[1362]: calide06c13e58e: Gained IPv6LL Aug 13 07:13:11.998692 systemd-networkd[1362]: caliec41609144f: Link UP Aug 13 07:13:12.000928 systemd-networkd[1362]: caliec41609144f: Gained carrier Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.718 [INFO][4788] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.751 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0 csi-node-driver- calico-system 2c51012b-f93a-4143-8128-6925ff4126f6 1031 0 2025-08-13 07:12:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-a-2a2ab8bcea csi-node-driver-5l6lh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliec41609144f [] [] }} ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Namespace="calico-system" Pod="csi-node-driver-5l6lh" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.751 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Namespace="calico-system" Pod="csi-node-driver-5l6lh" 
WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.852 [INFO][4808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" HandleID="k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.858 [INFO][4808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" HandleID="k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-2a2ab8bcea", "pod":"csi-node-driver-5l6lh", "timestamp":"2025-08-13 07:13:11.852951533 +0000 UTC"}, Hostname:"ci-4081.3.5-a-2a2ab8bcea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.858 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.858 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.858 [INFO][4808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-2a2ab8bcea' Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.884 [INFO][4808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.897 [INFO][4808] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.922 [INFO][4808] ipam/ipam.go 511: Trying affinity for 192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.929 [INFO][4808] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.941 [INFO][4808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.942 [INFO][4808] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.958 [INFO][4808] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846 Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.970 [INFO][4808] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 
07:13:11.982 [INFO][4808] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.89.72/26] block=192.168.89.64/26 handle="k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.982 [INFO][4808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.72/26] handle="k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" host="ci-4081.3.5-a-2a2ab8bcea" Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.982 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:12.033793 containerd[1469]: 2025-08-13 07:13:11.982 [INFO][4808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.89.72/26] IPv6=[] ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" HandleID="k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:12.035690 containerd[1469]: 2025-08-13 07:13:11.989 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Namespace="calico-system" Pod="csi-node-driver-5l6lh" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c51012b-f93a-4143-8128-6925ff4126f6", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"", Pod:"csi-node-driver-5l6lh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec41609144f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:12.035690 containerd[1469]: 2025-08-13 07:13:11.989 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.72/32] ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Namespace="calico-system" Pod="csi-node-driver-5l6lh" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:12.035690 containerd[1469]: 2025-08-13 07:13:11.989 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec41609144f ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Namespace="calico-system" Pod="csi-node-driver-5l6lh" 
WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:12.035690 containerd[1469]: 2025-08-13 07:13:11.997 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Namespace="calico-system" Pod="csi-node-driver-5l6lh" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:12.035690 containerd[1469]: 2025-08-13 07:13:12.000 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Namespace="calico-system" Pod="csi-node-driver-5l6lh" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c51012b-f93a-4143-8128-6925ff4126f6", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846", Pod:"csi-node-driver-5l6lh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec41609144f", MAC:"32:c9:2c:e0:9a:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:12.035690 containerd[1469]: 2025-08-13 07:13:12.025 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846" Namespace="calico-system" Pod="csi-node-driver-5l6lh" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:12.083420 kernel: bpftool[4869]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:13:12.088998 containerd[1469]: time="2025-08-13T07:13:12.087516552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:13:12.088998 containerd[1469]: time="2025-08-13T07:13:12.087616312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:13:12.088998 containerd[1469]: time="2025-08-13T07:13:12.087634121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:12.089360 containerd[1469]: time="2025-08-13T07:13:12.088967654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:13:12.121519 systemd[1]: Started cri-containerd-169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846.scope - libcontainer container 169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846. Aug 13 07:13:12.167252 containerd[1469]: time="2025-08-13T07:13:12.166335710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5l6lh,Uid:2c51012b-f93a-4143-8128-6925ff4126f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846\"" Aug 13 07:13:12.194474 systemd[1]: run-netns-cni\x2dca8d7b94\x2d34d7\x2df2e1\x2d1809\x2d4325c4f2417c.mount: Deactivated successfully. Aug 13 07:13:12.713389 systemd-networkd[1362]: vxlan.calico: Link UP Aug 13 07:13:12.713398 systemd-networkd[1362]: vxlan.calico: Gained carrier Aug 13 07:13:12.866834 kubelet[2490]: E0813 07:13:12.866564 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:13.206848 systemd-networkd[1362]: caliec41609144f: Gained IPv6LL Aug 13 07:13:13.780234 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Aug 13 07:13:13.871089 kubelet[2490]: E0813 07:13:13.871022 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 07:13:14.821076 systemd[1]: Started sshd@7-137.184.36.62:22-139.178.89.65:43880.service - OpenSSH per-connection server daemon (139.178.89.65:43880). Aug 13 07:13:15.009553 sshd[4993]: Accepted publickey for core from 139.178.89.65 port 43880 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:13:15.015818 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:15.031453 systemd-logind[1446]: New session 8 of user core. Aug 13 07:13:15.037502 systemd[1]: Started session-8.scope - Session 8 of User core. 
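
A note on the ipam/ipam.go sequence a few entries above: the plugin acquires the host-wide IPAM lock, looks up block affinities for host ci-4081.3.5-a-2a2ab8bcea, tries the affine block 192.168.89.64/26, loads it, creates a new handle, and writes the block back to claim 192.168.89.72/26. That is Calico's block-based IPAM: each node owns one or more small blocks and satisfies pod assignments from them locally. Below is a minimal Python sketch of the same flow, assuming a single in-process lock and an in-memory block table in place of Calico's real datastore; it is an illustration, not Calico code.

# Illustrative sketch of block-based IPAM as traced above: per-host /26
# block affinity, a host-wide lock, and first-free assignment. The real
# implementation persists blocks and handles in the Calico datastore.
import ipaddress
import threading

ipam_lock = threading.Lock()   # stands in for the "host-wide IPAM lock"
blocks = {}                    # cidr -> {"host", "allocated", "handles"}

def ensure_block(cidr, host):
    blocks.setdefault(cidr, {"host": host, "allocated": set(), "handles": {}})

def auto_assign(host, handle_id, block_cidr):
    """Assign one IPv4 address from a block affine to `host`."""
    with ipam_lock:                    # "About to acquire host-wide IPAM lock."
        block = blocks[block_cidr]     # "Attempting to load block cidr=..."
        if block["host"] != host:      # "Trying affinity for ..." must hold
            raise RuntimeError("block not affine to this host")
        for ip in ipaddress.ip_network(block_cidr).hosts():
            if ip not in block["allocated"]:
                block["allocated"].add(ip)        # "Writing block in order to claim IPs"
                block["handles"][handle_id] = ip  # "Creating new handle: ..."
                return ip
        raise RuntimeError("block exhausted")     # real code would claim a new block

ensure_block("192.168.89.64/26", "ci-4081.3.5-a-2a2ab8bcea")
print(auto_assign("ci-4081.3.5-a-2a2ab8bcea",
                  "k8s-pod-network.169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846",
                  "192.168.89.64/26"))

Per-host blocks keep most assignments local to the node, so the lock in the trace only serializes CNI requests on the same machine. The sketch picks the first free address, whereas the log shows .72 being claimed, presumably because earlier addresses in the block were already assigned.
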
Aug 13 07:13:15.155258 containerd[1469]: time="2025-08-13T07:13:15.155047462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:15.158915 containerd[1469]: time="2025-08-13T07:13:15.158828182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:13:15.160930 containerd[1469]: time="2025-08-13T07:13:15.159917141Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:15.165638 containerd[1469]: time="2025-08-13T07:13:15.165250033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:15.167071 containerd[1469]: time="2025-08-13T07:13:15.166822243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.656404992s" Aug 13 07:13:15.167071 containerd[1469]: time="2025-08-13T07:13:15.166888519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:13:15.169671 containerd[1469]: time="2025-08-13T07:13:15.169604637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:13:15.174555 containerd[1469]: time="2025-08-13T07:13:15.174074989Z" level=info msg="CreateContainer within sandbox \"94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:13:15.199471 containerd[1469]: time="2025-08-13T07:13:15.199388405Z" level=info msg="CreateContainer within sandbox \"94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"339d4d8eedec702927c30fec6dc87ee71546809b6951702a0a733e0e9c9c8e3f\"" Aug 13 07:13:15.204267 containerd[1469]: time="2025-08-13T07:13:15.202585257Z" level=info msg="StartContainer for \"339d4d8eedec702927c30fec6dc87ee71546809b6951702a0a733e0e9c9c8e3f\"" Aug 13 07:13:15.291612 systemd[1]: Started cri-containerd-339d4d8eedec702927c30fec6dc87ee71546809b6951702a0a733e0e9c9c8e3f.scope - libcontainer container 339d4d8eedec702927c30fec6dc87ee71546809b6951702a0a733e0e9c9c8e3f. 
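
The surrounding entries walk through the CRI lifecycle in order: StopPodSandbox/TearDown for the old csi-node-driver sandbox, RunPodSandbox returning the new sandbox id 169a15f0fd0c…, a transient systemd unit cri-containerd-<sandbox-id>.scope for the shim, then CreateContainer within a sandbox and StartContainer. The following toy in-memory model of those transitions is only a sketch; the real flow is kubelet calling containerd's CRI gRPC API, and the id pairing reuses the sandbox 94c7949c… and container 339d4d8e… from the entries above, whose RunPodSandbox happened before this excerpt.

# Toy model of the sandbox/container state transitions logged above.
# Hypothetical in-memory stand-in; no CRI or systemd interaction happens here.
from dataclasses import dataclass, field

@dataclass
class Sandbox:
    sandbox_id: str
    containers: dict = field(default_factory=dict)  # container id -> state

def run_pod_sandbox(sandbox_id):
    # "RunPodSandbox ... returns sandbox id"; containerd also starts the
    # cri-containerd-<sandbox-id>.scope unit for the shim at this point.
    return Sandbox(sandbox_id)

def create_container(sandbox, container_id):
    # "CreateContainer within sandbox ... returns container id"
    sandbox.containers[container_id] = "CREATED"

def start_container(sandbox, container_id):
    # "StartContainer for ... returns successfully"
    assert sandbox.containers[container_id] == "CREATED"
    sandbox.containers[container_id] = "RUNNING"

sb = run_pod_sandbox("94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62")
create_container(sb, "339d4d8eedec702927c30fec6dc87ee71546809b6951702a0a733e0e9c9c8e3f")
start_container(sb, "339d4d8eedec702927c30fec6dc87ee71546809b6951702a0a733e0e9c9c8e3f")
print(sb.containers)
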
Aug 13 07:13:15.433233 containerd[1469]: time="2025-08-13T07:13:15.433157123Z" level=info msg="StartContainer for \"339d4d8eedec702927c30fec6dc87ee71546809b6951702a0a733e0e9c9c8e3f\" returns successfully" Aug 13 07:13:15.830367 containerd[1469]: time="2025-08-13T07:13:15.830231572Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:15.832099 containerd[1469]: time="2025-08-13T07:13:15.831436082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:13:15.838675 containerd[1469]: time="2025-08-13T07:13:15.838082698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 668.409239ms" Aug 13 07:13:15.838675 containerd[1469]: time="2025-08-13T07:13:15.838154034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:13:15.840293 containerd[1469]: time="2025-08-13T07:13:15.840126099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:13:15.846103 containerd[1469]: time="2025-08-13T07:13:15.845680319Z" level=info msg="CreateContainer within sandbox \"3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:13:15.876540 containerd[1469]: time="2025-08-13T07:13:15.876484753Z" level=info msg="CreateContainer within sandbox \"3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c39692c554ddf7669a320c7d0e26d2c616bd8f74a6b70744dc972f33d12c4ce6\"" Aug 13 07:13:15.881084 containerd[1469]: time="2025-08-13T07:13:15.877840213Z" level=info msg="StartContainer for \"c39692c554ddf7669a320c7d0e26d2c616bd8f74a6b70744dc972f33d12c4ce6\"" Aug 13 07:13:15.894261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029672244.mount: Deactivated successfully. Aug 13 07:13:15.981162 kubelet[2490]: I0813 07:13:15.979106 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7c665b6f4-jwqhs" podStartSLOduration=5.961122605 podStartE2EDuration="12.979079268s" podCreationTimestamp="2025-08-13 07:13:03 +0000 UTC" firstStartedPulling="2025-08-13 07:13:04.487715732 +0000 UTC m=+44.428174564" lastFinishedPulling="2025-08-13 07:13:11.505672379 +0000 UTC m=+51.446131227" observedRunningTime="2025-08-13 07:13:12.889971886 +0000 UTC m=+52.830430740" watchObservedRunningTime="2025-08-13 07:13:15.979079268 +0000 UTC m=+55.919538122" Aug 13 07:13:15.994556 sshd[4993]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:16.002955 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:13:16.013094 systemd[1]: sshd@7-137.184.36.62:22-139.178.89.65:43880.service: Deactivated successfully. Aug 13 07:13:16.019496 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:13:16.028145 systemd-logind[1446]: Removed session 8. 
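
Two PullImage results for the same tag appear above: a cold pull that read 47,317,977 bytes and completed in 3.656404992s (roughly 13 MB/s), and a warm re-pull that finished in 668.409239ms after reading only 77 bytes, since the blobs were already in the content store and only the manifest had to be checked. Here is a small sketch for extracting those figures from this message format; the regex is tuned to the exact wording seen here and is an editor's illustration, not a containerd interface.

# Parse containerd 'Pulled image ... size "N" in <duration>' messages.
import re

PULLED = re.compile(
    r'Pulled image "(?P<image>[^"]+)".*'
    r'size "(?P<size>\d+)" in (?P<dur>[\d.]+)(?P<unit>ms|s)')

def pull_stats(msg):
    m = PULLED.search(msg)
    if not m:
        return None
    secs = float(m["dur"]) / 1000.0 if m["unit"] == "ms" else float(m["dur"])
    return m["image"], int(m["size"]), secs

cold = ('Pulled image "ghcr.io/flatcar/calico/apiserver:v3.30.2" with image id '
        '"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd", '
        'size "48810696" in 3.656404992s')
image, size, secs = pull_stats(cold)
# Note: the reported size (48,810,696) is the image size, while the log's
# "bytes read" (47,317,977) is what was actually fetched over the network.
print(f"{image}: {size / secs / 1e6:.1f} MB/s over {secs:.2f}s")
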
Aug 13 07:13:16.089915 systemd[1]: Started cri-containerd-c39692c554ddf7669a320c7d0e26d2c616bd8f74a6b70744dc972f33d12c4ce6.scope - libcontainer container c39692c554ddf7669a320c7d0e26d2c616bd8f74a6b70744dc972f33d12c4ce6. Aug 13 07:13:16.447690 containerd[1469]: time="2025-08-13T07:13:16.447640269Z" level=info msg="StartContainer for \"c39692c554ddf7669a320c7d0e26d2c616bd8f74a6b70744dc972f33d12c4ce6\" returns successfully" Aug 13 07:13:16.960316 kubelet[2490]: I0813 07:13:16.960280 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:13:16.977258 kubelet[2490]: I0813 07:13:16.976417 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8bc87ff86-z99t9" podStartSLOduration=32.152905535 podStartE2EDuration="38.976384639s" podCreationTimestamp="2025-08-13 07:12:38 +0000 UTC" firstStartedPulling="2025-08-13 07:13:08.345313612 +0000 UTC m=+48.285772443" lastFinishedPulling="2025-08-13 07:13:15.168792716 +0000 UTC m=+55.109251547" observedRunningTime="2025-08-13 07:13:15.979959358 +0000 UTC m=+55.920418214" watchObservedRunningTime="2025-08-13 07:13:16.976384639 +0000 UTC m=+56.916843492" Aug 13 07:13:17.963026 kubelet[2490]: I0813 07:13:17.962822 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:13:19.322507 containerd[1469]: time="2025-08-13T07:13:19.322243228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:19.323900 containerd[1469]: time="2025-08-13T07:13:19.323485462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:13:19.323900 containerd[1469]: time="2025-08-13T07:13:19.323846523Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:19.326045 containerd[1469]: time="2025-08-13T07:13:19.325990125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:19.326922 containerd[1469]: time="2025-08-13T07:13:19.326894163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.486725081s" Aug 13 07:13:19.327250 containerd[1469]: time="2025-08-13T07:13:19.327031075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:13:19.328696 containerd[1469]: time="2025-08-13T07:13:19.328622813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:13:19.380722 containerd[1469]: time="2025-08-13T07:13:19.380609976Z" level=info msg="CreateContainer within sandbox \"ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:13:19.408317 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1779032479.mount: Deactivated successfully. Aug 13 07:13:19.419222 containerd[1469]: time="2025-08-13T07:13:19.419158685Z" level=info msg="CreateContainer within sandbox \"ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7b1fadd9219b1c16a196f1c7fd9741ff0ea1a470b874e9c110eacc1d6067b59b\"" Aug 13 07:13:19.421190 containerd[1469]: time="2025-08-13T07:13:19.421140503Z" level=info msg="StartContainer for \"7b1fadd9219b1c16a196f1c7fd9741ff0ea1a470b874e9c110eacc1d6067b59b\"" Aug 13 07:13:19.471483 systemd[1]: Started cri-containerd-7b1fadd9219b1c16a196f1c7fd9741ff0ea1a470b874e9c110eacc1d6067b59b.scope - libcontainer container 7b1fadd9219b1c16a196f1c7fd9741ff0ea1a470b874e9c110eacc1d6067b59b. Aug 13 07:13:19.533414 containerd[1469]: time="2025-08-13T07:13:19.533222439Z" level=info msg="StartContainer for \"7b1fadd9219b1c16a196f1c7fd9741ff0ea1a470b874e9c110eacc1d6067b59b\" returns successfully" Aug 13 07:13:20.005941 kubelet[2490]: I0813 07:13:20.004742 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d686c46f5-79swt" podStartSLOduration=27.191121446 podStartE2EDuration="37.00471357s" podCreationTimestamp="2025-08-13 07:12:43 +0000 UTC" firstStartedPulling="2025-08-13 07:13:09.514549465 +0000 UTC m=+49.455008312" lastFinishedPulling="2025-08-13 07:13:19.328141604 +0000 UTC m=+59.268600436" observedRunningTime="2025-08-13 07:13:20.003480918 +0000 UTC m=+59.943939774" watchObservedRunningTime="2025-08-13 07:13:20.00471357 +0000 UTC m=+59.945172424" Aug 13 07:13:20.005941 kubelet[2490]: I0813 07:13:20.004871 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8bc87ff86-87hxl" podStartSLOduration=34.561959218 podStartE2EDuration="42.004865101s" podCreationTimestamp="2025-08-13 07:12:38 +0000 UTC" firstStartedPulling="2025-08-13 07:13:08.397085981 +0000 UTC m=+48.337544821" lastFinishedPulling="2025-08-13 07:13:15.839991873 +0000 UTC m=+55.780450704" observedRunningTime="2025-08-13 07:13:16.977236237 +0000 UTC m=+56.917695090" watchObservedRunningTime="2025-08-13 07:13:20.004865101 +0000 UTC m=+59.945323953" Aug 13 07:13:20.483445 containerd[1469]: time="2025-08-13T07:13:20.483401782Z" level=info msg="StopPodSandbox for \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\"" Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.702 [WARNING][5178] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0", GenerateName:"calico-kube-controllers-6d686c46f5-", Namespace:"calico-system", SelfLink:"", UID:"60505575-c00e-488e-b314-2e2a9dc9b111", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d686c46f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7", Pod:"calico-kube-controllers-6d686c46f5-79swt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0943ae18732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.706 [INFO][5178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.706 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" iface="eth0" netns="" Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.706 [INFO][5178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.706 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.869 [INFO][5185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.871 [INFO][5185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.871 [INFO][5185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.886 [WARNING][5185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.886 [INFO][5185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.889 [INFO][5185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:20.895239 containerd[1469]: 2025-08-13 07:13:20.892 [INFO][5178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:20.912691 containerd[1469]: time="2025-08-13T07:13:20.912610934Z" level=info msg="TearDown network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\" successfully" Aug 13 07:13:20.912691 containerd[1469]: time="2025-08-13T07:13:20.912662392Z" level=info msg="StopPodSandbox for \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\" returns successfully" Aug 13 07:13:20.999935 containerd[1469]: time="2025-08-13T07:13:20.999864932Z" level=info msg="RemovePodSandbox for \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\"" Aug 13 07:13:20.999935 containerd[1469]: time="2025-08-13T07:13:20.999933722Z" level=info msg="Forcibly stopping sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\"" Aug 13 07:13:21.031652 systemd[1]: Started sshd@8-137.184.36.62:22-139.178.89.65:34294.service - OpenSSH per-connection server daemon (139.178.89.65:34294). Aug 13 07:13:21.225831 sshd[5201]: Accepted publickey for core from 139.178.89.65 port 34294 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:13:21.230721 sshd[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:21.248014 systemd-logind[1446]: New session 9 of user core. Aug 13 07:13:21.250447 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.151 [WARNING][5200] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0", GenerateName:"calico-kube-controllers-6d686c46f5-", Namespace:"calico-system", SelfLink:"", UID:"60505575-c00e-488e-b314-2e2a9dc9b111", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d686c46f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"ef9d68be05257e6c341bfbe9cca581e51717f4d0a967450571840eacad7d0af7", Pod:"calico-kube-controllers-6d686c46f5-79swt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0943ae18732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.152 [INFO][5200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.152 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" iface="eth0" netns="" Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.152 [INFO][5200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.153 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.231 [INFO][5210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.232 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.232 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.269 [WARNING][5210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.270 [INFO][5210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" HandleID="k8s-pod-network.9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--kube--controllers--6d686c46f5--79swt-eth0" Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.276 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:21.288251 containerd[1469]: 2025-08-13 07:13:21.281 [INFO][5200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7" Aug 13 07:13:21.288251 containerd[1469]: time="2025-08-13T07:13:21.287973522Z" level=info msg="TearDown network for sandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\" successfully" Aug 13 07:13:21.305017 containerd[1469]: time="2025-08-13T07:13:21.304962537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:13:21.341531 containerd[1469]: time="2025-08-13T07:13:21.341038683Z" level=info msg="RemovePodSandbox \"9b1e439743eb4c4b05449c6907cf3cf521947f01dce569c322b6def98d9a90a7\" returns successfully" Aug 13 07:13:21.364806 containerd[1469]: time="2025-08-13T07:13:21.364379210Z" level=info msg="StopPodSandbox for \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\"" Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.463 [WARNING][5230] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0", GenerateName:"calico-apiserver-8bc87ff86-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d1571e3-4e3e-4f96-8193-1271f6b023c0", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bc87ff86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53", Pod:"calico-apiserver-8bc87ff86-87hxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9b8398d2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.464 [INFO][5230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.464 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" iface="eth0" netns="" Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.464 [INFO][5230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.464 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.526 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.527 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.527 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.545 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.546 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.550 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:21.570603 containerd[1469]: 2025-08-13 07:13:21.556 [INFO][5230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:21.570603 containerd[1469]: time="2025-08-13T07:13:21.570344605Z" level=info msg="TearDown network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\" successfully" Aug 13 07:13:21.570603 containerd[1469]: time="2025-08-13T07:13:21.570376790Z" level=info msg="StopPodSandbox for \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\" returns successfully" Aug 13 07:13:21.576350 containerd[1469]: time="2025-08-13T07:13:21.571272108Z" level=info msg="RemovePodSandbox for \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\"" Aug 13 07:13:21.576350 containerd[1469]: time="2025-08-13T07:13:21.571305670Z" level=info msg="Forcibly stopping sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\"" Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.708 [WARNING][5254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0", GenerateName:"calico-apiserver-8bc87ff86-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d1571e3-4e3e-4f96-8193-1271f6b023c0", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bc87ff86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"3ddbd6a3f988ca1c81dcad6ffbf24e3913f1d9fb6d9028e03c0d6b7aa28d1a53", Pod:"calico-apiserver-8bc87ff86-87hxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia9b8398d2af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.708 [INFO][5254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.709 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" iface="eth0" netns="" Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.709 [INFO][5254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.709 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.883 [INFO][5261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.883 [INFO][5261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.884 [INFO][5261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.921 [WARNING][5261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.921 [INFO][5261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" HandleID="k8s-pod-network.b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--87hxl-eth0" Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.928 [INFO][5261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:21.940258 containerd[1469]: 2025-08-13 07:13:21.934 [INFO][5254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f" Aug 13 07:13:21.942701 containerd[1469]: time="2025-08-13T07:13:21.941338162Z" level=info msg="TearDown network for sandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\" successfully" Aug 13 07:13:21.949209 containerd[1469]: time="2025-08-13T07:13:21.949131506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:13:21.949753 containerd[1469]: time="2025-08-13T07:13:21.949232247Z" level=info msg="RemovePodSandbox \"b551c20d465a8516e13e522d2853465fc25b42a341dac67f56af0ea6ad3a3a2f\" returns successfully" Aug 13 07:13:21.951463 containerd[1469]: time="2025-08-13T07:13:21.950983370Z" level=info msg="StopPodSandbox for \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\"" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.069 [WARNING][5276] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.073 [INFO][5276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.073 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" iface="eth0" netns="" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.073 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.073 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.218 [INFO][5283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.222 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.222 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.241 [WARNING][5283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.242 [INFO][5283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.248 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:22.257851 containerd[1469]: 2025-08-13 07:13:22.254 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:22.258820 containerd[1469]: time="2025-08-13T07:13:22.258252852Z" level=info msg="TearDown network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\" successfully" Aug 13 07:13:22.258820 containerd[1469]: time="2025-08-13T07:13:22.258285140Z" level=info msg="StopPodSandbox for \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\" returns successfully" Aug 13 07:13:22.262050 containerd[1469]: time="2025-08-13T07:13:22.261423305Z" level=info msg="RemovePodSandbox for \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\"" Aug 13 07:13:22.262050 containerd[1469]: time="2025-08-13T07:13:22.261468168Z" level=info msg="Forcibly stopping sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\"" Aug 13 07:13:22.296312 sshd[5201]: pam_unix(sshd:session): session closed for user core Aug 13 07:13:22.304514 systemd[1]: sshd@8-137.184.36.62:22-139.178.89.65:34294.service: Deactivated successfully. Aug 13 07:13:22.311705 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:13:22.318955 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:13:22.321231 systemd-logind[1446]: Removed session 9. 
Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.345 [WARNING][5297] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" WorkloadEndpoint="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.346 [INFO][5297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.346 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" iface="eth0" netns="" Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.346 [INFO][5297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.346 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.393 [INFO][5307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.394 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.394 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.405 [WARNING][5307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.406 [INFO][5307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" HandleID="k8s-pod-network.a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-whisker--8dd745b58--c25pn-eth0" Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.408 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:22.416726 containerd[1469]: 2025-08-13 07:13:22.411 [INFO][5297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca" Aug 13 07:13:22.416726 containerd[1469]: time="2025-08-13T07:13:22.416695606Z" level=info msg="TearDown network for sandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\" successfully" Aug 13 07:13:22.425753 containerd[1469]: time="2025-08-13T07:13:22.425701024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Aug 13 07:13:22.425893 containerd[1469]: time="2025-08-13T07:13:22.425789009Z" level=info msg="RemovePodSandbox \"a81203d76b57476b512320ea44acd4def4941873a376cd698dfd95a784c8b2ca\" returns successfully" Aug 13 07:13:22.426611 containerd[1469]: time="2025-08-13T07:13:22.426539313Z" level=info msg="StopPodSandbox for \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\"" Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.476 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e60244f-7e5f-41c3-a2a5-c9680ee5ed60", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc", Pod:"coredns-7c65d6cfc9-gpfnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523df21e401", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.477 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.477 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" iface="eth0" netns="" Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.477 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.477 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.516 [INFO][5328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.516 [INFO][5328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.516 [INFO][5328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.525 [WARNING][5328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.526 [INFO][5328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.528 [INFO][5328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:22.535816 containerd[1469]: 2025-08-13 07:13:22.532 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:22.537884 containerd[1469]: time="2025-08-13T07:13:22.535848169Z" level=info msg="TearDown network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\" successfully" Aug 13 07:13:22.537884 containerd[1469]: time="2025-08-13T07:13:22.535879280Z" level=info msg="StopPodSandbox for \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\" returns successfully" Aug 13 07:13:22.537884 containerd[1469]: time="2025-08-13T07:13:22.536718143Z" level=info msg="RemovePodSandbox for \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\"" Aug 13 07:13:22.537884 containerd[1469]: time="2025-08-13T07:13:22.536757900Z" level=info msg="Forcibly stopping sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\"" Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.600 [WARNING][5343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7e60244f-7e5f-41c3-a2a5-c9680ee5ed60", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"5c3362c2a370cfb9de6d13a9dc5d31482bb59b2c44150945d08c3f422a2d0abc", Pod:"coredns-7c65d6cfc9-gpfnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali523df21e401", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.601 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.601 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" iface="eth0" netns="" Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.601 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.601 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.634 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.634 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.634 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.646 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.646 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" HandleID="k8s-pod-network.9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--gpfnk-eth0" Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.650 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:22.658330 containerd[1469]: 2025-08-13 07:13:22.653 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e" Aug 13 07:13:22.658330 containerd[1469]: time="2025-08-13T07:13:22.657990549Z" level=info msg="TearDown network for sandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\" successfully" Aug 13 07:13:22.671259 containerd[1469]: time="2025-08-13T07:13:22.671145942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:13:22.671419 containerd[1469]: time="2025-08-13T07:13:22.671300626Z" level=info msg="RemovePodSandbox \"9af91f64835e6296c5d0b9f881d04893b6a6a2043ca92f534493e825b891a21e\" returns successfully" Aug 13 07:13:22.672382 containerd[1469]: time="2025-08-13T07:13:22.672344825Z" level=info msg="StopPodSandbox for \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\"" Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.768 [WARNING][5366] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c51012b-f93a-4143-8128-6925ff4126f6", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846", Pod:"csi-node-driver-5l6lh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec41609144f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.769 [INFO][5366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.769 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" iface="eth0" netns="" Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.769 [INFO][5366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.769 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.810 [INFO][5374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.810 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.810 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.824 [WARNING][5374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.824 [INFO][5374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.827 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:22.832655 containerd[1469]: 2025-08-13 07:13:22.829 [INFO][5366] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:22.834744 containerd[1469]: time="2025-08-13T07:13:22.833576970Z" level=info msg="TearDown network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\" successfully" Aug 13 07:13:22.834744 containerd[1469]: time="2025-08-13T07:13:22.833638088Z" level=info msg="StopPodSandbox for \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\" returns successfully" Aug 13 07:13:22.834908 containerd[1469]: time="2025-08-13T07:13:22.834846032Z" level=info msg="RemovePodSandbox for \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\"" Aug 13 07:13:22.834971 containerd[1469]: time="2025-08-13T07:13:22.834941016Z" level=info msg="Forcibly stopping sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\"" Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.904 [WARNING][5388] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c51012b-f93a-4143-8128-6925ff4126f6", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846", Pod:"csi-node-driver-5l6lh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliec41609144f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.906 [INFO][5388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.907 [INFO][5388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" iface="eth0" netns="" Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.907 [INFO][5388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.907 [INFO][5388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.976 [INFO][5395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.977 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.977 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.993 [WARNING][5395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.993 [INFO][5395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" HandleID="k8s-pod-network.2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-csi--node--driver--5l6lh-eth0" Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:22.999 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:23.010712 containerd[1469]: 2025-08-13 07:13:23.004 [INFO][5388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87" Aug 13 07:13:23.012452 containerd[1469]: time="2025-08-13T07:13:23.010759488Z" level=info msg="TearDown network for sandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\" successfully" Aug 13 07:13:23.052614 containerd[1469]: time="2025-08-13T07:13:23.051415404Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:13:23.052614 containerd[1469]: time="2025-08-13T07:13:23.051496179Z" level=info msg="RemovePodSandbox \"2dc1558430bba1ec5ec7a3c3dacebd267b16b75db5aa0a5000f8d3254a83ea87\" returns successfully" Aug 13 07:13:23.062222 containerd[1469]: time="2025-08-13T07:13:23.062181471Z" level=info msg="StopPodSandbox for \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\"" Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.215 [WARNING][5409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0", GenerateName:"calico-apiserver-8bc87ff86-", Namespace:"calico-apiserver", SelfLink:"", UID:"76bc530d-8815-4ca7-9b4b-bf84496266e1", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bc87ff86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62", Pod:"calico-apiserver-8bc87ff86-z99t9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif426492695e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.215 [INFO][5409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.216 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" iface="eth0" netns="" Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.216 [INFO][5409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.216 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.308 [INFO][5421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.308 [INFO][5421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.308 [INFO][5421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.323 [WARNING][5421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.323 [INFO][5421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.326 [INFO][5421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:23.338828 containerd[1469]: 2025-08-13 07:13:23.330 [INFO][5409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:23.340052 containerd[1469]: time="2025-08-13T07:13:23.338887519Z" level=info msg="TearDown network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\" successfully" Aug 13 07:13:23.340052 containerd[1469]: time="2025-08-13T07:13:23.338929241Z" level=info msg="StopPodSandbox for \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\" returns successfully" Aug 13 07:13:23.340162 containerd[1469]: time="2025-08-13T07:13:23.340057866Z" level=info msg="RemovePodSandbox for \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\"" Aug 13 07:13:23.340162 containerd[1469]: time="2025-08-13T07:13:23.340105796Z" level=info msg="Forcibly stopping sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\"" Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.509 [WARNING][5435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0", GenerateName:"calico-apiserver-8bc87ff86-", Namespace:"calico-apiserver", SelfLink:"", UID:"76bc530d-8815-4ca7-9b4b-bf84496266e1", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8bc87ff86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"94c7949c4f0ebc15c1dc17f3686c56aed683a2495e860358eb5c5cabf9509d62", Pod:"calico-apiserver-8bc87ff86-z99t9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif426492695e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.509 [INFO][5435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.509 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" iface="eth0" netns="" Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.509 [INFO][5435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.509 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.599 [INFO][5443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.600 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.600 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.611 [WARNING][5443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.611 [INFO][5443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" HandleID="k8s-pod-network.7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-calico--apiserver--8bc87ff86--z99t9-eth0" Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.614 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:23.634459 containerd[1469]: 2025-08-13 07:13:23.622 [INFO][5435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055" Aug 13 07:13:23.634459 containerd[1469]: time="2025-08-13T07:13:23.633961642Z" level=info msg="TearDown network for sandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\" successfully" Aug 13 07:13:23.646242 containerd[1469]: time="2025-08-13T07:13:23.646183571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:13:23.646717 containerd[1469]: time="2025-08-13T07:13:23.646464175Z" level=info msg="RemovePodSandbox \"7480db873343010758178e1c832a8facc299e29ebc8041a921f711b9069fc055\" returns successfully" Aug 13 07:13:23.652011 containerd[1469]: time="2025-08-13T07:13:23.651539250Z" level=info msg="StopPodSandbox for \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\"" Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.768 [WARNING][5465] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bc80eeef-1a82-4810-afd6-bf52e91a865f", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3", Pod:"coredns-7c65d6cfc9-rjt66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d88052877e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.768 [INFO][5465] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.768 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" iface="eth0" netns="" Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.768 [INFO][5465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.768 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.843 [INFO][5472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.843 [INFO][5472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.843 [INFO][5472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.861 [WARNING][5472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.862 [INFO][5472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.866 [INFO][5472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:23.879502 containerd[1469]: 2025-08-13 07:13:23.873 [INFO][5465] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:23.881552 containerd[1469]: time="2025-08-13T07:13:23.880325814Z" level=info msg="TearDown network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\" successfully" Aug 13 07:13:23.881552 containerd[1469]: time="2025-08-13T07:13:23.880530235Z" level=info msg="StopPodSandbox for \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\" returns successfully" Aug 13 07:13:23.882298 containerd[1469]: time="2025-08-13T07:13:23.881875284Z" level=info msg="RemovePodSandbox for \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\"" Aug 13 07:13:23.882298 containerd[1469]: time="2025-08-13T07:13:23.881908264Z" level=info msg="Forcibly stopping sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\"" Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.001 [WARNING][5487] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bc80eeef-1a82-4810-afd6-bf52e91a865f", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"9ec2685a7dbcfa94faa74155a9f0d9d63d0d152bd38270a630e0a903b6f47ea3", Pod:"coredns-7c65d6cfc9-rjt66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d88052877e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.002 [INFO][5487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.002 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" iface="eth0" netns="" Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.002 [INFO][5487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.002 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.081 [INFO][5502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.083 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.083 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.104 [WARNING][5502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.104 [INFO][5502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" HandleID="k8s-pod-network.9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-coredns--7c65d6cfc9--rjt66-eth0" Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.109 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:24.118902 containerd[1469]: 2025-08-13 07:13:24.114 [INFO][5487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277" Aug 13 07:13:24.120728 containerd[1469]: time="2025-08-13T07:13:24.120079069Z" level=info msg="TearDown network for sandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\" successfully" Aug 13 07:13:24.127662 containerd[1469]: time="2025-08-13T07:13:24.127152235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:13:24.127662 containerd[1469]: time="2025-08-13T07:13:24.127505516Z" level=info msg="RemovePodSandbox \"9390f362adec66621758fb2c58d8688376f42d1b7a78a4b33d2474df3d0d0277\" returns successfully" Aug 13 07:13:24.129496 containerd[1469]: time="2025-08-13T07:13:24.129452708Z" level=info msg="StopPodSandbox for \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\"" Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.229 [WARNING][5525] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"bac3cbef-457c-46f3-9f40-8d0637f6d8c5", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9", Pod:"goldmane-58fd7646b9-svvdd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide06c13e58e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.232 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.232 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" iface="eth0" netns="" Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.232 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.232 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.306 [INFO][5532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.307 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.307 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.320 [WARNING][5532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.320 [INFO][5532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.324 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:24.331313 containerd[1469]: 2025-08-13 07:13:24.327 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:24.331313 containerd[1469]: time="2025-08-13T07:13:24.331188993Z" level=info msg="TearDown network for sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\" successfully" Aug 13 07:13:24.331313 containerd[1469]: time="2025-08-13T07:13:24.331214297Z" level=info msg="StopPodSandbox for \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\" returns successfully" Aug 13 07:13:24.334050 containerd[1469]: time="2025-08-13T07:13:24.332797585Z" level=info msg="RemovePodSandbox for \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\"" Aug 13 07:13:24.334050 containerd[1469]: time="2025-08-13T07:13:24.332835406Z" level=info msg="Forcibly stopping sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\"" Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.394 [WARNING][5546] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"bac3cbef-457c-46f3-9f40-8d0637f6d8c5", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-2a2ab8bcea", ContainerID:"da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9", Pod:"goldmane-58fd7646b9-svvdd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide06c13e58e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.394 [INFO][5546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.394 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" iface="eth0" netns="" Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.394 [INFO][5546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.395 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.438 [INFO][5553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.438 [INFO][5553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.438 [INFO][5553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.452 [WARNING][5553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.452 [INFO][5553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" HandleID="k8s-pod-network.c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Workload="ci--4081.3.5--a--2a2ab8bcea-k8s-goldmane--58fd7646b9--svvdd-eth0" Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.461 [INFO][5553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:13:24.472565 containerd[1469]: 2025-08-13 07:13:24.468 [INFO][5546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b" Aug 13 07:13:24.473999 containerd[1469]: time="2025-08-13T07:13:24.472871958Z" level=info msg="TearDown network for sandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\" successfully" Aug 13 07:13:24.479387 containerd[1469]: time="2025-08-13T07:13:24.479332073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:13:24.480431 containerd[1469]: time="2025-08-13T07:13:24.479850352Z" level=info msg="RemovePodSandbox \"c0d5c1bb5f79ba092f1bfe410fa311d13992fb070131f2edbd56db8de67c0d7b\" returns successfully" Aug 13 07:13:24.622656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151362004.mount: Deactivated successfully. 
Aug 13 07:13:25.622630 containerd[1469]: time="2025-08-13T07:13:25.525949758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:13:25.622630 containerd[1469]: time="2025-08-13T07:13:25.622251735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:25.672564 containerd[1469]: time="2025-08-13T07:13:25.672466709Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:25.673984 containerd[1469]: time="2025-08-13T07:13:25.673711201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.345052279s" Aug 13 07:13:25.673984 containerd[1469]: time="2025-08-13T07:13:25.673766338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:13:25.676734 containerd[1469]: time="2025-08-13T07:13:25.676620722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:25.740611 containerd[1469]: time="2025-08-13T07:13:25.740561034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:13:25.852687 containerd[1469]: time="2025-08-13T07:13:25.852483261Z" level=info msg="CreateContainer within sandbox \"da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:13:25.986615 containerd[1469]: time="2025-08-13T07:13:25.986552708Z" level=info msg="CreateContainer within sandbox \"da234285f882469d421c5578540558693eff458bdb306e2d6e75a77d5aa7a4a9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6158a5aeff499492ac4b8faa656c678fe0dee585c3698c455ff70dcdb53727fb\"" Aug 13 07:13:25.990510 containerd[1469]: time="2025-08-13T07:13:25.989565351Z" level=info msg="StartContainer for \"6158a5aeff499492ac4b8faa656c678fe0dee585c3698c455ff70dcdb53727fb\"" Aug 13 07:13:26.351517 systemd[1]: Started cri-containerd-6158a5aeff499492ac4b8faa656c678fe0dee585c3698c455ff70dcdb53727fb.scope - libcontainer container 6158a5aeff499492ac4b8faa656c678fe0dee585c3698c455ff70dcdb53727fb. Aug 13 07:13:26.445357 containerd[1469]: time="2025-08-13T07:13:26.444681029Z" level=info msg="StartContainer for \"6158a5aeff499492ac4b8faa656c678fe0dee585c3698c455ff70dcdb53727fb\" returns successfully" Aug 13 07:13:27.341758 systemd[1]: Started sshd@9-137.184.36.62:22-139.178.89.65:34298.service - OpenSSH per-connection server daemon (139.178.89.65:34298). Aug 13 07:13:27.631531 sshd[5607]: Accepted publickey for core from 139.178.89.65 port 34298 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM Aug 13 07:13:27.639054 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:13:27.666711 systemd-logind[1446]: New session 10 of user core. 
Aug 13 07:13:27.675540 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:13:27.705138 kubelet[2490]: I0813 07:13:27.700777 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-svvdd" podStartSLOduration=30.922822492999998 podStartE2EDuration="45.657883841s" podCreationTimestamp="2025-08-13 07:12:42 +0000 UTC" firstStartedPulling="2025-08-13 07:13:10.965125337 +0000 UTC m=+50.905584169" lastFinishedPulling="2025-08-13 07:13:25.700186658 +0000 UTC m=+65.640645517" observedRunningTime="2025-08-13 07:13:27.601492591 +0000 UTC m=+67.541951452" watchObservedRunningTime="2025-08-13 07:13:27.657883841 +0000 UTC m=+67.598342695" Aug 13 07:13:27.943137 containerd[1469]: time="2025-08-13T07:13:27.942949971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:27.946347 containerd[1469]: time="2025-08-13T07:13:27.945422928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:13:27.946347 containerd[1469]: time="2025-08-13T07:13:27.946239776Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:27.951090 containerd[1469]: time="2025-08-13T07:13:27.950085167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:13:27.952936 containerd[1469]: time="2025-08-13T07:13:27.952315738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.210163034s" Aug 13 07:13:27.952936 containerd[1469]: time="2025-08-13T07:13:27.952385501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:13:27.961381 containerd[1469]: time="2025-08-13T07:13:27.960064870Z" level=info msg="CreateContainer within sandbox \"169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:13:28.006147 containerd[1469]: time="2025-08-13T07:13:28.005406130Z" level=info msg="CreateContainer within sandbox \"169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c68bbb6097c056a262723733b8ea4a33f822b296d7f682d83e1a4f02affbd2b2\"" Aug 13 07:13:28.011082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3074592812.mount: Deactivated successfully. Aug 13 07:13:28.028075 containerd[1469]: time="2025-08-13T07:13:28.026324655Z" level=info msg="StartContainer for \"c68bbb6097c056a262723733b8ea4a33f822b296d7f682d83e1a4f02affbd2b2\"" Aug 13 07:13:28.205368 systemd[1]: Started cri-containerd-c68bbb6097c056a262723733b8ea4a33f822b296d7f682d83e1a4f02affbd2b2.scope - libcontainer container c68bbb6097c056a262723733b8ea4a33f822b296d7f682d83e1a4f02affbd2b2. 
Aug 13 07:13:28.323504 containerd[1469]: time="2025-08-13T07:13:28.323454427Z" level=info msg="StartContainer for \"c68bbb6097c056a262723733b8ea4a33f822b296d7f682d83e1a4f02affbd2b2\" returns successfully"
Aug 13 07:13:28.330563 containerd[1469]: time="2025-08-13T07:13:28.330311522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Aug 13 07:13:28.610537 sshd[5607]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:28.622918 systemd[1]: sshd@9-137.184.36.62:22-139.178.89.65:34298.service: Deactivated successfully.
Aug 13 07:13:28.627845 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 07:13:28.631039 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit.
Aug 13 07:13:28.641598 systemd[1]: Started sshd@10-137.184.36.62:22-139.178.89.65:34310.service - OpenSSH per-connection server daemon (139.178.89.65:34310).
Aug 13 07:13:28.647025 systemd-logind[1446]: Removed session 10.
Aug 13 07:13:28.749494 sshd[5678]: Accepted publickey for core from 139.178.89.65 port 34310 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:28.754318 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:28.763350 systemd-logind[1446]: New session 11 of user core.
Aug 13 07:13:28.771498 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 07:13:28.999983 systemd[1]: run-containerd-runc-k8s.io-6158a5aeff499492ac4b8faa656c678fe0dee585c3698c455ff70dcdb53727fb-runc.haFTMC.mount: Deactivated successfully.
Aug 13 07:13:29.061489 sshd[5678]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:29.079104 systemd[1]: sshd@10-137.184.36.62:22-139.178.89.65:34310.service: Deactivated successfully.
Aug 13 07:13:29.085609 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 07:13:29.088626 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit.
Aug 13 07:13:29.097852 systemd[1]: Started sshd@11-137.184.36.62:22-139.178.89.65:45422.service - OpenSSH per-connection server daemon (139.178.89.65:45422).
Aug 13 07:13:29.100406 systemd-logind[1446]: Removed session 11.
Aug 13 07:13:29.165567 sshd[5695]: Accepted publickey for core from 139.178.89.65 port 45422 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:29.168917 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:29.180832 systemd-logind[1446]: New session 12 of user core.
Aug 13 07:13:29.188522 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 07:13:29.344383 sshd[5695]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:29.351533 systemd[1]: sshd@11-137.184.36.62:22-139.178.89.65:45422.service: Deactivated successfully.
Aug 13 07:13:29.355011 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 07:13:29.356713 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit.
Aug 13 07:13:29.357986 systemd-logind[1446]: Removed session 12.
Aug 13 07:13:29.537916 systemd[1]: run-containerd-runc-k8s.io-6158a5aeff499492ac4b8faa656c678fe0dee585c3698c455ff70dcdb53727fb-runc.G6FENp.mount: Deactivated successfully.
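[Annotation] Each "Accepted publickey ... RSA SHA256:iBFk..." line in these session blocks identifies the client key by its fingerprint: the unpadded base64 of the SHA-256 hash of the wire-format public key, which is why the same string recurs for every session from this client. A small sketch of computing that fingerprint from an authorized_keys entry with golang.org/x/crypto/ssh follows; the file path is illustrative.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Illustrative path; point it at the core user's authorized_keys.
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		log.Fatal(err)
	}

	// Parse the first key in the file and print its fingerprint in the
	// same "SHA256:..." form sshd logs on "Accepted publickey" lines.
	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(pub.Type(), ssh.FingerprintSHA256(pub))
}
```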
Aug 13 07:13:30.626813 containerd[1469]: time="2025-08-13T07:13:30.626729263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:13:30.628595 containerd[1469]: time="2025-08-13T07:13:30.628267473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Aug 13 07:13:30.629576 containerd[1469]: time="2025-08-13T07:13:30.629251796Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:13:30.632299 containerd[1469]: time="2025-08-13T07:13:30.632252664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 07:13:30.634903 containerd[1469]: time="2025-08-13T07:13:30.634856855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.304498552s"
Aug 13 07:13:30.635372 containerd[1469]: time="2025-08-13T07:13:30.635042603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Aug 13 07:13:30.647014 containerd[1469]: time="2025-08-13T07:13:30.646970039Z" level=info msg="CreateContainer within sandbox \"169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 07:13:30.685998 containerd[1469]: time="2025-08-13T07:13:30.685669294Z" level=info msg="CreateContainer within sandbox \"169a15f0fd0c8acbc520e3f708401e62cec28bb1ce65cf419a585e3f7528f846\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3ba47379773b2d70073c3ae6e474cbe8be041819b9471102e035fe3afa17baab\""
Aug 13 07:13:30.688113 containerd[1469]: time="2025-08-13T07:13:30.686658528Z" level=info msg="StartContainer for \"3ba47379773b2d70073c3ae6e474cbe8be041819b9471102e035fe3afa17baab\""
Aug 13 07:13:30.735386 systemd[1]: run-containerd-runc-k8s.io-3ba47379773b2d70073c3ae6e474cbe8be041819b9471102e035fe3afa17baab-runc.km2yCF.mount: Deactivated successfully.
Aug 13 07:13:30.746529 systemd[1]: Started cri-containerd-3ba47379773b2d70073c3ae6e474cbe8be041819b9471102e035fe3afa17baab.scope - libcontainer container 3ba47379773b2d70073c3ae6e474cbe8be041819b9471102e035fe3afa17baab.
Aug 13 07:13:30.794960 containerd[1469]: time="2025-08-13T07:13:30.794803118Z" level=info msg="StartContainer for \"3ba47379773b2d70073c3ae6e474cbe8be041819b9471102e035fe3afa17baab\" returns successfully"
Aug 13 07:13:31.615027 kubelet[2490]: I0813 07:13:31.614936 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5l6lh" podStartSLOduration=31.157287787 podStartE2EDuration="49.614905876s" podCreationTimestamp="2025-08-13 07:12:42 +0000 UTC" firstStartedPulling="2025-08-13 07:13:12.179416706 +0000 UTC m=+52.119875556" lastFinishedPulling="2025-08-13 07:13:30.637034801 +0000 UTC m=+70.577493645" observedRunningTime="2025-08-13 07:13:31.614458198 +0000 UTC m=+71.554917053" watchObservedRunningTime="2025-08-13 07:13:31.614905876 +0000 UTC m=+71.555364731"
Aug 13 07:13:31.667426 kubelet[2490]: I0813 07:13:31.667012 2490 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 07:13:31.676132 kubelet[2490]: I0813 07:13:31.676048 2490 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 07:13:33.291259 kubelet[2490]: E0813 07:13:33.291068 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:13:34.364685 systemd[1]: Started sshd@12-137.184.36.62:22-139.178.89.65:45426.service - OpenSSH per-connection server daemon (139.178.89.65:45426).
Aug 13 07:13:34.511821 sshd[5782]: Accepted publickey for core from 139.178.89.65 port 45426 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:34.515098 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:34.522215 systemd-logind[1446]: New session 13 of user core.
Aug 13 07:13:34.530451 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 07:13:34.937436 sshd[5782]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:34.944725 systemd[1]: sshd@12-137.184.36.62:22-139.178.89.65:45426.service: Deactivated successfully.
Aug 13 07:13:34.948842 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 07:13:34.949864 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit.
Aug 13 07:13:34.951429 systemd-logind[1446]: Removed session 13.
Aug 13 07:13:35.291515 kubelet[2490]: E0813 07:13:35.290858 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:13:38.286002 kubelet[2490]: E0813 07:13:38.285086 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:13:38.420316 kubelet[2490]: I0813 07:13:38.420256 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 07:13:39.963737 systemd[1]: Started sshd@13-137.184.36.62:22-139.178.89.65:43672.service - OpenSSH per-connection server daemon (139.178.89.65:43672).
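[Annotation] The two csi_plugin.go entries above are kubelet's plugin watcher completing its handshake with the node-driver-registrar container started just before: the registrar exposes a small gRPC Registration service on a socket under /var/lib/kubelet/plugins_registry/, kubelet calls GetInfo, validates the advertised endpoint and versions ("Trying to validate a new CSI Driver"), then registers the driver and reports the result via NotifyRegistrationStatus. A minimal sketch of the registrar side, assuming the k8s.io/kubelet pluginregistration v1 API; the registration socket name is illustrative.

```go
package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registrar answers kubelet's plugin-watcher handshake, which is what
// produces the "Register new plugin with name: csi.tigera.io" log line.
type registrar struct{}

func (registrar) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

func (registrar) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !status.PluginRegistered {
		log.Printf("kubelet rejected registration: %s", status.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// Illustrative socket name under kubelet's plugin-registry directory.
	sock := "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock"
	os.Remove(sock)
	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registrar{})
	log.Fatal(srv.Serve(lis))
}
```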
Aug 13 07:13:40.084319 sshd[5838]: Accepted publickey for core from 139.178.89.65 port 43672 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:40.088405 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:40.102241 systemd-logind[1446]: New session 14 of user core.
Aug 13 07:13:40.107562 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 07:13:40.864472 sshd[5838]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:40.872982 systemd[1]: sshd@13-137.184.36.62:22-139.178.89.65:43672.service: Deactivated successfully.
Aug 13 07:13:40.878291 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 07:13:40.880794 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit.
Aug 13 07:13:40.883310 systemd-logind[1446]: Removed session 14.
Aug 13 07:13:45.881338 systemd[1]: Started sshd@14-137.184.36.62:22-139.178.89.65:43676.service - OpenSSH per-connection server daemon (139.178.89.65:43676).
Aug 13 07:13:46.074226 sshd[5854]: Accepted publickey for core from 139.178.89.65 port 43676 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:46.078334 sshd[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:46.086583 systemd-logind[1446]: New session 15 of user core.
Aug 13 07:13:46.096478 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 07:13:46.986980 sshd[5854]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:46.992419 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Aug 13 07:13:46.993585 systemd[1]: sshd@14-137.184.36.62:22-139.178.89.65:43676.service: Deactivated successfully.
Aug 13 07:13:47.000698 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 07:13:47.003865 systemd-logind[1446]: Removed session 15.
Aug 13 07:13:50.553360 systemd[1]: run-containerd-runc-k8s.io-6158a5aeff499492ac4b8faa656c678fe0dee585c3698c455ff70dcdb53727fb-runc.skjYhj.mount: Deactivated successfully.
Aug 13 07:13:52.010598 systemd[1]: Started sshd@15-137.184.36.62:22-139.178.89.65:54676.service - OpenSSH per-connection server daemon (139.178.89.65:54676).
Aug 13 07:13:52.083456 sshd[5890]: Accepted publickey for core from 139.178.89.65 port 54676 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:52.085910 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:52.093339 systemd-logind[1446]: New session 16 of user core.
Aug 13 07:13:52.098514 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 07:13:52.307332 kubelet[2490]: E0813 07:13:52.307019 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 07:13:52.331878 sshd[5890]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:52.343873 systemd[1]: sshd@15-137.184.36.62:22-139.178.89.65:54676.service: Deactivated successfully.
Aug 13 07:13:52.346114 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 07:13:52.348417 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Aug 13 07:13:52.356553 systemd[1]: Started sshd@16-137.184.36.62:22-139.178.89.65:54682.service - OpenSSH per-connection server daemon (139.178.89.65:54682).
Aug 13 07:13:52.359638 systemd-logind[1446]: Removed session 16.
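[Annotation] The recurring dns.go:153 "Nameserver limits exceeded" errors in the blocks above mean the resolver configuration kubelet builds for pods lists more entries than the classic resolv.conf limit of three nameservers; kubelet keeps the first three and logs the applied line (here "67.207.67.3 67.207.67.2 67.207.67.3"). A small sketch of that truncation rule follows; the four-entry host list is hypothetical, since the log only shows the already-truncated result.

```go
package main

import (
	"fmt"
	"strings"
)

// truncateNameservers mirrors the rule behind kubelet's "Nameserver limits
// exceeded" warning: resolvers beyond the resolv.conf limit of 3 are dropped.
func truncateNameservers(servers []string) ([]string, bool) {
	const maxNameservers = 3 // classic resolv.conf / glibc limit
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Hypothetical host resolver list; only the applied (truncated) line
	// appears in the log.
	host := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8"}
	applied, truncated := truncateNameservers(host)
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, applied: %s\n",
			strings.Join(applied, " "))
	}
}
```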
Aug 13 07:13:52.405381 sshd[5903]: Accepted publickey for core from 139.178.89.65 port 54682 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:52.409232 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:52.418158 systemd-logind[1446]: New session 17 of user core.
Aug 13 07:13:52.425488 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 07:13:52.752897 sshd[5903]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:52.761509 systemd[1]: sshd@16-137.184.36.62:22-139.178.89.65:54682.service: Deactivated successfully.
Aug 13 07:13:52.764312 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 07:13:52.767065 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Aug 13 07:13:52.772538 systemd[1]: Started sshd@17-137.184.36.62:22-139.178.89.65:54686.service - OpenSSH per-connection server daemon (139.178.89.65:54686).
Aug 13 07:13:52.779647 systemd-logind[1446]: Removed session 17.
Aug 13 07:13:52.852704 sshd[5914]: Accepted publickey for core from 139.178.89.65 port 54686 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:52.855031 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:52.860437 systemd-logind[1446]: New session 18 of user core.
Aug 13 07:13:52.862376 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 07:13:53.918131 systemd[1]: run-containerd-runc-k8s.io-7b1fadd9219b1c16a196f1c7fd9741ff0ea1a470b874e9c110eacc1d6067b59b-runc.cFlhDm.mount: Deactivated successfully.
Aug 13 07:13:55.228658 sshd[5914]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:55.253451 systemd[1]: sshd@17-137.184.36.62:22-139.178.89.65:54686.service: Deactivated successfully.
Aug 13 07:13:55.260435 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 07:13:55.270567 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Aug 13 07:13:55.276697 systemd[1]: Started sshd@18-137.184.36.62:22-139.178.89.65:54698.service - OpenSSH per-connection server daemon (139.178.89.65:54698).
Aug 13 07:13:55.282823 systemd-logind[1446]: Removed session 18.
Aug 13 07:13:55.421171 sshd[5983]: Accepted publickey for core from 139.178.89.65 port 54698 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:55.424509 sshd[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:55.432980 systemd-logind[1446]: New session 19 of user core.
Aug 13 07:13:55.437291 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 07:13:56.399576 sshd[5983]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:56.416306 systemd[1]: Started sshd@19-137.184.36.62:22-139.178.89.65:54700.service - OpenSSH per-connection server daemon (139.178.89.65:54700).
Aug 13 07:13:56.418398 systemd[1]: sshd@18-137.184.36.62:22-139.178.89.65:54698.service: Deactivated successfully.
Aug 13 07:13:56.427056 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 07:13:56.436469 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Aug 13 07:13:56.442213 systemd-logind[1446]: Removed session 19.
Aug 13 07:13:56.523985 sshd[5996]: Accepted publickey for core from 139.178.89.65 port 54700 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:13:56.527060 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:13:56.533750 systemd-logind[1446]: New session 20 of user core.
Aug 13 07:13:56.543514 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 07:13:56.715396 sshd[5996]: pam_unix(sshd:session): session closed for user core
Aug 13 07:13:56.719559 systemd[1]: sshd@19-137.184.36.62:22-139.178.89.65:54700.service: Deactivated successfully.
Aug 13 07:13:56.723678 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 07:13:56.724731 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Aug 13 07:13:56.726982 systemd-logind[1446]: Removed session 20.
Aug 13 07:14:01.745389 systemd[1]: Started sshd@20-137.184.36.62:22-139.178.89.65:49866.service - OpenSSH per-connection server daemon (139.178.89.65:49866).
Aug 13 07:14:02.004489 sshd[6014]: Accepted publickey for core from 139.178.89.65 port 49866 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:14:02.008655 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:02.018139 systemd-logind[1446]: New session 21 of user core.
Aug 13 07:14:02.024457 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 07:14:02.683607 sshd[6014]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:02.692238 systemd[1]: sshd@20-137.184.36.62:22-139.178.89.65:49866.service: Deactivated successfully.
Aug 13 07:14:02.695728 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 07:14:02.697545 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Aug 13 07:14:02.700732 systemd-logind[1446]: Removed session 21.
Aug 13 07:14:07.716691 systemd[1]: Started sshd@21-137.184.36.62:22-139.178.89.65:49878.service - OpenSSH per-connection server daemon (139.178.89.65:49878).
Aug 13 07:14:07.850203 sshd[6049]: Accepted publickey for core from 139.178.89.65 port 49878 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:14:07.851670 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:07.859914 systemd-logind[1446]: New session 22 of user core.
Aug 13 07:14:07.869524 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 07:14:08.332481 sshd[6049]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:08.337030 systemd[1]: sshd@21-137.184.36.62:22-139.178.89.65:49878.service: Deactivated successfully.
Aug 13 07:14:08.340682 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 07:14:08.342642 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Aug 13 07:14:08.344476 systemd-logind[1446]: Removed session 22.
Aug 13 07:14:13.358101 systemd[1]: Started sshd@22-137.184.36.62:22-139.178.89.65:39506.service - OpenSSH per-connection server daemon (139.178.89.65:39506).
Aug 13 07:14:13.516294 sshd[6062]: Accepted publickey for core from 139.178.89.65 port 39506 ssh2: RSA SHA256:iBFkuKFiBB3BSalm/p74BBDVmtOBncY2PPcMGA081DM
Aug 13 07:14:13.519673 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 07:14:13.531660 systemd-logind[1446]: New session 23 of user core.
Aug 13 07:14:13.540677 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 07:14:14.513812 sshd[6062]: pam_unix(sshd:session): session closed for user core
Aug 13 07:14:14.522117 systemd[1]: sshd@22-137.184.36.62:22-139.178.89.65:39506.service: Deactivated successfully.
Aug 13 07:14:14.527198 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 07:14:14.533118 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Aug 13 07:14:14.535246 systemd-logind[1446]: Removed session 23.