Sep 12 17:38:46.002738 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025 Sep 12 17:38:46.002783 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:38:46.002807 kernel: BIOS-provided physical RAM map: Sep 12 17:38:46.002819 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 12 17:38:46.002830 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 12 17:38:46.002839 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 12 17:38:46.002847 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Sep 12 17:38:46.002854 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Sep 12 17:38:46.002861 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 17:38:46.002870 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 12 17:38:46.002877 kernel: NX (Execute Disable) protection: active Sep 12 17:38:46.002885 kernel: APIC: Static calls initialized Sep 12 17:38:46.002903 kernel: SMBIOS 2.8 present. Sep 12 17:38:46.002915 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Sep 12 17:38:46.002932 kernel: Hypervisor detected: KVM Sep 12 17:38:46.002949 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 17:38:46.002968 kernel: kvm-clock: using sched offset of 3311336569 cycles Sep 12 17:38:46.002981 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:38:46.002994 kernel: tsc: Detected 2000.000 MHz processor Sep 12 17:38:46.003005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:38:46.003012 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:38:46.003020 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Sep 12 17:38:46.003027 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 12 17:38:46.003035 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:38:46.003046 kernel: ACPI: Early table checksum verification disabled Sep 12 17:38:46.003054 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Sep 12 17:38:46.003061 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:38:46.003069 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:38:46.003077 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:38:46.003088 kernel: ACPI: FACS 0x000000007FFE0000 000040 Sep 12 17:38:46.003106 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:38:46.003151 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:38:46.003163 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:38:46.003179 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:38:46.003190 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Sep 12 17:38:46.003200 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Sep 12 17:38:46.003210 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Sep 12 17:38:46.003219 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Sep 12 17:38:46.003229 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Sep 12 17:38:46.003239 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Sep 12 17:38:46.003259 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Sep 12 17:38:46.003271 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 12 17:38:46.003283 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 12 17:38:46.003296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 12 17:38:46.003309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 12 17:38:46.003326 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Sep 12 17:38:46.003333 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Sep 12 17:38:46.003345 kernel: Zone ranges: Sep 12 17:38:46.003353 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:38:46.003360 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Sep 12 17:38:46.003368 kernel: Normal empty Sep 12 17:38:46.003376 kernel: Movable zone start for each node Sep 12 17:38:46.003383 kernel: Early memory node ranges Sep 12 17:38:46.003391 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 12 17:38:46.003399 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Sep 12 17:38:46.003406 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Sep 12 17:38:46.003417 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:38:46.003425 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 12 17:38:46.003436 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Sep 12 17:38:46.003450 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 12 17:38:46.003462 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 17:38:46.003469 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 17:38:46.003477 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 12 17:38:46.003484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 17:38:46.003492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:38:46.003502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 17:38:46.003510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 17:38:46.003517 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:38:46.003525 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 17:38:46.003533 kernel: TSC deadline timer available Sep 12 17:38:46.003541 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 12 17:38:46.003548 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 17:38:46.003556 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Sep 12 17:38:46.003567 kernel: Booting paravirtualized kernel on KVM Sep 12 17:38:46.003575 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:38:46.003588 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 17:38:46.003602 kernel: percpu: Embedded 58 pages/cpu 
s197160 r8192 d32216 u1048576 Sep 12 17:38:46.003617 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 12 17:38:46.003628 kernel: pcpu-alloc: [0] 0 1 Sep 12 17:38:46.003637 kernel: kvm-guest: PV spinlocks disabled, no host support Sep 12 17:38:46.003647 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:38:46.003657 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:38:46.003666 kernel: random: crng init done Sep 12 17:38:46.003678 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:38:46.003686 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 17:38:46.003698 kernel: Fallback order for Node 0: 0 Sep 12 17:38:46.003713 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Sep 12 17:38:46.003724 kernel: Policy zone: DMA32 Sep 12 17:38:46.003736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:38:46.003748 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved) Sep 12 17:38:46.003760 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:38:46.003778 kernel: Kernel/User page tables isolation: enabled Sep 12 17:38:46.003789 kernel: ftrace: allocating 37974 entries in 149 pages Sep 12 17:38:46.003802 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:38:46.003814 kernel: Dynamic Preempt: voluntary Sep 12 17:38:46.003827 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:38:46.003853 kernel: rcu: RCU event tracing is enabled. Sep 12 17:38:46.003863 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:38:46.003872 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:38:46.003885 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:38:46.003896 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:38:46.003909 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:38:46.003917 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:38:46.003924 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 12 17:38:46.003931 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 12 17:38:46.003949 kernel: Console: colour VGA+ 80x25 Sep 12 17:38:46.003962 kernel: printk: console [tty0] enabled Sep 12 17:38:46.003975 kernel: printk: console [ttyS0] enabled Sep 12 17:38:46.003988 kernel: ACPI: Core revision 20230628 Sep 12 17:38:46.003996 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 12 17:38:46.004008 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:38:46.004015 kernel: x2apic enabled Sep 12 17:38:46.004023 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:38:46.004031 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 17:38:46.004039 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Sep 12 17:38:46.004047 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Sep 12 17:38:46.004054 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 12 17:38:46.004062 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 12 17:38:46.004082 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:38:46.004090 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 17:38:46.004099 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:38:46.006194 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 12 17:38:46.006224 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 17:38:46.006234 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 17:38:46.006244 kernel: MDS: Mitigation: Clear CPU buffers Sep 12 17:38:46.006253 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:38:46.006263 kernel: active return thunk: its_return_thunk Sep 12 17:38:46.006290 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 17:38:46.006305 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:38:46.006314 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:38:46.006324 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:38:46.006334 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:38:46.006343 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 12 17:38:46.006353 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:38:46.006363 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:38:46.006376 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:38:46.006385 kernel: landlock: Up and running. Sep 12 17:38:46.006398 kernel: SELinux: Initializing. Sep 12 17:38:46.006413 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:38:46.006422 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:38:46.006431 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Sep 12 17:38:46.006441 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:38:46.006451 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:38:46.006461 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 12 17:38:46.006473 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Sep 12 17:38:46.006483 kernel: signal: max sigframe size: 1776 Sep 12 17:38:46.006493 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:38:46.006503 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:38:46.006513 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 17:38:46.006525 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:38:46.006542 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:38:46.006558 kernel: .... node #0, CPUs: #1 Sep 12 17:38:46.006583 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:38:46.006602 kernel: smpboot: Max logical packages: 1 Sep 12 17:38:46.006617 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Sep 12 17:38:46.006632 kernel: devtmpfs: initialized Sep 12 17:38:46.006645 kernel: x86/mm: Memory block size: 128MB Sep 12 17:38:46.006658 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:38:46.006666 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:38:46.006675 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:38:46.006683 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:38:46.006692 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:38:46.006704 kernel: audit: type=2000 audit(1757698725.653:1): state=initialized audit_enabled=0 res=1 Sep 12 17:38:46.006714 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:38:46.006722 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:38:46.006731 kernel: cpuidle: using governor menu Sep 12 17:38:46.006739 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:38:46.006748 kernel: dca service started, version 1.12.1 Sep 12 17:38:46.006756 kernel: PCI: Using configuration type 1 for base access Sep 12 17:38:46.006765 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 12 17:38:46.006774 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:38:46.006788 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:38:46.006803 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:38:46.006816 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:38:46.006829 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:38:46.006843 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:38:46.006854 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:38:46.006865 kernel: ACPI: Interpreter enabled Sep 12 17:38:46.006881 kernel: ACPI: PM: (supports S0 S5) Sep 12 17:38:46.006893 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:38:46.006911 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:38:46.006925 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 17:38:46.006936 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 12 17:38:46.006945 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:38:46.007531 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:38:46.007752 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 12 17:38:46.007860 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 12 17:38:46.007877 kernel: acpiphp: Slot [3] registered Sep 12 17:38:46.007886 kernel: acpiphp: Slot [4] registered Sep 12 17:38:46.007895 kernel: acpiphp: Slot [5] registered Sep 12 17:38:46.007904 kernel: acpiphp: Slot [6] registered Sep 12 17:38:46.007912 kernel: acpiphp: Slot [7] registered Sep 12 17:38:46.007920 kernel: acpiphp: Slot [8] registered Sep 12 17:38:46.007928 kernel: acpiphp: Slot [9] registered Sep 12 17:38:46.007937 kernel: acpiphp: Slot [10] registered Sep 12 17:38:46.007945 kernel: acpiphp: Slot [11] registered Sep 12 17:38:46.007954 kernel: acpiphp: Slot [12] registered Sep 12 17:38:46.007965 kernel: acpiphp: Slot [13] registered Sep 12 17:38:46.007973 kernel: acpiphp: Slot [14] registered Sep 12 17:38:46.007982 kernel: acpiphp: Slot [15] registered Sep 12 17:38:46.007990 kernel: acpiphp: Slot [16] registered Sep 12 17:38:46.007998 kernel: acpiphp: Slot [17] registered Sep 12 17:38:46.008007 kernel: acpiphp: Slot [18] registered Sep 12 17:38:46.008015 kernel: acpiphp: Slot [19] registered Sep 12 17:38:46.008023 kernel: acpiphp: Slot [20] registered Sep 12 17:38:46.008031 kernel: acpiphp: Slot [21] registered Sep 12 17:38:46.008043 kernel: acpiphp: Slot [22] registered Sep 12 17:38:46.008051 kernel: acpiphp: Slot [23] registered Sep 12 17:38:46.008059 kernel: acpiphp: Slot [24] registered Sep 12 17:38:46.008067 kernel: acpiphp: Slot [25] registered Sep 12 17:38:46.008075 kernel: acpiphp: Slot [26] registered Sep 12 17:38:46.008083 kernel: acpiphp: Slot [27] registered Sep 12 17:38:46.008091 kernel: acpiphp: Slot [28] registered Sep 12 17:38:46.008099 kernel: acpiphp: Slot [29] registered Sep 12 17:38:46.010795 kernel: acpiphp: Slot [30] registered Sep 12 17:38:46.010835 kernel: acpiphp: Slot [31] registered Sep 12 17:38:46.010864 kernel: PCI host bridge to bus 0000:00 Sep 12 17:38:46.011213 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 17:38:46.011372 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 17:38:46.011477 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Sep 12 17:38:46.011616 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 12 17:38:46.011749 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Sep 12 17:38:46.011872 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:38:46.012071 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 12 17:38:46.012238 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 12 17:38:46.012394 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Sep 12 17:38:46.012552 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Sep 12 17:38:46.012657 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Sep 12 17:38:46.012756 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Sep 12 17:38:46.012862 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Sep 12 17:38:46.012959 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Sep 12 17:38:46.013081 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Sep 12 17:38:46.014339 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Sep 12 17:38:46.014471 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Sep 12 17:38:46.014570 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Sep 12 17:38:46.014678 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Sep 12 17:38:46.014845 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Sep 12 17:38:46.015069 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Sep 12 17:38:46.015273 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Sep 12 17:38:46.015379 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Sep 12 17:38:46.015485 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Sep 12 17:38:46.015581 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:38:46.015712 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Sep 12 17:38:46.015929 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Sep 12 17:38:46.019303 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Sep 12 17:38:46.019472 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Sep 12 17:38:46.019618 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 12 17:38:46.019729 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Sep 12 17:38:46.019830 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Sep 12 17:38:46.019945 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Sep 12 17:38:46.020066 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Sep 12 17:38:46.020187 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Sep 12 17:38:46.020306 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Sep 12 17:38:46.020408 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Sep 12 17:38:46.020528 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Sep 12 17:38:46.020631 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Sep 12 17:38:46.020739 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Sep 12 17:38:46.020835 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Sep 12 17:38:46.020951 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Sep 12 17:38:46.021050 kernel: pci 0000:00:07.0: reg 0x10: [io 
0xc080-0xc0ff] Sep 12 17:38:46.022266 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Sep 12 17:38:46.022390 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Sep 12 17:38:46.022504 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Sep 12 17:38:46.022653 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Sep 12 17:38:46.022753 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Sep 12 17:38:46.022764 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 17:38:46.022773 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 17:38:46.022782 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:38:46.022791 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 17:38:46.022800 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 12 17:38:46.022813 kernel: iommu: Default domain type: Translated Sep 12 17:38:46.022822 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:38:46.022831 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:38:46.022839 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:38:46.022848 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 12 17:38:46.022856 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Sep 12 17:38:46.022956 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Sep 12 17:38:46.023053 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Sep 12 17:38:46.024226 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:38:46.024248 kernel: vgaarb: loaded Sep 12 17:38:46.024257 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 12 17:38:46.024266 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 12 17:38:46.024275 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 17:38:46.024284 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:38:46.024293 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:38:46.024302 kernel: pnp: PnP ACPI init Sep 12 17:38:46.024310 kernel: pnp: PnP ACPI: found 4 devices Sep 12 17:38:46.024326 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:38:46.024334 kernel: NET: Registered PF_INET protocol family Sep 12 17:38:46.024343 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:38:46.024352 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 12 17:38:46.024360 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:38:46.024369 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:38:46.024377 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 12 17:38:46.024386 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 12 17:38:46.024394 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:38:46.024406 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:38:46.024414 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:38:46.024422 kernel: NET: Registered PF_XDP protocol family Sep 12 17:38:46.024527 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:38:46.024615 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:38:46.024701 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff 
window] Sep 12 17:38:46.024789 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 12 17:38:46.024875 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Sep 12 17:38:46.024978 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Sep 12 17:38:46.025084 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 17:38:46.025096 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 12 17:38:46.026457 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 37439 usecs Sep 12 17:38:46.026479 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:38:46.026489 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 17:38:46.026498 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Sep 12 17:38:46.026507 kernel: Initialise system trusted keyrings Sep 12 17:38:46.026517 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 17:38:46.026532 kernel: Key type asymmetric registered Sep 12 17:38:46.026541 kernel: Asymmetric key parser 'x509' registered Sep 12 17:38:46.026549 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:38:46.026589 kernel: io scheduler mq-deadline registered Sep 12 17:38:46.026598 kernel: io scheduler kyber registered Sep 12 17:38:46.026606 kernel: io scheduler bfq registered Sep 12 17:38:46.026615 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:38:46.026625 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Sep 12 17:38:46.026633 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Sep 12 17:38:46.026646 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Sep 12 17:38:46.026655 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:38:46.026663 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:38:46.026672 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 17:38:46.026681 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:38:46.026689 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:38:46.026828 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 12 17:38:46.026842 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:38:46.026942 kernel: rtc_cmos 00:03: registered as rtc0 Sep 12 17:38:46.027035 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T17:38:45 UTC (1757698725) Sep 12 17:38:46.027202 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 12 17:38:46.027225 kernel: intel_pstate: CPU model not supported Sep 12 17:38:46.027238 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:38:46.027252 kernel: Segment Routing with IPv6 Sep 12 17:38:46.027266 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:38:46.027280 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:38:46.027289 kernel: Key type dns_resolver registered Sep 12 17:38:46.027305 kernel: IPI shorthand broadcast: enabled Sep 12 17:38:46.027314 kernel: sched_clock: Marking stable (1031005062, 133676270)->(1284764712, -120083380) Sep 12 17:38:46.027323 kernel: registered taskstats version 1 Sep 12 17:38:46.027331 kernel: Loading compiled-in X.509 certificates Sep 12 17:38:46.027340 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 17:38:46.027349 kernel: Key type .fscrypt registered Sep 12 17:38:46.027357 kernel: Key type fscrypt-provisioning registered Sep 
12 17:38:46.027365 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:38:46.027377 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:38:46.027385 kernel: ima: No architecture policies found Sep 12 17:38:46.027393 kernel: clk: Disabling unused clocks Sep 12 17:38:46.027402 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 12 17:38:46.027411 kernel: Write protecting the kernel read-only data: 36864k Sep 12 17:38:46.027438 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 12 17:38:46.027449 kernel: Run /init as init process Sep 12 17:38:46.027458 kernel: with arguments: Sep 12 17:38:46.027467 kernel: /init Sep 12 17:38:46.027479 kernel: with environment: Sep 12 17:38:46.027487 kernel: HOME=/ Sep 12 17:38:46.027496 kernel: TERM=linux Sep 12 17:38:46.027504 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:38:46.027516 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:38:46.027528 systemd[1]: Detected virtualization kvm. Sep 12 17:38:46.027538 systemd[1]: Detected architecture x86-64. Sep 12 17:38:46.027546 systemd[1]: Running in initrd. Sep 12 17:38:46.027558 systemd[1]: No hostname configured, using default hostname. Sep 12 17:38:46.027567 systemd[1]: Hostname set to . Sep 12 17:38:46.027576 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:38:46.027585 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:38:46.027596 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:46.027606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:46.027616 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:38:46.027625 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:38:46.027637 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:38:46.027646 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:38:46.027657 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:38:46.027666 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:38:46.027676 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:46.027684 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:46.027693 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:38:46.027705 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:38:46.027714 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:38:46.027726 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:38:46.027735 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:38:46.027744 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:38:46.027756 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Sep 12 17:38:46.027765 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:38:46.027774 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:46.027783 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:46.027792 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:46.027801 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:38:46.027810 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:38:46.027819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:38:46.027828 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:38:46.027841 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:38:46.027850 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:38:46.027859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:38:46.027875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:46.027889 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:38:46.027947 systemd-journald[184]: Collecting audit messages is disabled. Sep 12 17:38:46.027983 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:46.027993 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:38:46.028003 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:38:46.028016 systemd-journald[184]: Journal started Sep 12 17:38:46.028039 systemd-journald[184]: Runtime Journal (/run/log/journal/f2b9978b4e4649a88932dd7dcdb70f6e) is 4.9M, max 39.3M, 34.4M free. Sep 12 17:38:45.994828 systemd-modules-load[185]: Inserted module 'overlay' Sep 12 17:38:46.078472 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:38:46.078510 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:38:46.078539 kernel: Bridge firewalling registered Sep 12 17:38:46.056565 systemd-modules-load[185]: Inserted module 'br_netfilter' Sep 12 17:38:46.079594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:46.080519 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:46.081533 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:38:46.094342 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:38:46.097415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:38:46.105450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:38:46.109968 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:38:46.122890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:46.131825 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:38:46.138534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:46.146447 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 17:38:46.147641 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:46.154461 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:38:46.173247 dracut-cmdline[218]: dracut-dracut-053 Sep 12 17:38:46.177631 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:38:46.196378 systemd-resolved[217]: Positive Trust Anchors: Sep 12 17:38:46.196406 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:38:46.196455 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:38:46.200829 systemd-resolved[217]: Defaulting to hostname 'linux'. Sep 12 17:38:46.204338 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:38:46.205286 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:38:46.282166 kernel: SCSI subsystem initialized Sep 12 17:38:46.296191 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:38:46.312172 kernel: iscsi: registered transport (tcp) Sep 12 17:38:46.339324 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:38:46.339439 kernel: QLogic iSCSI HBA Driver Sep 12 17:38:46.394479 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:38:46.404443 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:38:46.440339 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:38:46.440472 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:38:46.442151 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:38:46.497219 kernel: raid6: avx2x4 gen() 16136 MB/s Sep 12 17:38:46.515247 kernel: raid6: avx2x2 gen() 15684 MB/s Sep 12 17:38:46.533657 kernel: raid6: avx2x1 gen() 10888 MB/s Sep 12 17:38:46.533796 kernel: raid6: using algorithm avx2x4 gen() 16136 MB/s Sep 12 17:38:46.552584 kernel: raid6: .... xor() 6205 MB/s, rmw enabled Sep 12 17:38:46.552710 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:38:46.586172 kernel: xor: automatically using best checksumming function avx Sep 12 17:38:46.821165 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:38:46.839447 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:38:46.847475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:46.882921 systemd-udevd[403]: Using default interface naming scheme 'v255'. 
Sep 12 17:38:46.888647 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:46.900409 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:38:46.918258 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Sep 12 17:38:46.961750 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:38:46.970483 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:38:47.037545 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:47.045602 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:38:47.071783 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:38:47.075791 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:38:47.077453 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:47.078799 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:38:47.086401 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:38:47.119810 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:38:47.148318 kernel: scsi host0: Virtio SCSI HBA Sep 12 17:38:47.148458 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:38:47.158181 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Sep 12 17:38:47.172310 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 12 17:38:47.201847 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:38:47.202015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:47.214986 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:38:47.215033 kernel: GPT:9289727 != 125829119 Sep 12 17:38:47.215051 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:38:47.215066 kernel: GPT:9289727 != 125829119 Sep 12 17:38:47.215081 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:38:47.215097 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:38:47.215290 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Sep 12 17:38:47.215545 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:38:47.211881 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:38:47.217261 kernel: AES CTR mode by8 optimization enabled Sep 12 17:38:47.212493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:47.212829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:47.213571 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:47.229375 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Sep 12 17:38:47.227194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:47.239274 kernel: libata version 3.00 loaded. 
Sep 12 17:38:47.263437 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 12 17:38:47.275237 kernel: ACPI: bus type USB registered Sep 12 17:38:47.275302 kernel: usbcore: registered new interface driver usbfs Sep 12 17:38:47.275314 kernel: usbcore: registered new interface driver hub Sep 12 17:38:47.275325 kernel: usbcore: registered new device driver usb Sep 12 17:38:47.312146 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (450) Sep 12 17:38:47.326152 kernel: scsi host1: ata_piix Sep 12 17:38:47.339343 kernel: scsi host2: ata_piix Sep 12 17:38:47.340656 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 12 17:38:47.340724 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 12 17:38:47.347951 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:38:47.368001 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (457) Sep 12 17:38:47.370330 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:47.381673 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:38:47.386274 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 12 17:38:47.386520 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 12 17:38:47.386651 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 12 17:38:47.386778 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Sep 12 17:38:47.389131 kernel: hub 1-0:1.0: USB hub found Sep 12 17:38:47.389395 kernel: hub 1-0:1.0: 2 ports detected Sep 12 17:38:47.395359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:38:47.399389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:38:47.400061 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:38:47.411524 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:38:47.416356 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:38:47.422444 disk-uuid[534]: Primary Header is updated. Sep 12 17:38:47.422444 disk-uuid[534]: Secondary Entries is updated. Sep 12 17:38:47.422444 disk-uuid[534]: Secondary Header is updated. Sep 12 17:38:47.427228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:38:47.433145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:38:47.448743 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:48.441150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:38:48.441244 disk-uuid[535]: The operation has completed successfully. Sep 12 17:38:48.487090 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:38:48.487440 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:38:48.506431 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:38:48.512603 sh[563]: Success Sep 12 17:38:48.531237 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:38:48.600282 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:38:48.603291 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 12 17:38:48.605327 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:38:48.627373 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:38:48.627492 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:48.630033 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:38:48.630143 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:38:48.632155 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:38:48.640657 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:38:48.642490 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:38:48.652549 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:38:48.656556 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:38:48.666531 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:48.666631 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:48.668421 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:38:48.671232 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:38:48.690384 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:48.689962 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:38:48.699087 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:38:48.708546 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:38:48.854251 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:38:48.866481 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:38:48.888659 ignition[646]: Ignition 2.19.0 Sep 12 17:38:48.889791 ignition[646]: Stage: fetch-offline Sep 12 17:38:48.889900 ignition[646]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:48.889919 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:38:48.890184 ignition[646]: parsed url from cmdline: "" Sep 12 17:38:48.890196 ignition[646]: no config URL provided Sep 12 17:38:48.896853 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:38:48.890206 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:38:48.890224 ignition[646]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:38:48.890235 ignition[646]: failed to fetch config: resource requires networking Sep 12 17:38:48.894528 ignition[646]: Ignition finished successfully Sep 12 17:38:48.907731 systemd-networkd[751]: lo: Link UP Sep 12 17:38:48.907744 systemd-networkd[751]: lo: Gained carrier Sep 12 17:38:48.911309 systemd-networkd[751]: Enumeration completed Sep 12 17:38:48.912008 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 12 17:38:48.912023 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 12 17:38:48.912374 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 17:38:48.914722 systemd[1]: Reached target network.target - Network. Sep 12 17:38:48.915079 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:48.915084 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:38:48.916132 systemd-networkd[751]: eth0: Link UP Sep 12 17:38:48.916138 systemd-networkd[751]: eth0: Gained carrier Sep 12 17:38:48.916149 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 12 17:38:48.922986 systemd-networkd[751]: eth1: Link UP Sep 12 17:38:48.922999 systemd-networkd[751]: eth1: Gained carrier Sep 12 17:38:48.923020 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:38:48.923630 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 17:38:48.942003 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253 Sep 12 17:38:48.946360 systemd-networkd[751]: eth0: DHCPv4 address 159.223.204.96/20, gateway 159.223.192.1 acquired from 169.254.169.253 Sep 12 17:38:48.967093 ignition[756]: Ignition 2.19.0 Sep 12 17:38:48.968218 ignition[756]: Stage: fetch Sep 12 17:38:48.968572 ignition[756]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:48.968586 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:38:48.968720 ignition[756]: parsed url from cmdline: "" Sep 12 17:38:48.968724 ignition[756]: no config URL provided Sep 12 17:38:48.968730 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:38:48.968739 ignition[756]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:38:48.968764 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 12 17:38:49.012092 ignition[756]: GET result: OK Sep 12 17:38:49.012353 ignition[756]: parsing config with SHA512: dbd0412a937664230baba7e02daa571a0654cb6ebec66d9a492479496826816b0271bb1b081b930cea281cc422bc4daa57e35779295813652f93514f18e8a5a3 Sep 12 17:38:49.022788 unknown[756]: fetched base config from "system" Sep 12 17:38:49.023437 unknown[756]: fetched base config from "system" Sep 12 17:38:49.023976 ignition[756]: fetch: fetch complete Sep 12 17:38:49.023451 unknown[756]: fetched user config from "digitalocean" Sep 12 17:38:49.023982 ignition[756]: fetch: fetch passed Sep 12 17:38:49.026501 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:38:49.024060 ignition[756]: Ignition finished successfully Sep 12 17:38:49.036547 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:38:49.068972 ignition[763]: Ignition 2.19.0 Sep 12 17:38:49.068993 ignition[763]: Stage: kargs Sep 12 17:38:49.069365 ignition[763]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:49.069382 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:38:49.070929 ignition[763]: kargs: kargs passed Sep 12 17:38:49.071020 ignition[763]: Ignition finished successfully Sep 12 17:38:49.073746 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:38:49.080486 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 17:38:49.100511 ignition[770]: Ignition 2.19.0 Sep 12 17:38:49.101668 ignition[770]: Stage: disks Sep 12 17:38:49.102454 ignition[770]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:49.103190 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:38:49.105768 ignition[770]: disks: disks passed Sep 12 17:38:49.105886 ignition[770]: Ignition finished successfully Sep 12 17:38:49.109349 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:38:49.114449 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:38:49.115431 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:38:49.116970 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:38:49.118495 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:38:49.119730 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:38:49.127547 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:38:49.147488 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:38:49.152030 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:38:49.167595 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:38:49.291301 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:38:49.292378 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:38:49.294527 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:38:49.303479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:38:49.307395 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:38:49.310456 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Sep 12 17:38:49.319161 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (786) Sep 12 17:38:49.324309 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:49.324410 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:49.326187 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:38:49.327483 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:38:49.331269 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:38:49.331325 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:38:49.349345 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:38:49.333480 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:38:49.340432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:38:49.350500 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 17:38:49.435538 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:38:49.448534 coreos-metadata[788]: Sep 12 17:38:49.448 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:38:49.452303 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:38:49.461966 coreos-metadata[789]: Sep 12 17:38:49.461 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:38:49.467288 coreos-metadata[788]: Sep 12 17:38:49.462 INFO Fetch successful Sep 12 17:38:49.469247 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:38:49.474144 coreos-metadata[789]: Sep 12 17:38:49.473 INFO Fetch successful Sep 12 17:38:49.478461 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 12 17:38:49.479231 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Sep 12 17:38:49.483296 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:38:49.486236 coreos-metadata[789]: Sep 12 17:38:49.486 INFO wrote hostname ci-4081.3.6-9-2d91ca838a to /sysroot/etc/hostname Sep 12 17:38:49.487837 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:38:49.618931 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:38:49.624293 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:38:49.626377 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:38:49.642371 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:38:49.646168 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:49.672359 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:38:49.683614 ignition[906]: INFO : Ignition 2.19.0 Sep 12 17:38:49.683614 ignition[906]: INFO : Stage: mount Sep 12 17:38:49.686248 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:49.686248 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:38:49.686248 ignition[906]: INFO : mount: mount passed Sep 12 17:38:49.686248 ignition[906]: INFO : Ignition finished successfully Sep 12 17:38:49.687388 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:38:49.694369 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:38:49.726731 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:38:49.738213 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (918) Sep 12 17:38:49.738317 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:38:49.739341 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:38:49.741416 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:38:49.745162 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:38:49.747767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
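Aside: the coreos-metadata lines show a fetch of the droplet metadata document followed by a hostname write into the chroot. A rough sketch of the same flow, assuming the v1.json document carries a top-level "hostname" key (an assumption about the schema, not taken from the log):

    import json
    import urllib.request

    with urllib.request.urlopen("http://169.254.169.254/metadata/v1.json",
                                timeout=5) as resp:
        meta = json.load(resp)

    hostname = meta["hostname"]  # assumed key name in the metadata schema
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print("wrote hostname", hostname)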
Sep 12 17:38:49.796634 ignition[935]: INFO : Ignition 2.19.0 Sep 12 17:38:49.798978 ignition[935]: INFO : Stage: files Sep 12 17:38:49.798978 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:49.798978 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:38:49.805749 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:38:49.808175 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:38:49.809545 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:38:49.812668 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:38:49.813828 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:38:49.813828 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:38:49.813364 unknown[935]: wrote ssh authorized keys file for user: core Sep 12 17:38:49.817265 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 17:38:49.817265 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 17:38:49.817265 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:38:49.817265 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 17:38:49.864726 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:38:50.070473 systemd-networkd[751]: eth0: Gained IPv6LL Sep 12 17:38:50.836599 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:38:50.836599 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:38:50.841316 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:38:50.902634 systemd-networkd[751]: eth1: Gained IPv6LL Sep 12 17:38:51.291446 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:38:51.641271 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:38:51.641271 ignition[935]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:38:51.644473 ignition[935]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:38:51.644473 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:38:51.644473 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:38:51.644473 ignition[935]: INFO : files: files passed Sep 12 17:38:51.644473 ignition[935]: INFO : Ignition finished successfully Sep 12 17:38:51.645923 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:38:51.654381 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:38:51.658371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:38:51.661677 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 12 17:38:51.661849 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:38:51.692543 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:51.692543 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:51.695352 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:51.697286 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:51.699325 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:38:51.714545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:38:51.749615 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:38:51.749822 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:38:51.751411 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:38:51.752371 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:38:51.753829 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:38:51.762486 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:38:51.782156 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:38:51.789369 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:38:51.814579 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:38:51.815337 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:51.815954 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:38:51.816541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:38:51.816749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:38:51.818357 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:38:51.819289 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:38:51.820334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:38:51.821395 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:38:51.822858 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:38:51.824041 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:38:51.824939 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:38:51.826300 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:38:51.827906 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:38:51.829024 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:38:51.830161 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:38:51.830371 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:38:51.831970 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:51.832975 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:51.834068 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 12 17:38:51.834323 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:51.835645 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:38:51.835811 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:38:51.837279 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:38:51.837421 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:51.838786 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:38:51.838920 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:38:51.839962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:38:51.840093 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:38:51.853990 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:38:51.857416 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:38:51.859481 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:38:51.859715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:51.861844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:38:51.861966 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:38:51.870510 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:38:51.870623 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:38:51.880936 ignition[987]: INFO : Ignition 2.19.0 Sep 12 17:38:51.887621 ignition[987]: INFO : Stage: umount Sep 12 17:38:51.887621 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:51.887621 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:38:51.892145 ignition[987]: INFO : umount: umount passed Sep 12 17:38:51.892145 ignition[987]: INFO : Ignition finished successfully Sep 12 17:38:51.892906 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:38:51.894205 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:38:51.895308 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:38:51.895374 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:38:51.898416 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:38:51.898505 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:38:51.899350 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:38:51.899410 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:38:51.901507 systemd[1]: Stopped target network.target - Network. Sep 12 17:38:51.902054 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:38:51.902144 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:38:51.903584 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:38:51.904175 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:38:51.904275 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:51.905240 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:38:51.905958 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 12 17:38:51.907279 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:38:51.907354 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:38:51.908285 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:38:51.908338 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:38:51.909429 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:38:51.909496 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:38:51.910580 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:38:51.910633 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:38:51.912024 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:38:51.913494 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:38:51.916032 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:38:51.916694 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:38:51.916802 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:38:51.917488 systemd-networkd[751]: eth0: DHCPv6 lease lost Sep 12 17:38:51.918519 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:38:51.918645 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:38:51.921220 systemd-networkd[751]: eth1: DHCPv6 lease lost Sep 12 17:38:51.923933 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:38:51.924096 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:38:51.926024 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:38:51.926401 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:38:51.930072 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:38:51.930500 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:51.938368 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:38:51.938959 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:38:51.939032 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:38:51.939924 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:38:51.939997 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:51.940854 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:38:51.940941 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:51.942423 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:38:51.942489 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:51.944208 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:51.960584 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:38:51.960872 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:51.963788 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:38:51.963964 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:38:51.966481 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:38:51.966616 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Sep 12 17:38:51.968414 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:38:51.968473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:51.969979 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:38:51.970064 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:38:51.972359 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:38:51.972437 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:38:51.973758 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:38:51.973831 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:51.981514 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:38:51.982350 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:38:51.982450 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:38:51.984635 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:38:51.984718 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:38:51.986757 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:38:51.986842 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:51.988558 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:51.988646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:51.999282 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:38:51.999409 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:38:52.000957 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:38:52.009403 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:38:52.021208 systemd[1]: Switching root. Sep 12 17:38:52.067642 systemd-journald[184]: Journal stopped Sep 12 17:38:53.584853 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 12 17:38:53.584969 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:38:53.584994 kernel: SELinux: policy capability open_perms=1 Sep 12 17:38:53.585011 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:38:53.585028 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:38:53.585050 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:38:53.585067 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:38:53.585090 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:38:53.587239 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:38:53.587311 kernel: audit: type=1403 audit(1757698732.366:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:38:53.587340 systemd[1]: Successfully loaded SELinux policy in 52.150ms. Sep 12 17:38:53.587369 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.496ms. 
Sep 12 17:38:53.587391 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:38:53.587411 systemd[1]: Detected virtualization kvm. Sep 12 17:38:53.587431 systemd[1]: Detected architecture x86-64. Sep 12 17:38:53.587449 systemd[1]: Detected first boot. Sep 12 17:38:53.587475 systemd[1]: Hostname set to <ci-4081.3.6-9-2d91ca838a>. Sep 12 17:38:53.587494 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:38:53.587511 zram_generator::config[1046]: No configuration found. Sep 12 17:38:53.587531 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:38:53.587550 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:38:53.587569 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:38:53.587601 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:38:53.587620 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:38:53.587642 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:38:53.587659 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:38:53.587678 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:38:53.587694 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:38:53.587710 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:38:53.587736 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:38:53.587753 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:53.587771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:53.587789 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:38:53.587812 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:38:53.587830 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:38:53.587847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:38:53.587866 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:38:53.587883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:53.587900 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:38:53.587919 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:53.587941 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:38:53.587961 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:38:53.587980 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:38:53.588027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:38:53.588045 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:38:53.588063 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
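Aside: "Initializing machine ID from VM UUID" refers to seeding /etc/machine-id from the hypervisor-provided DMI UUID on first boot. On a KVM guest that value is readable from sysfs; a one-line sketch:

    # Reads the DMI product UUID that systemd uses as the machine-id seed
    # on first boot under KVM (standard sysfs path; may require root).
    print(open("/sys/class/dmi/id/product_uuid").read().strip())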
Sep 12 17:38:53.588081 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:38:53.588098 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:53.588223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:53.588249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:53.588266 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:38:53.588284 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:38:53.588302 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:38:53.588320 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:38:53.588338 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:53.588356 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:38:53.588375 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:38:53.588403 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:38:53.588421 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:38:53.588440 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:53.588457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:38:53.588475 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:38:53.588493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:53.588510 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:38:53.588530 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:53.588547 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:38:53.588568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:53.588588 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:38:53.588607 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 12 17:38:53.588628 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 12 17:38:53.588645 kernel: fuse: init (API version 7.39) Sep 12 17:38:53.588664 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:38:53.588682 kernel: loop: module loaded Sep 12 17:38:53.588712 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:38:53.588733 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:38:53.588751 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:38:53.588768 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:38:53.588788 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:53.593276 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 12 17:38:53.593316 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:38:53.593349 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:38:53.593369 kernel: ACPI: bus type drm_connector registered Sep 12 17:38:53.593438 systemd-journald[1141]: Collecting audit messages is disabled. Sep 12 17:38:53.593513 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:38:53.593535 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:38:53.593556 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:38:53.593577 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:38:53.593596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:53.593615 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:38:53.593636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:38:53.593661 systemd-journald[1141]: Journal started Sep 12 17:38:53.593701 systemd-journald[1141]: Runtime Journal (/run/log/journal/f2b9978b4e4649a88932dd7dcdb70f6e) is 4.9M, max 39.3M, 34.4M free. Sep 12 17:38:53.600227 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:38:53.602897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:53.603716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:53.604936 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:38:53.605500 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:38:53.606710 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:53.607067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:53.608693 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:38:53.608939 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:38:53.610230 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:53.610609 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:53.611908 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:53.613460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:38:53.614948 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:38:53.633016 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:38:53.640310 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:38:53.648258 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:38:53.651335 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:38:53.667444 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:38:53.680598 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:38:53.681528 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:53.686388 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
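Aside: with systemd-journald up, entries like the ones in this log can be read back programmatically. A sketch using the python-systemd bindings (the systemd PyPI package, an extra dependency not shown in the log):

    from systemd import journal

    reader = journal.Reader()
    reader.this_boot()  # restrict to the boot being logged above
    for entry in reader:
        print(entry.get("SYSLOG_IDENTIFIER", "?"), entry.get("MESSAGE", ""))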
Sep 12 17:38:53.689384 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:53.704364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:38:53.717736 systemd-journald[1141]: Time spent on flushing to /var/log/journal/f2b9978b4e4649a88932dd7dcdb70f6e is 43.439ms for 973 entries. Sep 12 17:38:53.717736 systemd-journald[1141]: System Journal (/var/log/journal/f2b9978b4e4649a88932dd7dcdb70f6e) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:38:53.772079 systemd-journald[1141]: Received client request to flush runtime journal. Sep 12 17:38:53.720395 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:38:53.736072 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:38:53.742312 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:38:53.749676 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:38:53.760258 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:38:53.774901 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:38:53.809818 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:53.821375 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:38:53.830380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:53.845102 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Sep 12 17:38:53.845278 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Sep 12 17:38:53.855465 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:38:53.870504 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:38:53.877186 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:38:53.925499 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:38:53.942420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:38:53.975455 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Sep 12 17:38:53.976069 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Sep 12 17:38:53.986820 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:38:54.721131 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:38:54.727530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:54.770192 systemd-udevd[1217]: Using default interface naming scheme 'v255'. Sep 12 17:38:54.800292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:54.808477 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:38:54.833350 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:38:54.909248 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:38:54.909402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:54.917366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:54.920330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:54.931342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:54.933259 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:38:54.933322 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:38:54.933373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:54.933775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:54.933961 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:54.939782 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 12 17:38:54.964936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:54.967285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:54.968272 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:38:54.969678 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:54.971743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:54.976957 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:54.977035 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:55.003145 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1224) Sep 12 17:38:55.129921 systemd-networkd[1220]: lo: Link UP Sep 12 17:38:55.129934 systemd-networkd[1220]: lo: Gained carrier Sep 12 17:38:55.132855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:38:55.137003 systemd-networkd[1220]: Enumeration completed Sep 12 17:38:55.137528 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:38:55.151174 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 12 17:38:55.155459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:38:55.159770 systemd-networkd[1220]: eth0: Configuring with /run/systemd/network/10-9e:1b:e7:d7:44:63.network. Sep 12 17:38:55.160561 systemd-networkd[1220]: eth1: Configuring with /run/systemd/network/10-7e:cc:58:74:be:c9.network. 
Sep 12 17:38:55.161330 systemd-networkd[1220]: eth0: Link UP Sep 12 17:38:55.161339 systemd-networkd[1220]: eth0: Gained carrier Sep 12 17:38:55.165858 systemd-networkd[1220]: eth1: Link UP Sep 12 17:38:55.165876 systemd-networkd[1220]: eth1: Gained carrier Sep 12 17:38:55.197196 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:38:55.221148 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:38:55.223151 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:38:55.257146 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:38:55.281564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:55.301469 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 12 17:38:55.301564 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 12 17:38:55.308140 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:38:55.309834 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 17:38:55.309905 kernel: [drm] features: -context_init Sep 12 17:38:55.315152 kernel: [drm] number of scanouts: 1 Sep 12 17:38:55.315235 kernel: [drm] number of cap sets: 0 Sep 12 17:38:55.319144 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Sep 12 17:38:55.337337 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 12 17:38:55.337435 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:38:55.337520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:55.340445 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:55.343777 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 17:38:55.378153 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:55.451685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:55.452060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:55.460472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:55.526151 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:38:55.545599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:55.554788 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:38:55.567571 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:38:55.587197 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:38:55.623680 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:38:55.624814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:55.631519 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:38:55.640579 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:38:55.673669 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:38:55.675600 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:38:55.686351 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... 
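Aside: networkd matched eth0 and eth1 against MAC-addressed units under /run/systemd/network. A sketch that generates a unit of that naming shape; the [Match]/[Network] body is an assumption about what such generated files typically contain, not the actual droplet config:

    from pathlib import Path

    def write_network_unit(mac: str, directory: str = "/run/systemd/network") -> Path:
        # File name mirrors the 10-<mac>.network pattern in the log.
        unit = Path(directory) / f"10-{mac}.network"
        unit.write_text(f"[Match]\nMACAddress={mac}\n\n[Network]\nDHCP=ipv4\n")
        return unit

    print(write_network_unit("9e:1b:e7:d7:44:63"))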
Sep 12 17:38:55.686510 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:38:55.686563 systemd[1]: Reached target machines.target - Containers. Sep 12 17:38:55.689329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:38:55.709227 kernel: ISO 9660 Extensions: RRIP_1991A Sep 12 17:38:55.709480 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 12 17:38:55.710752 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:38:55.713926 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:38:55.719417 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:38:55.728019 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:38:55.728747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:55.734494 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:38:55.752447 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:38:55.760526 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:38:55.761921 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:38:55.793277 kernel: loop0: detected capacity change from 0 to 140768 Sep 12 17:38:55.802628 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:38:55.811311 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:38:55.833273 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:38:55.857156 kernel: loop1: detected capacity change from 0 to 8 Sep 12 17:38:55.890166 kernel: loop2: detected capacity change from 0 to 142488 Sep 12 17:38:55.945211 kernel: loop3: detected capacity change from 0 to 221472 Sep 12 17:38:55.985805 kernel: loop4: detected capacity change from 0 to 140768 Sep 12 17:38:56.030182 kernel: loop5: detected capacity change from 0 to 8 Sep 12 17:38:56.036407 kernel: loop6: detected capacity change from 0 to 142488 Sep 12 17:38:56.061604 kernel: loop7: detected capacity change from 0 to 221472 Sep 12 17:38:56.075650 (sd-merge)[1314]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Sep 12 17:38:56.076326 (sd-merge)[1314]: Merged extensions into '/usr'. Sep 12 17:38:56.101026 systemd[1]: Reloading requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:38:56.101055 systemd[1]: Reloading... Sep 12 17:38:56.237164 zram_generator::config[1342]: No configuration found. Sep 12 17:38:56.437935 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:38:56.441508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:56.470495 systemd-networkd[1220]: eth1: Gained IPv6LL Sep 12 17:38:56.522191 systemd[1]: Reloading finished in 420 ms. 
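Aside: (sd-merge) reports four sysext images merged into /usr; the kubernetes one was symlinked into /etc/extensions during the files stage earlier. A sketch that lists what systemd-sysext would pick up from that directory:

    from pathlib import Path

    # Flatcar links sysext images under /etc/extensions (see the
    # kubernetes.raw link written in the files stage above).
    for link in sorted(Path("/etc/extensions").glob("*.raw")):
        print(link.name, "->", link.resolve())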
Sep 12 17:38:56.544675 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:38:56.549351 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:38:56.552099 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:38:56.574563 systemd[1]: Starting ensure-sysext.service... Sep 12 17:38:56.585510 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:38:56.595550 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:38:56.595578 systemd[1]: Reloading... Sep 12 17:38:56.636727 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:38:56.637602 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:38:56.638601 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:38:56.639263 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Sep 12 17:38:56.639431 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Sep 12 17:38:56.643565 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:38:56.643730 systemd-tmpfiles[1395]: Skipping /boot Sep 12 17:38:56.659771 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:38:56.659958 systemd-tmpfiles[1395]: Skipping /boot Sep 12 17:38:56.663458 systemd-networkd[1220]: eth0: Gained IPv6LL Sep 12 17:38:56.709161 zram_generator::config[1422]: No configuration found. Sep 12 17:38:56.880764 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:56.956823 systemd[1]: Reloading finished in 360 ms. Sep 12 17:38:56.987155 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:57.014728 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:38:57.022396 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:38:57.034053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:38:57.040379 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:38:57.050417 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:38:57.074717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:57.074914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:57.081439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:57.095497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:57.116892 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:57.121413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 17:38:57.121877 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:57.127873 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:38:57.146629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:57.146885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:57.153933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:57.154233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:57.161757 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:57.163367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:57.180780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:57.182760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:57.195430 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:38:57.204534 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:38:57.206389 augenrules[1508]: No rules Sep 12 17:38:57.213545 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:38:57.219234 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:38:57.237027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:57.237330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:57.247642 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:57.262635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:57.276659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:57.281139 systemd-resolved[1483]: Positive Trust Anchors: Sep 12 17:38:57.281170 systemd-resolved[1483]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:38:57.281224 systemd-resolved[1483]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:38:57.285711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:57.286063 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 12 17:38:57.286263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:57.291567 systemd-resolved[1483]: Using system hostname 'ci-4081.3.6-9-2d91ca838a'. Sep 12 17:38:57.296134 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:38:57.298941 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:38:57.304258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:57.304557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:57.309851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:57.310222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:57.320672 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:57.321298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:57.329391 systemd[1]: Reached target network.target - Network. Sep 12 17:38:57.331015 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:38:57.335809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:38:57.336544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:57.336841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:57.343609 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:57.362612 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:38:57.375261 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:57.377931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:57.378028 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:38:57.378070 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:57.378897 systemd[1]: Finished ensure-sysext.service. Sep 12 17:38:57.385624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:57.385934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:57.389685 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:38:57.389887 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:38:57.393206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:57.393494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:57.399759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:57.399830 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:57.405485 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Sep 12 17:38:57.480554 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:38:57.482549 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:38:57.483981 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:38:57.485123 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:38:57.485705 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:38:57.489149 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:38:57.489225 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:38:57.489796 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:38:57.490709 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:38:57.494165 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:38:57.494887 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:38:57.499953 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:38:57.503783 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:38:57.508524 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:38:57.511561 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:38:57.512254 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:38:57.512781 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:38:57.515768 systemd[1]: System is tainted: cgroupsv1
Sep 12 17:38:57.516267 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:38:57.516312 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:38:57.522321 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:38:57.538418 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:38:58.842151 systemd-resolved[1483]: Clock change detected. Flushing caches.
Sep 12 17:38:58.842236 systemd-timesyncd[1542]: Contacted time server 172.232.15.202:123 (0.flatcar.pool.ntp.org).
Sep 12 17:38:58.842328 systemd-timesyncd[1542]: Initial clock synchronization to Fri 2025-09-12 17:38:58.842033 UTC.
Sep 12 17:38:58.852601 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:38:58.859068 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:38:58.872108 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:38:58.874637 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:38:58.889091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:38:58.897952 jq[1550]: false
Sep 12 17:38:58.901170 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:38:58.899811 dbus-daemon[1549]: [system] SELinux support is enabled
Sep 12 17:38:58.920037 extend-filesystems[1553]: Found loop4
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found loop5
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found loop6
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found loop7
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found vda
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found vda1
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found vda2
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found vda3
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found usr
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found vda4
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found vda6
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found vda7
Sep 12 17:38:58.924605 extend-filesystems[1553]: Found vda9
Sep 12 17:38:58.924605 extend-filesystems[1553]: Checking size of /dev/vda9
Sep 12 17:38:58.923153 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:38:58.944752 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:38:58.964202 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:38:58.970071 coreos-metadata[1547]: Sep 12 17:38:58.970 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:38:58.978099 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:38:58.995228 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:38:58.996075 coreos-metadata[1547]: Sep 12 17:38:58.996 INFO Fetch successful
Sep 12 17:38:58.998006 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:38:59.002307 extend-filesystems[1553]: Resized partition /dev/vda9
Sep 12 17:38:59.011184 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:38:59.022597 extend-filesystems[1577]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:38:59.031888 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 12 17:38:59.032004 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:38:59.035535 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:38:59.076978 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1232)
Sep 12 17:38:59.092705 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:38:59.093136 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:38:59.099169 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:38:59.099647 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:38:59.108828 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:38:59.118628 jq[1581]: true
Sep 12 17:38:59.121744 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:38:59.128330 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:38:59.183232 jq[1595]: true
Sep 12 17:38:59.189666 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:38:59.216888 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 12 17:38:59.223251 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:38:59.262141 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:38:59.272105 tar[1593]: linux-amd64/helm
Sep 12 17:38:59.272400 update_engine[1578]: I20250912 17:38:59.235672 1578 main.cc:92] Flatcar Update Engine starting
Sep 12 17:38:59.272400 update_engine[1578]: I20250912 17:38:59.248261 1578 update_check_scheduler.cc:74] Next update check in 9m8s
Sep 12 17:38:59.264804 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:38:59.265980 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:38:59.266036 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:38:59.267344 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:38:59.267419 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 12 17:38:59.267442 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:38:59.270543 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:38:59.288624 extend-filesystems[1577]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 17:38:59.288624 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 12 17:38:59.288624 extend-filesystems[1577]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 12 17:38:59.278273 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:38:59.312756 extend-filesystems[1553]: Resized filesystem in /dev/vda9
Sep 12 17:38:59.312756 extend-filesystems[1553]: Found vdb
Sep 12 17:38:59.292829 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:38:59.296310 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:38:59.370081 systemd-logind[1576]: New seat seat0.
Sep 12 17:38:59.381795 systemd-logind[1576]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:38:59.382936 systemd-logind[1576]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:38:59.383235 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:38:59.442884 bash[1637]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:38:59.456454 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:38:59.476228 systemd[1]: Starting sshkeys.service...
Sep 12 17:38:59.579708 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:38:59.591619 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:38:59.646467 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:38:59.725106 coreos-metadata[1650]: Sep 12 17:38:59.725 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:38:59.742092 coreos-metadata[1650]: Sep 12 17:38:59.741 INFO Fetch successful
Sep 12 17:38:59.783532 unknown[1650]: wrote ssh authorized keys file for user: core
Sep 12 17:38:59.856050 update-ssh-keys[1659]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:38:59.857451 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 17:38:59.861674 systemd[1]: Finished sshkeys.service.
Sep 12 17:38:59.913484 sshd_keygen[1608]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:39:00.012122 containerd[1597]: time="2025-09-12T17:39:00.009994445Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:39:00.027381 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:39:00.049555 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:39:00.085770 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:39:00.087353 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:39:00.107385 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:39:00.141387 containerd[1597]: time="2025-09-12T17:39:00.141068673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:39:00.145926 containerd[1597]: time="2025-09-12T17:39:00.145825413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:39:00.145926 containerd[1597]: time="2025-09-12T17:39:00.145918314Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:39:00.145926 containerd[1597]: time="2025-09-12T17:39:00.145943879Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:39:00.146508 containerd[1597]: time="2025-09-12T17:39:00.146204582Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:39:00.146508 containerd[1597]: time="2025-09-12T17:39:00.146236779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:39:00.146508 containerd[1597]: time="2025-09-12T17:39:00.146309823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:39:00.146508 containerd[1597]: time="2025-09-12T17:39:00.146323808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:39:00.146680 containerd[1597]: time="2025-09-12T17:39:00.146622999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:39:00.146680 containerd[1597]: time="2025-09-12T17:39:00.146647024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:39:00.146680 containerd[1597]: time="2025-09-12T17:39:00.146661811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:39:00.146680 containerd[1597]: time="2025-09-12T17:39:00.146674623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:39:00.146805 containerd[1597]: time="2025-09-12T17:39:00.146788684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:39:00.148322 containerd[1597]: time="2025-09-12T17:39:00.147101040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:39:00.149187 containerd[1597]: time="2025-09-12T17:39:00.149147051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:39:00.149187 containerd[1597]: time="2025-09-12T17:39:00.149184226Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:39:00.149421 containerd[1597]: time="2025-09-12T17:39:00.149340227Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:39:00.149497 containerd[1597]: time="2025-09-12T17:39:00.149426568Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:39:00.164435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:39:00.177833 containerd[1597]: time="2025-09-12T17:39:00.177746583Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:39:00.178020 containerd[1597]: time="2025-09-12T17:39:00.177957467Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:39:00.178020 containerd[1597]: time="2025-09-12T17:39:00.177983557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:39:00.178020 containerd[1597]: time="2025-09-12T17:39:00.178002543Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:39:00.178148 containerd[1597]: time="2025-09-12T17:39:00.178035062Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:39:00.180460 containerd[1597]: time="2025-09-12T17:39:00.180380102Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:39:00.181922 containerd[1597]: time="2025-09-12T17:39:00.181370871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:39:00.181922 containerd[1597]: time="2025-09-12T17:39:00.181686252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:39:00.181922 containerd[1597]: time="2025-09-12T17:39:00.181715842Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:39:00.181922 containerd[1597]: time="2025-09-12T17:39:00.181757977Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:39:00.181922 containerd[1597]: time="2025-09-12T17:39:00.181797634Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:39:00.182499 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.186218369Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.188428503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.188495635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.188762341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.188804014Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.188840011Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.190396350Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.190512283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.190562663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.190628461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.190666208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.190710132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.190733497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.191892 containerd[1597]: time="2025-09-12T17:39:00.190753138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.192456 containerd[1597]: time="2025-09-12T17:39:00.190791050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.192456 containerd[1597]: time="2025-09-12T17:39:00.190824301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.192456 containerd[1597]: time="2025-09-12T17:39:00.190943588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.192456 containerd[1597]: time="2025-09-12T17:39:00.190988268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.192456 containerd[1597]: time="2025-09-12T17:39:00.191007708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.192456 containerd[1597]: time="2025-09-12T17:39:00.191047330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.192456 containerd[1597]: time="2025-09-12T17:39:00.191076731Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:39:00.192989 containerd[1597]: time="2025-09-12T17:39:00.191614483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.193310 containerd[1597]: time="2025-09-12T17:39:00.193097845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.193310 containerd[1597]: time="2025-09-12T17:39:00.193123971Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:39:00.193310 containerd[1597]: time="2025-09-12T17:39:00.193269880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:39:00.193533 containerd[1597]: time="2025-09-12T17:39:00.193300373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:39:00.193533 containerd[1597]: time="2025-09-12T17:39:00.193432849Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:39:00.193533 containerd[1597]: time="2025-09-12T17:39:00.193448347Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:39:00.193533 containerd[1597]: time="2025-09-12T17:39:00.193459437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.193533 containerd[1597]: time="2025-09-12T17:39:00.193491995Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:39:00.193533 containerd[1597]: time="2025-09-12T17:39:00.193504996Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:39:00.193533 containerd[1597]: time="2025-09-12T17:39:00.193516040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:39:00.197349 containerd[1597]: time="2025-09-12T17:39:00.195571072Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 12 17:39:00.197349 containerd[1597]: time="2025-09-12T17:39:00.195990303Z" level=info msg="Connect containerd service"
Sep 12 17:39:00.197349 containerd[1597]: time="2025-09-12T17:39:00.196089787Z" level=info msg="using legacy CRI server"
Sep 12 17:39:00.197349 containerd[1597]: time="2025-09-12T17:39:00.196115598Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:39:00.197968 containerd[1597]: time="2025-09-12T17:39:00.197919043Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 12 17:39:00.198955 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:39:00.201549 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:39:00.204065 containerd[1597]: time="2025-09-12T17:39:00.203950335Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:39:00.204473 containerd[1597]: time="2025-09-12T17:39:00.204365656Z" level=info msg="Start subscribing containerd event"
Sep 12 17:39:00.204473 containerd[1597]: time="2025-09-12T17:39:00.204451900Z" level=info msg="Start recovering state"
Sep 12 17:39:00.204632 containerd[1597]: time="2025-09-12T17:39:00.204576968Z" level=info msg="Start event monitor"
Sep 12 17:39:00.204632 containerd[1597]: time="2025-09-12T17:39:00.204598224Z" level=info msg="Start snapshots syncer"
Sep 12 17:39:00.204632 containerd[1597]: time="2025-09-12T17:39:00.204609951Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:39:00.204632 containerd[1597]: time="2025-09-12T17:39:00.204622201Z" level=info msg="Start streaming server"
Sep 12 17:39:00.210737 containerd[1597]: time="2025-09-12T17:39:00.208439114Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:39:00.210737 containerd[1597]: time="2025-09-12T17:39:00.208559341Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:39:00.211632 containerd[1597]: time="2025-09-12T17:39:00.210936346Z" level=info msg="containerd successfully booted in 0.207387s"
Sep 12 17:39:00.219731 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:39:00.442494 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:39:00.455443 systemd[1]: Started sshd@0-159.223.204.96:22-147.75.109.163:44208.service - OpenSSH per-connection server daemon (147.75.109.163:44208).
Sep 12 17:39:00.643743 tar[1593]: linux-amd64/LICENSE
Sep 12 17:39:00.644347 tar[1593]: linux-amd64/README.md
Sep 12 17:39:00.644388 sshd[1690]: Accepted publickey for core from 147.75.109.163 port 44208 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:00.651153 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:00.684631 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:39:00.709438 systemd-logind[1576]: New session 1 of user core.
Sep 12 17:39:00.710509 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:39:00.724442 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:39:00.776505 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:39:00.787661 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:39:00.812413 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:39:00.970919 systemd[1701]: Queued start job for default target default.target.
Sep 12 17:39:00.975126 systemd[1701]: Created slice app.slice - User Application Slice.
Sep 12 17:39:00.975179 systemd[1701]: Reached target paths.target - Paths.
Sep 12 17:39:00.975204 systemd[1701]: Reached target timers.target - Timers.
Sep 12 17:39:00.987108 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:39:01.003581 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:39:01.007702 systemd[1701]: Reached target sockets.target - Sockets.
Sep 12 17:39:01.007803 systemd[1701]: Reached target basic.target - Basic System.
Sep 12 17:39:01.007941 systemd[1701]: Reached target default.target - Main User Target.
Sep 12 17:39:01.007988 systemd[1701]: Startup finished in 185ms.
Sep 12 17:39:01.008255 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:39:01.039144 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:39:01.126403 systemd[1]: Started sshd@1-159.223.204.96:22-147.75.109.163:44214.service - OpenSSH per-connection server daemon (147.75.109.163:44214).
Sep 12 17:39:01.223998 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 44214 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:01.228429 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:01.241194 systemd-logind[1576]: New session 2 of user core.
Sep 12 17:39:01.257655 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:39:01.344511 sshd[1713]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:01.356390 systemd[1]: Started sshd@2-159.223.204.96:22-147.75.109.163:44222.service - OpenSSH per-connection server daemon (147.75.109.163:44222).
Sep 12 17:39:01.369553 systemd[1]: sshd@1-159.223.204.96:22-147.75.109.163:44214.service: Deactivated successfully.
Sep 12 17:39:01.382474 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 17:39:01.388128 systemd-logind[1576]: Session 2 logged out. Waiting for processes to exit.
Sep 12 17:39:01.397179 systemd-logind[1576]: Removed session 2.
Sep 12 17:39:01.467710 sshd[1718]: Accepted publickey for core from 147.75.109.163 port 44222 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:01.477701 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:01.492202 systemd-logind[1576]: New session 3 of user core.
Sep 12 17:39:01.503247 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:39:01.648312 sshd[1718]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:01.669675 systemd[1]: sshd@2-159.223.204.96:22-147.75.109.163:44222.service: Deactivated successfully.
Sep 12 17:39:01.695159 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 17:39:01.702697 systemd-logind[1576]: Session 3 logged out. Waiting for processes to exit.
Sep 12 17:39:01.712247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:01.724136 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:39:01.736589 systemd[1]: Startup finished in 7.815s (kernel) + 8.125s (userspace) = 15.940s.
Sep 12 17:39:01.741363 systemd-logind[1576]: Removed session 3.
Sep 12 17:39:01.756574 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:39:02.857691 kubelet[1737]: E0912 17:39:02.857504 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:39:02.865715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:39:02.866769 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:39:11.651680 systemd[1]: Started sshd@3-159.223.204.96:22-147.75.109.163:38350.service - OpenSSH per-connection server daemon (147.75.109.163:38350).
Sep 12 17:39:11.699958 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 38350 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:11.702080 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:11.708155 systemd-logind[1576]: New session 4 of user core.
Sep 12 17:39:11.715455 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 17:39:11.781167 sshd[1751]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:11.788279 systemd[1]: Started sshd@4-159.223.204.96:22-147.75.109.163:38364.service - OpenSSH per-connection server daemon (147.75.109.163:38364).
Sep 12 17:39:11.789573 systemd[1]: sshd@3-159.223.204.96:22-147.75.109.163:38350.service: Deactivated successfully.
Sep 12 17:39:11.796138 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 17:39:11.797088 systemd-logind[1576]: Session 4 logged out. Waiting for processes to exit.
Sep 12 17:39:11.800824 systemd-logind[1576]: Removed session 4.
Sep 12 17:39:11.845280 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 38364 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:11.847553 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:11.855803 systemd-logind[1576]: New session 5 of user core.
Sep 12 17:39:11.861533 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 17:39:11.922172 sshd[1756]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:11.933445 systemd[1]: Started sshd@5-159.223.204.96:22-147.75.109.163:38370.service - OpenSSH per-connection server daemon (147.75.109.163:38370).
Sep 12 17:39:11.935624 systemd[1]: sshd@4-159.223.204.96:22-147.75.109.163:38364.service: Deactivated successfully.
Sep 12 17:39:11.940177 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 17:39:11.941968 systemd-logind[1576]: Session 5 logged out. Waiting for processes to exit.
Sep 12 17:39:11.945484 systemd-logind[1576]: Removed session 5.
Sep 12 17:39:11.996122 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 38370 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:11.998583 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:12.005108 systemd-logind[1576]: New session 6 of user core.
Sep 12 17:39:12.013532 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:39:12.083532 sshd[1764]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:12.093374 systemd[1]: Started sshd@6-159.223.204.96:22-147.75.109.163:38378.service - OpenSSH per-connection server daemon (147.75.109.163:38378).
Sep 12 17:39:12.094055 systemd[1]: sshd@5-159.223.204.96:22-147.75.109.163:38370.service: Deactivated successfully.
Sep 12 17:39:12.097079 systemd-logind[1576]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:39:12.105390 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:39:12.106821 systemd-logind[1576]: Removed session 6.
Sep 12 17:39:12.155207 sshd[1772]: Accepted publickey for core from 147.75.109.163 port 38378 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:12.157092 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:12.163873 systemd-logind[1576]: New session 7 of user core.
Sep 12 17:39:12.171475 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:39:12.248762 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:39:12.250007 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:39:12.271134 sudo[1779]: pam_unix(sudo:session): session closed for user root
Sep 12 17:39:12.275097 sshd[1772]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:12.286367 systemd[1]: Started sshd@7-159.223.204.96:22-147.75.109.163:38390.service - OpenSSH per-connection server daemon (147.75.109.163:38390).
Sep 12 17:39:12.286937 systemd[1]: sshd@6-159.223.204.96:22-147.75.109.163:38378.service: Deactivated successfully.
Sep 12 17:39:12.291224 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:39:12.292715 systemd-logind[1576]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:39:12.296978 systemd-logind[1576]: Removed session 7.
Sep 12 17:39:12.340142 sshd[1781]: Accepted publickey for core from 147.75.109.163 port 38390 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:12.342585 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:12.348116 systemd-logind[1576]: New session 8 of user core.
Sep 12 17:39:12.356439 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:39:12.422697 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:39:12.423052 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:39:12.428757 sudo[1789]: pam_unix(sudo:session): session closed for user root
Sep 12 17:39:12.436191 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 17:39:12.436510 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:39:12.458324 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 17:39:12.460917 auditctl[1792]: No rules
Sep 12 17:39:12.461405 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:39:12.461802 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 17:39:12.467402 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:39:12.512753 augenrules[1811]: No rules
Sep 12 17:39:12.512492 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:39:12.515126 sudo[1788]: pam_unix(sudo:session): session closed for user root
Sep 12 17:39:12.519511 sshd[1781]: pam_unix(sshd:session): session closed for user core
Sep 12 17:39:12.530225 systemd[1]: Started sshd@8-159.223.204.96:22-147.75.109.163:38402.service - OpenSSH per-connection server daemon (147.75.109.163:38402).
Sep 12 17:39:12.530795 systemd[1]: sshd@7-159.223.204.96:22-147.75.109.163:38390.service: Deactivated successfully.
Sep 12 17:39:12.537549 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:39:12.538876 systemd-logind[1576]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:39:12.541813 systemd-logind[1576]: Removed session 8.
Sep 12 17:39:12.582816 sshd[1817]: Accepted publickey for core from 147.75.109.163 port 38402 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:39:12.585042 sshd[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:39:12.592199 systemd-logind[1576]: New session 9 of user core.
Sep 12 17:39:12.599419 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:39:12.663622 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:39:12.664000 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:39:13.116762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:39:13.129100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:13.267260 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:39:13.269125 (dockerd)[1844]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:39:13.426290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:13.426791 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:39:13.491887 kubelet[1852]: E0912 17:39:13.490666 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:39:13.498102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:39:13.498313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:39:13.881759 dockerd[1844]: time="2025-09-12T17:39:13.881186843Z" level=info msg="Starting up"
Sep 12 17:39:14.039726 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport607787234-merged.mount: Deactivated successfully.
Sep 12 17:39:14.134995 dockerd[1844]: time="2025-09-12T17:39:14.134718433Z" level=info msg="Loading containers: start."
Sep 12 17:39:14.296077 kernel: Initializing XFRM netlink socket
Sep 12 17:39:14.411708 systemd-networkd[1220]: docker0: Link UP
Sep 12 17:39:14.438373 dockerd[1844]: time="2025-09-12T17:39:14.438278680Z" level=info msg="Loading containers: done."
Sep 12 17:39:14.471080 dockerd[1844]: time="2025-09-12T17:39:14.470955959Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:39:14.471313 dockerd[1844]: time="2025-09-12T17:39:14.471125028Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 17:39:14.471313 dockerd[1844]: time="2025-09-12T17:39:14.471292119Z" level=info msg="Daemon has completed initialization"
Sep 12 17:39:14.526972 dockerd[1844]: time="2025-09-12T17:39:14.526563669Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:39:14.527144 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:39:15.585198 containerd[1597]: time="2025-09-12T17:39:15.585131237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 12 17:39:16.383735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2192365975.mount: Deactivated successfully.
Sep 12 17:39:17.930267 containerd[1597]: time="2025-09-12T17:39:17.930191879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:17.931807 containerd[1597]: time="2025-09-12T17:39:17.931744155Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124"
Sep 12 17:39:17.932879 containerd[1597]: time="2025-09-12T17:39:17.932465055Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:17.936461 containerd[1597]: time="2025-09-12T17:39:17.936412107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:17.938641 containerd[1597]: time="2025-09-12T17:39:17.938251714Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.353052128s"
Sep 12 17:39:17.938641 containerd[1597]: time="2025-09-12T17:39:17.938316105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 12 17:39:17.939527 containerd[1597]: time="2025-09-12T17:39:17.939497626Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 12 17:39:19.634887 containerd[1597]: time="2025-09-12T17:39:19.633308866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:19.634887 containerd[1597]: time="2025-09-12T17:39:19.634813120Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632"
Sep 12 17:39:19.636245 containerd[1597]: time="2025-09-12T17:39:19.636192435Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:19.640121 containerd[1597]: time="2025-09-12T17:39:19.640059096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:19.641502 containerd[1597]: time="2025-09-12T17:39:19.641455279Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.701814078s"
Sep 12 17:39:19.641773 containerd[1597]: time="2025-09-12T17:39:19.641749216Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 12 17:39:19.643668 containerd[1597]: time="2025-09-12T17:39:19.643610962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 17:39:21.041500 containerd[1597]: time="2025-09-12T17:39:21.041394628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:21.043277 containerd[1597]: time="2025-09-12T17:39:21.043004044Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698"
Sep 12 17:39:21.044890 containerd[1597]: time="2025-09-12T17:39:21.044174917Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:21.047626 containerd[1597]: time="2025-09-12T17:39:21.047560593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:21.049101 containerd[1597]: time="2025-09-12T17:39:21.048905782Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.40524118s"
Sep 12 17:39:21.049101 containerd[1597]: time="2025-09-12T17:39:21.048952808Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 12 17:39:21.049552 containerd[1597]: time="2025-09-12T17:39:21.049474852Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 17:39:22.471638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount743824957.mount: Deactivated successfully.
Sep 12 17:39:23.302676 containerd[1597]: time="2025-09-12T17:39:23.302560453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:23.304197 containerd[1597]: time="2025-09-12T17:39:23.304129158Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252"
Sep 12 17:39:23.306885 containerd[1597]: time="2025-09-12T17:39:23.305411265Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:23.307928 containerd[1597]: time="2025-09-12T17:39:23.307885918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:23.309146 containerd[1597]: time="2025-09-12T17:39:23.309080113Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.259567512s"
Sep 12 17:39:23.309361 containerd[1597]: time="2025-09-12T17:39:23.309312391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 12 17:39:23.310208 containerd[1597]: time="2025-09-12T17:39:23.310181280Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:39:23.312135 systemd-resolved[1483]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Sep 12 17:39:23.739903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:39:23.748241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:23.989178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:23.999702 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:39:24.009471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173418799.mount: Deactivated successfully.
Sep 12 17:39:24.085090 kubelet[2099]: E0912 17:39:24.084972 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:39:24.091776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:39:24.092150 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:39:25.335003 containerd[1597]: time="2025-09-12T17:39:25.334929523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:25.337813 containerd[1597]: time="2025-09-12T17:39:25.337701651Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 17:39:25.339079 containerd[1597]: time="2025-09-12T17:39:25.339011591Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:25.343999 containerd[1597]: time="2025-09-12T17:39:25.343882922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:25.345287 containerd[1597]: time="2025-09-12T17:39:25.344964018Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.034742042s"
Sep 12 17:39:25.345287 containerd[1597]: time="2025-09-12T17:39:25.345023912Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:39:25.347300 containerd[1597]: time="2025-09-12T17:39:25.346887075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:39:25.888451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328240990.mount: Deactivated successfully.
Sep 12 17:39:25.897110 containerd[1597]: time="2025-09-12T17:39:25.897023930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:25.898965 containerd[1597]: time="2025-09-12T17:39:25.898684221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:39:25.898965 containerd[1597]: time="2025-09-12T17:39:25.898906129Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:25.902555 containerd[1597]: time="2025-09-12T17:39:25.902476455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:25.904432 containerd[1597]: time="2025-09-12T17:39:25.904207248Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 557.276907ms"
Sep 12 17:39:25.904432 containerd[1597]: time="2025-09-12T17:39:25.904273584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:39:25.905840 containerd[1597]: time="2025-09-12T17:39:25.905786889Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 17:39:26.373241 systemd-resolved[1483]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Sep 12 17:39:26.429207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486561447.mount: Deactivated successfully.
Sep 12 17:39:28.813034 containerd[1597]: time="2025-09-12T17:39:28.812962368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:28.814022 containerd[1597]: time="2025-09-12T17:39:28.813964242Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:39:28.815692 containerd[1597]: time="2025-09-12T17:39:28.815648379Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:28.823890 containerd[1597]: time="2025-09-12T17:39:28.821844546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:28.824306 containerd[1597]: time="2025-09-12T17:39:28.824253807Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.918402341s" Sep 12 17:39:28.824413 containerd[1597]: time="2025-09-12T17:39:28.824397269Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:39:32.013654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:32.025337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:32.068897 systemd[1]: Reloading requested from client PID 2242 ('systemctl') (unit session-9.scope)... Sep 12 17:39:32.068922 systemd[1]: Reloading... Sep 12 17:39:32.190890 zram_generator::config[2277]: No configuration found. Sep 12 17:39:32.392206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:39:32.494009 systemd[1]: Reloading finished in 424 ms. Sep 12 17:39:32.560014 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:39:32.560176 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:39:32.560659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:32.571405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:32.737103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:32.752572 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:39:32.811122 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:32.811122 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:39:32.811122 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:32.811644 kubelet[2347]: I0912 17:39:32.811168 2347 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:39:33.511336 kubelet[2347]: I0912 17:39:33.511019 2347 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:39:33.511336 kubelet[2347]: I0912 17:39:33.511315 2347 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:39:33.511609 kubelet[2347]: I0912 17:39:33.511587 2347 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:39:33.541139 kubelet[2347]: I0912 17:39:33.540557 2347 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:39:33.542327 kubelet[2347]: E0912 17:39:33.542277 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://159.223.204.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:33.551961 kubelet[2347]: E0912 17:39:33.551902 2347 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:39:33.551961 kubelet[2347]: I0912 17:39:33.551943 2347 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:39:33.557311 kubelet[2347]: I0912 17:39:33.557229 2347 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:39:33.557761 kubelet[2347]: I0912 17:39:33.557725 2347 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:39:33.557925 kubelet[2347]: I0912 17:39:33.557877 2347 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:39:33.558183 kubelet[2347]: I0912 17:39:33.557925 2347 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-9-2d91ca838a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:39:33.558183 kubelet[2347]: I0912 17:39:33.558185 2347 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:39:33.558367 kubelet[2347]: I0912 17:39:33.558205 2347 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:39:33.558367 kubelet[2347]: I0912 17:39:33.558357 2347 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:33.561366 kubelet[2347]: I0912 17:39:33.561267 2347 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:39:33.561366 kubelet[2347]: I0912 17:39:33.561329 2347 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:39:33.561366 kubelet[2347]: I0912 17:39:33.561382 2347 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:39:33.561711 kubelet[2347]: I0912 17:39:33.561420 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:39:33.571887 kubelet[2347]: W0912 17:39:33.571249 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.204.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-9-2d91ca838a&limit=500&resourceVersion=0": dial tcp 159.223.204.96:6443: connect: connection refused Sep 12 17:39:33.571887 kubelet[2347]: E0912 17:39:33.571334 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://159.223.204.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-9-2d91ca838a&limit=500&resourceVersion=0\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:33.572906 kubelet[2347]: W0912 17:39:33.572810 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.204.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.223.204.96:6443: connect: connection refused Sep 12 17:39:33.573013 kubelet[2347]: E0912 17:39:33.572923 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.223.204.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:33.573118 kubelet[2347]: I0912 17:39:33.573096 2347 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:39:33.576562 kubelet[2347]: I0912 17:39:33.576505 2347 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:39:33.576697 kubelet[2347]: W0912 17:39:33.576603 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:39:33.578619 kubelet[2347]: I0912 17:39:33.578220 2347 server.go:1274] "Started kubelet" Sep 12 17:39:33.588554 kubelet[2347]: E0912 17:39:33.585100 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.223.204.96:6443/api/v1/namespaces/default/events\": dial tcp 159.223.204.96:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-9-2d91ca838a.186499b8cd447b97 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-9-2d91ca838a,UID:ci-4081.3.6-9-2d91ca838a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-9-2d91ca838a,},FirstTimestamp:2025-09-12 17:39:33.578185623 +0000 UTC m=+0.820969239,LastTimestamp:2025-09-12 17:39:33.578185623 +0000 UTC m=+0.820969239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-9-2d91ca838a,}" Sep 12 17:39:33.588554 kubelet[2347]: I0912 17:39:33.586622 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:39:33.588554 kubelet[2347]: I0912 17:39:33.586938 2347 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:39:33.588554 kubelet[2347]: I0912 17:39:33.587014 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:39:33.588554 kubelet[2347]: I0912 17:39:33.588179 2347 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:39:33.590738 kubelet[2347]: I0912 17:39:33.590715 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:39:33.593095 kubelet[2347]: I0912 17:39:33.593061 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:39:33.597663 kubelet[2347]: I0912 
17:39:33.597620 2347 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:39:33.597832 kubelet[2347]: I0912 17:39:33.597807 2347 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:39:33.597919 kubelet[2347]: I0912 17:39:33.597906 2347 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:39:33.599076 kubelet[2347]: W0912 17:39:33.598661 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.204.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.204.96:6443: connect: connection refused Sep 12 17:39:33.599076 kubelet[2347]: E0912 17:39:33.598722 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.223.204.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:33.599076 kubelet[2347]: I0912 17:39:33.598996 2347 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:39:33.599076 kubelet[2347]: I0912 17:39:33.599066 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:39:33.600439 kubelet[2347]: E0912 17:39:33.600107 2347 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-9-2d91ca838a\" not found" Sep 12 17:39:33.600547 kubelet[2347]: E0912 17:39:33.600483 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.204.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-9-2d91ca838a?timeout=10s\": dial tcp 159.223.204.96:6443: connect: connection refused" interval="200ms" Sep 12 17:39:33.600941 kubelet[2347]: E0912 17:39:33.600709 2347 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:39:33.601195 kubelet[2347]: I0912 17:39:33.601165 2347 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:39:33.625203 kubelet[2347]: I0912 17:39:33.625146 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:39:33.627686 kubelet[2347]: I0912 17:39:33.627627 2347 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:39:33.627686 kubelet[2347]: I0912 17:39:33.627668 2347 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:39:33.627976 kubelet[2347]: I0912 17:39:33.627703 2347 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:39:33.627976 kubelet[2347]: E0912 17:39:33.627753 2347 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:39:33.646032 kubelet[2347]: W0912 17:39:33.645786 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.204.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.204.96:6443: connect: connection refused Sep 12 17:39:33.646032 kubelet[2347]: E0912 17:39:33.645933 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.223.204.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:33.651311 kubelet[2347]: I0912 17:39:33.651062 2347 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:39:33.651311 kubelet[2347]: I0912 17:39:33.651091 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:39:33.651311 kubelet[2347]: I0912 17:39:33.651144 2347 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:33.656358 kubelet[2347]: I0912 17:39:33.656138 2347 policy_none.go:49] "None policy: Start" Sep 12 17:39:33.657752 kubelet[2347]: I0912 17:39:33.657699 2347 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:39:33.658124 kubelet[2347]: I0912 17:39:33.657988 2347 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:39:33.670045 kubelet[2347]: I0912 17:39:33.669982 2347 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:39:33.670330 kubelet[2347]: I0912 17:39:33.670301 2347 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:39:33.670402 kubelet[2347]: I0912 17:39:33.670337 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:39:33.675048 kubelet[2347]: I0912 17:39:33.674904 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:39:33.678914 kubelet[2347]: E0912 17:39:33.677918 2347 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-9-2d91ca838a\" not found" Sep 12 17:39:33.772616 kubelet[2347]: I0912 17:39:33.772461 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.774054 kubelet[2347]: E0912 17:39:33.773996 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.223.204.96:6443/api/v1/nodes\": dial tcp 159.223.204.96:6443: connect: connection refused" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.801956 kubelet[2347]: E0912 17:39:33.801892 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.204.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-9-2d91ca838a?timeout=10s\": dial tcp 159.223.204.96:6443: connect: connection refused" interval="400ms" Sep 12 
17:39:33.898397 kubelet[2347]: I0912 17:39:33.898315 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7f23094a9674a24a9056cda09016f47-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-9-2d91ca838a\" (UID: \"b7f23094a9674a24a9056cda09016f47\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.898397 kubelet[2347]: I0912 17:39:33.898387 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.899002 kubelet[2347]: I0912 17:39:33.898424 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.899002 kubelet[2347]: I0912 17:39:33.898453 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.899002 kubelet[2347]: I0912 17:39:33.898481 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7f23094a9674a24a9056cda09016f47-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-9-2d91ca838a\" (UID: \"b7f23094a9674a24a9056cda09016f47\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.899002 kubelet[2347]: I0912 17:39:33.898511 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7f23094a9674a24a9056cda09016f47-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-9-2d91ca838a\" (UID: \"b7f23094a9674a24a9056cda09016f47\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.899002 kubelet[2347]: I0912 17:39:33.898538 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.899161 kubelet[2347]: I0912 17:39:33.898566 2347 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.899161 kubelet[2347]: I0912 17:39:33.898594 2347 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/530ae35b2dcd74f339d833ccca65e0c0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-9-2d91ca838a\" (UID: \"530ae35b2dcd74f339d833ccca65e0c0\") " pod="kube-system/kube-scheduler-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.976018 kubelet[2347]: I0912 17:39:33.975885 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:33.976513 kubelet[2347]: E0912 17:39:33.976477 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.223.204.96:6443/api/v1/nodes\": dial tcp 159.223.204.96:6443: connect: connection refused" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:34.037080 kubelet[2347]: E0912 17:39:34.036392 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:34.038725 kubelet[2347]: E0912 17:39:34.037222 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:34.038894 containerd[1597]: time="2025-09-12T17:39:34.038393641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-9-2d91ca838a,Uid:530ae35b2dcd74f339d833ccca65e0c0,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:34.039977 kubelet[2347]: E0912 17:39:34.039680 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:34.042906 containerd[1597]: time="2025-09-12T17:39:34.042726017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-9-2d91ca838a,Uid:542007396257481dada16eadafe59668,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:34.043768 containerd[1597]: time="2025-09-12T17:39:34.042737331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-9-2d91ca838a,Uid:b7f23094a9674a24a9056cda09016f47,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:34.046401 systemd-resolved[1483]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Sep 12 17:39:34.203391 kubelet[2347]: E0912 17:39:34.203317 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.204.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-9-2d91ca838a?timeout=10s\": dial tcp 159.223.204.96:6443: connect: connection refused" interval="800ms" Sep 12 17:39:34.379316 kubelet[2347]: I0912 17:39:34.379024 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:34.379494 kubelet[2347]: E0912 17:39:34.379450 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.223.204.96:6443/api/v1/nodes\": dial tcp 159.223.204.96:6443: connect: connection refused" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:34.536434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494165524.mount: Deactivated successfully. 
Sep 12 17:39:34.548745 containerd[1597]: time="2025-09-12T17:39:34.548661349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:34.550091 containerd[1597]: time="2025-09-12T17:39:34.550012510Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:34.551321 containerd[1597]: time="2025-09-12T17:39:34.551160041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:39:34.551321 containerd[1597]: time="2025-09-12T17:39:34.551267432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:39:34.552641 containerd[1597]: time="2025-09-12T17:39:34.552587040Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:34.553275 containerd[1597]: time="2025-09-12T17:39:34.553176354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:39:34.555812 containerd[1597]: time="2025-09-12T17:39:34.555758696Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:34.558301 containerd[1597]: time="2025-09-12T17:39:34.558108502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.612946ms" Sep 12 17:39:34.560183 containerd[1597]: time="2025-09-12T17:39:34.559756403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:34.562764 containerd[1597]: time="2025-09-12T17:39:34.562660645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 518.934362ms" Sep 12 17:39:34.566145 containerd[1597]: time="2025-09-12T17:39:34.566073372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.217679ms" Sep 12 17:39:34.622923 kubelet[2347]: W0912 17:39:34.618816 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.223.204.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-9-2d91ca838a&limit=500&resourceVersion=0": dial tcp 159.223.204.96:6443: connect: connection refused 
Sep 12 17:39:34.622923 kubelet[2347]: E0912 17:39:34.618919 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.223.204.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-9-2d91ca838a&limit=500&resourceVersion=0\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:34.764981 containerd[1597]: time="2025-09-12T17:39:34.762212461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:34.764981 containerd[1597]: time="2025-09-12T17:39:34.762305665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:34.764981 containerd[1597]: time="2025-09-12T17:39:34.762332620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:34.768006 containerd[1597]: time="2025-09-12T17:39:34.762485255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:34.786729 containerd[1597]: time="2025-09-12T17:39:34.786318274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:34.786729 containerd[1597]: time="2025-09-12T17:39:34.786422031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:34.786729 containerd[1597]: time="2025-09-12T17:39:34.786447168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:34.786729 containerd[1597]: time="2025-09-12T17:39:34.786597281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:34.799005 containerd[1597]: time="2025-09-12T17:39:34.798237532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:34.799005 containerd[1597]: time="2025-09-12T17:39:34.798327925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:34.799005 containerd[1597]: time="2025-09-12T17:39:34.798339792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:34.799821 containerd[1597]: time="2025-09-12T17:39:34.798713807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:34.835412 kubelet[2347]: W0912 17:39:34.834051 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.223.204.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.223.204.96:6443: connect: connection refused Sep 12 17:39:34.835412 kubelet[2347]: E0912 17:39:34.834163 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.223.204.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:34.891897 kubelet[2347]: W0912 17:39:34.891395 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.223.204.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.223.204.96:6443: connect: connection refused Sep 12 17:39:34.891897 kubelet[2347]: E0912 17:39:34.891708 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.223.204.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:34.915886 containerd[1597]: time="2025-09-12T17:39:34.915742902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-9-2d91ca838a,Uid:542007396257481dada16eadafe59668,Namespace:kube-system,Attempt:0,} returns sandbox id \"b76313ade7b3cc497a7846ee998b1f352ca7b6af3208d2f8c9b4a1b00bc97c6b\"" Sep 12 17:39:34.926072 kubelet[2347]: E0912 17:39:34.925283 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:34.929194 containerd[1597]: time="2025-09-12T17:39:34.928950891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-9-2d91ca838a,Uid:b7f23094a9674a24a9056cda09016f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a005ec0c52a020d9067df57a8637c2310c81c6c1c036d2e2086cf6278fe9806\"" Sep 12 17:39:34.931331 kubelet[2347]: E0912 17:39:34.931286 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:34.934204 containerd[1597]: time="2025-09-12T17:39:34.933766593Z" level=info msg="CreateContainer within sandbox \"b76313ade7b3cc497a7846ee998b1f352ca7b6af3208d2f8c9b4a1b00bc97c6b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:39:34.936942 containerd[1597]: time="2025-09-12T17:39:34.936700642Z" level=info msg="CreateContainer within sandbox \"9a005ec0c52a020d9067df57a8637c2310c81c6c1c036d2e2086cf6278fe9806\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:39:34.962741 containerd[1597]: time="2025-09-12T17:39:34.962669953Z" level=info msg="CreateContainer within sandbox \"b76313ade7b3cc497a7846ee998b1f352ca7b6af3208d2f8c9b4a1b00bc97c6b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"33b16e7e4f8185bb0d28d3cfbffd0170146c6541d1b8fdd4b495df0d7318298d\"" Sep 12 17:39:34.965003 containerd[1597]: time="2025-09-12T17:39:34.964941803Z" level=info msg="StartContainer for \"33b16e7e4f8185bb0d28d3cfbffd0170146c6541d1b8fdd4b495df0d7318298d\"" Sep 12 17:39:34.971363 containerd[1597]: time="2025-09-12T17:39:34.971233268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-9-2d91ca838a,Uid:530ae35b2dcd74f339d833ccca65e0c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed2de88a68467c7615e2a747cce3823a58042b5b4a4015d99ca23c6726586cd\"" Sep 12 17:39:34.973631 kubelet[2347]: E0912 17:39:34.973421 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:34.974607 containerd[1597]: time="2025-09-12T17:39:34.974543677Z" level=info msg="CreateContainer within sandbox \"9a005ec0c52a020d9067df57a8637c2310c81c6c1c036d2e2086cf6278fe9806\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8069be0f0d55da7ae16b40349ff75051a6559ca3835b9d84aa2c967ec7bb46d0\"" Sep 12 17:39:34.975701 containerd[1597]: time="2025-09-12T17:39:34.975667201Z" level=info msg="StartContainer for \"8069be0f0d55da7ae16b40349ff75051a6559ca3835b9d84aa2c967ec7bb46d0\"" Sep 12 17:39:34.976316 kubelet[2347]: W0912 17:39:34.976122 2347 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.223.204.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.223.204.96:6443: connect: connection refused Sep 12 17:39:34.976316 kubelet[2347]: E0912 17:39:34.976232 2347 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.223.204.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.223.204.96:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:34.983175 containerd[1597]: time="2025-09-12T17:39:34.983027541Z" level=info msg="CreateContainer within sandbox \"aed2de88a68467c7615e2a747cce3823a58042b5b4a4015d99ca23c6726586cd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:39:35.005690 kubelet[2347]: E0912 17:39:35.005491 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.223.204.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-9-2d91ca838a?timeout=10s\": dial tcp 159.223.204.96:6443: connect: connection refused" interval="1.6s" Sep 12 17:39:35.023820 containerd[1597]: time="2025-09-12T17:39:35.022205173Z" level=info msg="CreateContainer within sandbox \"aed2de88a68467c7615e2a747cce3823a58042b5b4a4015d99ca23c6726586cd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"46fb075e2e78f153f097c069cf8d1af6436f7d008b315e8a9e25219087c94bd2\"" Sep 12 17:39:35.026393 containerd[1597]: time="2025-09-12T17:39:35.026096687Z" level=info msg="StartContainer for \"46fb075e2e78f153f097c069cf8d1af6436f7d008b315e8a9e25219087c94bd2\"" Sep 12 17:39:35.184512 kubelet[2347]: I0912 17:39:35.184459 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:35.184952 kubelet[2347]: E0912 17:39:35.184915 2347 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://159.223.204.96:6443/api/v1/nodes\": 
dial tcp 159.223.204.96:6443: connect: connection refused" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:35.188845 containerd[1597]: time="2025-09-12T17:39:35.179243483Z" level=info msg="StartContainer for \"33b16e7e4f8185bb0d28d3cfbffd0170146c6541d1b8fdd4b495df0d7318298d\" returns successfully" Sep 12 17:39:35.197944 containerd[1597]: time="2025-09-12T17:39:35.197776841Z" level=info msg="StartContainer for \"8069be0f0d55da7ae16b40349ff75051a6559ca3835b9d84aa2c967ec7bb46d0\" returns successfully" Sep 12 17:39:35.285207 containerd[1597]: time="2025-09-12T17:39:35.285003770Z" level=info msg="StartContainer for \"46fb075e2e78f153f097c069cf8d1af6436f7d008b315e8a9e25219087c94bd2\" returns successfully" Sep 12 17:39:35.658770 kubelet[2347]: E0912 17:39:35.658583 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:35.666138 kubelet[2347]: E0912 17:39:35.666094 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:35.675587 kubelet[2347]: E0912 17:39:35.675542 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:36.677737 kubelet[2347]: E0912 17:39:36.677670 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:36.787748 kubelet[2347]: I0912 17:39:36.787675 2347 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:36.819844 kubelet[2347]: E0912 17:39:36.819804 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:37.931329 kubelet[2347]: E0912 17:39:37.931273 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-9-2d91ca838a\" not found" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:38.022378 kubelet[2347]: I0912 17:39:38.022325 2347 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:38.575951 kubelet[2347]: I0912 17:39:38.575428 2347 apiserver.go:52] "Watching apiserver" Sep 12 17:39:38.598132 kubelet[2347]: I0912 17:39:38.598063 2347 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:39:38.743822 kubelet[2347]: W0912 17:39:38.743753 2347 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:38.744151 kubelet[2347]: E0912 17:39:38.744092 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:39.687703 kubelet[2347]: E0912 17:39:39.687635 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:40.202051 systemd[1]: Reloading requested from client PID 2618 
('systemctl') (unit session-9.scope)... Sep 12 17:39:40.202074 systemd[1]: Reloading... Sep 12 17:39:40.288797 zram_generator::config[2653]: No configuration found. Sep 12 17:39:40.473701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:39:40.576029 systemd[1]: Reloading finished in 373 ms. Sep 12 17:39:40.627215 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:40.638461 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:39:40.638982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:40.650604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:40.850156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:40.864572 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:39:40.957678 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:40.961441 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:39:40.962034 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:40.962303 kubelet[2717]: I0912 17:39:40.962212 2717 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:39:40.987528 kubelet[2717]: I0912 17:39:40.985148 2717 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:39:40.987528 kubelet[2717]: I0912 17:39:40.985194 2717 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:39:40.987528 kubelet[2717]: I0912 17:39:40.985729 2717 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:39:40.997238 kubelet[2717]: I0912 17:39:40.997192 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:39:41.010390 kubelet[2717]: I0912 17:39:41.010344 2717 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:39:41.024891 kubelet[2717]: E0912 17:39:41.022701 2717 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:39:41.024891 kubelet[2717]: I0912 17:39:41.022790 2717 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:39:41.035694 kubelet[2717]: I0912 17:39:41.035645 2717 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:39:41.038839 kubelet[2717]: I0912 17:39:41.038371 2717 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:39:41.039541 kubelet[2717]: I0912 17:39:41.039326 2717 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:39:41.042395 kubelet[2717]: I0912 17:39:41.041923 2717 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-9-2d91ca838a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:39:41.042794 kubelet[2717]: I0912 17:39:41.042769 2717 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:39:41.042958 kubelet[2717]: I0912 17:39:41.042942 2717 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:39:41.043134 kubelet[2717]: I0912 17:39:41.043107 2717 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:41.043407 kubelet[2717]: I0912 17:39:41.043390 2717 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:39:41.043498 kubelet[2717]: I0912 17:39:41.043488 2717 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:39:41.045636 kubelet[2717]: I0912 17:39:41.045608 2717 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:39:41.045809 kubelet[2717]: I0912 17:39:41.045793 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:39:41.053382 kubelet[2717]: I0912 17:39:41.053341 2717 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:39:41.054898 kubelet[2717]: I0912 17:39:41.054376 2717 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:39:41.055915 kubelet[2717]: I0912 17:39:41.055884 2717 server.go:1274] "Started kubelet" Sep 12 17:39:41.066377 kubelet[2717]: I0912 17:39:41.066297 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:39:41.076669 
kubelet[2717]: I0912 17:39:41.075919 2717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:39:41.077071 kubelet[2717]: I0912 17:39:41.077048 2717 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:39:41.077570 kubelet[2717]: E0912 17:39:41.077534 2717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-9-2d91ca838a\" not found" Sep 12 17:39:41.078635 kubelet[2717]: I0912 17:39:41.078602 2717 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:39:41.083704 kubelet[2717]: I0912 17:39:41.083670 2717 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:39:41.084500 kubelet[2717]: I0912 17:39:41.084433 2717 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:39:41.086396 kubelet[2717]: I0912 17:39:41.085961 2717 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:39:41.086396 kubelet[2717]: I0912 17:39:41.086120 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:39:41.088657 kubelet[2717]: I0912 17:39:41.088549 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:39:41.090880 kubelet[2717]: I0912 17:39:41.089347 2717 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:39:41.090880 kubelet[2717]: I0912 17:39:41.089602 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:39:41.110372 kubelet[2717]: I0912 17:39:41.107947 2717 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:39:41.119717 kubelet[2717]: E0912 17:39:41.119681 2717 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:39:41.123750 kubelet[2717]: I0912 17:39:41.123703 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:39:41.125337 kubelet[2717]: I0912 17:39:41.125304 2717 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:39:41.125490 kubelet[2717]: I0912 17:39:41.125480 2717 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:39:41.125549 kubelet[2717]: I0912 17:39:41.125541 2717 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:39:41.125639 kubelet[2717]: E0912 17:39:41.125623 2717 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.210119 2717 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.210145 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.210167 2717 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.210334 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.210347 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.210368 2717 policy_none.go:49] "None policy: Start" Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.211009 2717 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.211032 2717 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:39:41.211419 kubelet[2717]: I0912 17:39:41.211236 2717 state_mem.go:75] "Updated machine memory state" Sep 12 17:39:41.215140 kubelet[2717]: I0912 17:39:41.214965 2717 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:39:41.215284 kubelet[2717]: I0912 17:39:41.215212 2717 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:39:41.215284 kubelet[2717]: I0912 17:39:41.215225 2717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:39:41.219127 kubelet[2717]: I0912 17:39:41.218713 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:39:41.241948 kubelet[2717]: W0912 17:39:41.240925 2717 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:41.241948 kubelet[2717]: W0912 17:39:41.241198 2717 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:41.241948 kubelet[2717]: E0912 17:39:41.241264 2717 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.6-9-2d91ca838a\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.242558 kubelet[2717]: W0912 17:39:41.242531 2717 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:41.284913 kubelet[2717]: I0912 17:39:41.284601 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7f23094a9674a24a9056cda09016f47-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-9-2d91ca838a\" (UID: \"b7f23094a9674a24a9056cda09016f47\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.284913 kubelet[2717]: 
I0912 17:39:41.284657 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7f23094a9674a24a9056cda09016f47-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-9-2d91ca838a\" (UID: \"b7f23094a9674a24a9056cda09016f47\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.284913 kubelet[2717]: I0912 17:39:41.284684 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.284913 kubelet[2717]: I0912 17:39:41.284703 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.284913 kubelet[2717]: I0912 17:39:41.284723 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7f23094a9674a24a9056cda09016f47-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-9-2d91ca838a\" (UID: \"b7f23094a9674a24a9056cda09016f47\") " pod="kube-system/kube-apiserver-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.285239 kubelet[2717]: I0912 17:39:41.284740 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.285239 kubelet[2717]: I0912 17:39:41.284756 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.285239 kubelet[2717]: I0912 17:39:41.284772 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/542007396257481dada16eadafe59668-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-9-2d91ca838a\" (UID: \"542007396257481dada16eadafe59668\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.285239 kubelet[2717]: I0912 17:39:41.284797 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/530ae35b2dcd74f339d833ccca65e0c0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-9-2d91ca838a\" (UID: \"530ae35b2dcd74f339d833ccca65e0c0\") " pod="kube-system/kube-scheduler-ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.329032 kubelet[2717]: I0912 17:39:41.326391 2717 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.338408 kubelet[2717]: I0912 17:39:41.338332 2717 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.339271 kubelet[2717]: I0912 17:39:41.339235 2717 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-9-2d91ca838a" Sep 12 17:39:41.543086 kubelet[2717]: E0912 17:39:41.542263 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:41.543086 kubelet[2717]: E0912 17:39:41.542896 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:41.544917 kubelet[2717]: E0912 17:39:41.544323 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:42.053968 kubelet[2717]: I0912 17:39:42.053891 2717 apiserver.go:52] "Watching apiserver" Sep 12 17:39:42.079909 kubelet[2717]: I0912 17:39:42.079844 2717 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:39:42.163741 kubelet[2717]: E0912 17:39:42.163669 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:42.164544 kubelet[2717]: E0912 17:39:42.164283 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:42.164910 kubelet[2717]: E0912 17:39:42.164679 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:42.215042 kubelet[2717]: I0912 17:39:42.214922 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-9-2d91ca838a" podStartSLOduration=4.214900768 podStartE2EDuration="4.214900768s" podCreationTimestamp="2025-09-12 17:39:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:42.214389288 +0000 UTC m=+1.342284986" watchObservedRunningTime="2025-09-12 17:39:42.214900768 +0000 UTC m=+1.342796443" Sep 12 17:39:42.250728 kubelet[2717]: I0912 17:39:42.249215 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-9-2d91ca838a" podStartSLOduration=1.24918858 podStartE2EDuration="1.24918858s" podCreationTimestamp="2025-09-12 17:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:42.234084028 +0000 UTC m=+1.361979708" watchObservedRunningTime="2025-09-12 17:39:42.24918858 +0000 UTC m=+1.377084264" Sep 12 17:39:42.266311 kubelet[2717]: I0912 17:39:42.266223 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-9-2d91ca838a" podStartSLOduration=1.266199539 podStartE2EDuration="1.266199539s" podCreationTimestamp="2025-09-12 17:39:41 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:42.249422506 +0000 UTC m=+1.377318187" watchObservedRunningTime="2025-09-12 17:39:42.266199539 +0000 UTC m=+1.394095215" Sep 12 17:39:43.165619 kubelet[2717]: E0912 17:39:43.165481 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:44.814929 update_engine[1578]: I20250912 17:39:44.814476 1578 update_attempter.cc:509] Updating boot flags... Sep 12 17:39:44.862443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2767) Sep 12 17:39:44.890912 kubelet[2717]: E0912 17:39:44.887720 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:44.947903 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2769) Sep 12 17:39:45.021995 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2769) Sep 12 17:39:45.512323 kubelet[2717]: I0912 17:39:45.512259 2717 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:39:45.514609 containerd[1597]: time="2025-09-12T17:39:45.514538176Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:39:45.515398 kubelet[2717]: I0912 17:39:45.515028 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:39:46.533578 kubelet[2717]: I0912 17:39:46.532823 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56363b36-4abd-435f-9008-b6a2910a88b8-kube-proxy\") pod \"kube-proxy-lxvm7\" (UID: \"56363b36-4abd-435f-9008-b6a2910a88b8\") " pod="kube-system/kube-proxy-lxvm7" Sep 12 17:39:46.533578 kubelet[2717]: I0912 17:39:46.532912 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56363b36-4abd-435f-9008-b6a2910a88b8-xtables-lock\") pod \"kube-proxy-lxvm7\" (UID: \"56363b36-4abd-435f-9008-b6a2910a88b8\") " pod="kube-system/kube-proxy-lxvm7" Sep 12 17:39:46.533578 kubelet[2717]: I0912 17:39:46.532941 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56363b36-4abd-435f-9008-b6a2910a88b8-lib-modules\") pod \"kube-proxy-lxvm7\" (UID: \"56363b36-4abd-435f-9008-b6a2910a88b8\") " pod="kube-system/kube-proxy-lxvm7" Sep 12 17:39:46.533578 kubelet[2717]: I0912 17:39:46.532974 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5d6k\" (UniqueName: \"kubernetes.io/projected/56363b36-4abd-435f-9008-b6a2910a88b8-kube-api-access-g5d6k\") pod \"kube-proxy-lxvm7\" (UID: \"56363b36-4abd-435f-9008-b6a2910a88b8\") " pod="kube-system/kube-proxy-lxvm7" Sep 12 17:39:46.635270 kubelet[2717]: I0912 17:39:46.634937 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/3b75831e-17e7-493a-b7bc-7c7e2b2e8c2e-var-lib-calico\") pod \"tigera-operator-58fc44c59b-zzd2q\" (UID: \"3b75831e-17e7-493a-b7bc-7c7e2b2e8c2e\") " pod="tigera-operator/tigera-operator-58fc44c59b-zzd2q" Sep 12 17:39:46.635270 kubelet[2717]: I0912 17:39:46.635047 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b62w7\" (UniqueName: \"kubernetes.io/projected/3b75831e-17e7-493a-b7bc-7c7e2b2e8c2e-kube-api-access-b62w7\") pod \"tigera-operator-58fc44c59b-zzd2q\" (UID: \"3b75831e-17e7-493a-b7bc-7c7e2b2e8c2e\") " pod="tigera-operator/tigera-operator-58fc44c59b-zzd2q" Sep 12 17:39:46.766966 kubelet[2717]: E0912 17:39:46.766462 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:46.767407 containerd[1597]: time="2025-09-12T17:39:46.767373992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lxvm7,Uid:56363b36-4abd-435f-9008-b6a2910a88b8,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:46.801816 containerd[1597]: time="2025-09-12T17:39:46.800959408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:46.801816 containerd[1597]: time="2025-09-12T17:39:46.801030693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:46.801816 containerd[1597]: time="2025-09-12T17:39:46.801060338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:46.801816 containerd[1597]: time="2025-09-12T17:39:46.801237135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:46.858401 containerd[1597]: time="2025-09-12T17:39:46.858355133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lxvm7,Uid:56363b36-4abd-435f-9008-b6a2910a88b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7635b347a9a03b47d24f02172f34eb6b633b7da92e1f1c23527e33b789ed172\"" Sep 12 17:39:46.860148 kubelet[2717]: E0912 17:39:46.860112 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:46.866062 containerd[1597]: time="2025-09-12T17:39:46.865997286Z" level=info msg="CreateContainer within sandbox \"a7635b347a9a03b47d24f02172f34eb6b633b7da92e1f1c23527e33b789ed172\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:39:46.894426 containerd[1597]: time="2025-09-12T17:39:46.894350624Z" level=info msg="CreateContainer within sandbox \"a7635b347a9a03b47d24f02172f34eb6b633b7da92e1f1c23527e33b789ed172\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcbbcc55db6d09546cf92e89a19dc8e5cca7797253326cee0a155cb10355a76b\"" Sep 12 17:39:46.895608 containerd[1597]: time="2025-09-12T17:39:46.895354526Z" level=info msg="StartContainer for \"bcbbcc55db6d09546cf92e89a19dc8e5cca7797253326cee0a155cb10355a76b\"" Sep 12 17:39:46.918490 containerd[1597]: time="2025-09-12T17:39:46.918416816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-zzd2q,Uid:3b75831e-17e7-493a-b7bc-7c7e2b2e8c2e,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:39:46.959980 containerd[1597]: time="2025-09-12T17:39:46.958438765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:46.959980 containerd[1597]: time="2025-09-12T17:39:46.958523199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:46.959980 containerd[1597]: time="2025-09-12T17:39:46.958539174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:46.959980 containerd[1597]: time="2025-09-12T17:39:46.958659310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:47.014104 containerd[1597]: time="2025-09-12T17:39:47.014036723Z" level=info msg="StartContainer for \"bcbbcc55db6d09546cf92e89a19dc8e5cca7797253326cee0a155cb10355a76b\" returns successfully" Sep 12 17:39:47.066889 containerd[1597]: time="2025-09-12T17:39:47.065926192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-zzd2q,Uid:3b75831e-17e7-493a-b7bc-7c7e2b2e8c2e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4c541355aec5fabce78bfb14006c4cc8ad9f9ecffa8d9e7892004a0fc5403a4f\"" Sep 12 17:39:47.072431 containerd[1597]: time="2025-09-12T17:39:47.071270956Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:39:47.184968 kubelet[2717]: E0912 17:39:47.184823 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:47.668824 systemd[1]: run-containerd-runc-k8s.io-a7635b347a9a03b47d24f02172f34eb6b633b7da92e1f1c23527e33b789ed172-runc.dy8fqW.mount: Deactivated successfully. Sep 12 17:39:47.731761 kubelet[2717]: E0912 17:39:47.731357 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:47.761444 kubelet[2717]: I0912 17:39:47.761183 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lxvm7" podStartSLOduration=1.761155162 podStartE2EDuration="1.761155162s" podCreationTimestamp="2025-09-12 17:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:47.206075405 +0000 UTC m=+6.333971085" watchObservedRunningTime="2025-09-12 17:39:47.761155162 +0000 UTC m=+6.889050849" Sep 12 17:39:48.198041 kubelet[2717]: E0912 17:39:48.197964 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:48.434040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802683986.mount: Deactivated successfully. 
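Editor's note: the "Updating runtime config through cri with podcidr" entry at 17:39:45 and the RunPodSandbox / CreateContainer / StartContainer entries above are the kubelet driving containerd over the CRI gRPC API. Below is a minimal Go sketch of that same call sequence; the socket path, the kube-proxy image tag, and the bare-bones configs are illustrative assumptions (the kubelet fills in far richer PodSandboxConfig/ContainerConfig structs), not values taken from this host.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket path; Flatcar may differ.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Push the pod CIDR, as in the kuberuntime_manager.go entry at 17:39:45.
	if _, err = rt.UpdateRuntimeConfig(ctx, &cri.UpdateRuntimeConfigRequest{
		RuntimeConfig: &cri.RuntimeConfig{
			NetworkConfig: &cri.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	}); err != nil {
		log.Fatal(err)
	}

	// RunPodSandbox -> CreateContainer -> StartContainer: the sequence the
	// log shows for kube-proxy-lxvm7 (sandbox a7635b34..., container bcbbcc55...).
	sandboxCfg := &cri.PodSandboxConfig{
		Metadata: &cri.PodSandboxMetadata{
			Name:      "kube-proxy-lxvm7",
			Namespace: "kube-system",
			Uid:       "56363b36-4abd-435f-9008-b6a2910a88b8",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	ctr, err := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &cri.ContainerConfig{
			Metadata: &cri.ContainerMetadata{Name: "kube-proxy"},
			// Placeholder tag; the actual image is not recorded in this log.
			Image: &cri.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err = rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, ctr.ContainerId)
}
```

The "loading plugin io.containerd.*" bursts that surround each RunPodSandbox above are the containerd runc v2 shim starting up to host the new sandbox, not the main daemon reloading.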
Sep 12 17:39:49.235819 containerd[1597]: time="2025-09-12T17:39:49.235744641Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:49.236780 containerd[1597]: time="2025-09-12T17:39:49.236681820Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:39:49.237838 containerd[1597]: time="2025-09-12T17:39:49.237768149Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:49.240670 containerd[1597]: time="2025-09-12T17:39:49.240279384Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:49.241415 containerd[1597]: time="2025-09-12T17:39:49.241371267Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.170036411s" Sep 12 17:39:49.241510 containerd[1597]: time="2025-09-12T17:39:49.241419386Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:39:49.245257 containerd[1597]: time="2025-09-12T17:39:49.245207730Z" level=info msg="CreateContainer within sandbox \"4c541355aec5fabce78bfb14006c4cc8ad9f9ecffa8d9e7892004a0fc5403a4f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:39:49.266630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954759592.mount: Deactivated successfully. Sep 12 17:39:49.267437 containerd[1597]: time="2025-09-12T17:39:49.266986238Z" level=info msg="CreateContainer within sandbox \"4c541355aec5fabce78bfb14006c4cc8ad9f9ecffa8d9e7892004a0fc5403a4f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"12bae4fba682173e1d84028608a823cdac01e81ee893d8f1d7c1072cc282d3e6\"" Sep 12 17:39:49.269816 containerd[1597]: time="2025-09-12T17:39:49.268991795Z" level=info msg="StartContainer for \"12bae4fba682173e1d84028608a823cdac01e81ee893d8f1d7c1072cc282d3e6\"" Sep 12 17:39:49.314463 systemd[1]: run-containerd-runc-k8s.io-12bae4fba682173e1d84028608a823cdac01e81ee893d8f1d7c1072cc282d3e6-runc.vmH9lD.mount: Deactivated successfully. 
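Editor's note: the recurring dns.go:153 "Nameserver limits exceeded" errors throughout this log mean the host's resolv.conf lists more nameservers than the kubelet will pass through to pods; it applies only the first three (the classic glibc MAXNS limit), which on this droplet leaves 67.207.67.3 listed twice. A rough Go sketch of that truncation follows, as an approximation of the check rather than the kubelet's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the glibc MAXNS limit of 3 that the kubelet enforces.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry, duplicates included.
	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		// Keep only the first three, as the "applied nameserver line" shows.
		applied := nameservers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```

The fix on a host like this is deduplicating or trimming the resolv.conf handed to the kubelet, after which these entries stop; the errors themselves are warnings about degraded pod DNS, not fatal.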
Sep 12 17:39:49.402067 containerd[1597]: time="2025-09-12T17:39:49.401945968Z" level=info msg="StartContainer for \"12bae4fba682173e1d84028608a823cdac01e81ee893d8f1d7c1072cc282d3e6\" returns successfully" Sep 12 17:39:50.340700 kubelet[2717]: E0912 17:39:50.340462 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:50.368142 kubelet[2717]: I0912 17:39:50.368001 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-zzd2q" podStartSLOduration=2.19527882 podStartE2EDuration="4.367966306s" podCreationTimestamp="2025-09-12 17:39:46 +0000 UTC" firstStartedPulling="2025-09-12 17:39:47.070391807 +0000 UTC m=+6.198287473" lastFinishedPulling="2025-09-12 17:39:49.243079288 +0000 UTC m=+8.370974959" observedRunningTime="2025-09-12 17:39:50.220382815 +0000 UTC m=+9.348278521" watchObservedRunningTime="2025-09-12 17:39:50.367966306 +0000 UTC m=+9.495862060" Sep 12 17:39:51.209166 kubelet[2717]: E0912 17:39:51.209119 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:54.908912 kubelet[2717]: E0912 17:39:54.906288 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:55.232567 kubelet[2717]: E0912 17:39:55.232105 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:39:55.365496 sudo[1824]: pam_unix(sudo:session): session closed for user root Sep 12 17:39:55.373449 sshd[1817]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:55.380617 systemd[1]: sshd@8-159.223.204.96:22-147.75.109.163:38402.service: Deactivated successfully. Sep 12 17:39:55.388152 systemd-logind[1576]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:39:55.390467 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:39:55.413166 systemd-logind[1576]: Removed session 9. 
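Editor's note: the pod_startup_latency_tracker entries above fit a simple pattern, inferred here from the logged values rather than from kubelet documentation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why pods that never pulled (the zero 0001-01-01 timestamps) report identical figures. A short Go check against the tigera-operator entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the tigera-operator-58fc44c59b-zzd2q entry above.
	created := parse("2025-09-12 17:39:46 +0000 UTC")
	watched := parse("2025-09-12 17:39:50.367966306 +0000 UTC")
	pullStart := parse("2025-09-12 17:39:47.070391807 +0000 UTC")
	pullEnd := parse("2025-09-12 17:39:49.243079288 +0000 UTC")

	e2e := watched.Sub(created)          // end-to-end startup latency
	slo := e2e - pullEnd.Sub(pullStart)  // startup latency excluding image pull
	fmt.Println(e2e) // 4.367966306s, matches podStartE2EDuration
	fmt.Println(slo) // 2.195278825s, matches podStartSLOduration (2.19527882)
}
```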
Sep 12 17:40:00.645137 kubelet[2717]: I0912 17:40:00.645085 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xgg\" (UniqueName: \"kubernetes.io/projected/1d3dc574-8672-46d9-b08d-a1283656dbdb-kube-api-access-g4xgg\") pod \"calico-typha-64d94c9b4b-zvbsv\" (UID: \"1d3dc574-8672-46d9-b08d-a1283656dbdb\") " pod="calico-system/calico-typha-64d94c9b4b-zvbsv" Sep 12 17:40:00.648040 kubelet[2717]: I0912 17:40:00.646036 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d3dc574-8672-46d9-b08d-a1283656dbdb-tigera-ca-bundle\") pod \"calico-typha-64d94c9b4b-zvbsv\" (UID: \"1d3dc574-8672-46d9-b08d-a1283656dbdb\") " pod="calico-system/calico-typha-64d94c9b4b-zvbsv" Sep 12 17:40:00.648040 kubelet[2717]: I0912 17:40:00.646139 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1d3dc574-8672-46d9-b08d-a1283656dbdb-typha-certs\") pod \"calico-typha-64d94c9b4b-zvbsv\" (UID: \"1d3dc574-8672-46d9-b08d-a1283656dbdb\") " pod="calico-system/calico-typha-64d94c9b4b-zvbsv" Sep 12 17:40:00.864249 kubelet[2717]: E0912 17:40:00.862671 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:00.866275 containerd[1597]: time="2025-09-12T17:40:00.866084204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64d94c9b4b-zvbsv,Uid:1d3dc574-8672-46d9-b08d-a1283656dbdb,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:00.951253 kubelet[2717]: I0912 17:40:00.948312 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2xrt\" (UniqueName: \"kubernetes.io/projected/e2f18a05-6297-4f34-92d0-fc250acf58e2-kube-api-access-c2xrt\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.951253 kubelet[2717]: I0912 17:40:00.948383 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-cni-bin-dir\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.951253 kubelet[2717]: I0912 17:40:00.948414 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-policysync\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.951253 kubelet[2717]: I0912 17:40:00.948443 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-flexvol-driver-host\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.951253 kubelet[2717]: I0912 17:40:00.948469 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/e2f18a05-6297-4f34-92d0-fc250acf58e2-node-certs\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.954464 kubelet[2717]: I0912 17:40:00.948495 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-cni-net-dir\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.954464 kubelet[2717]: I0912 17:40:00.948519 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-cni-log-dir\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.954464 kubelet[2717]: I0912 17:40:00.948547 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-lib-modules\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.954464 kubelet[2717]: I0912 17:40:00.948575 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-xtables-lock\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.954464 kubelet[2717]: I0912 17:40:00.948605 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f18a05-6297-4f34-92d0-fc250acf58e2-tigera-ca-bundle\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.954718 kubelet[2717]: I0912 17:40:00.948647 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-var-lib-calico\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.954718 kubelet[2717]: I0912 17:40:00.948674 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2f18a05-6297-4f34-92d0-fc250acf58e2-var-run-calico\") pod \"calico-node-dwzbx\" (UID: \"e2f18a05-6297-4f34-92d0-fc250acf58e2\") " pod="calico-system/calico-node-dwzbx" Sep 12 17:40:00.965397 containerd[1597]: time="2025-09-12T17:40:00.965116139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:00.965397 containerd[1597]: time="2025-09-12T17:40:00.965271244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:00.965397 containerd[1597]: time="2025-09-12T17:40:00.965299006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:00.966218 containerd[1597]: time="2025-09-12T17:40:00.966079981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:01.073959 kubelet[2717]: E0912 17:40:01.070026 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.073959 kubelet[2717]: W0912 17:40:01.070171 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.073959 kubelet[2717]: E0912 17:40:01.070243 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.086010 kubelet[2717]: E0912 17:40:01.085953 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.086010 kubelet[2717]: W0912 17:40:01.085987 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.086010 kubelet[2717]: E0912 17:40:01.086021 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.109827 kubelet[2717]: E0912 17:40:01.109583 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.111881 kubelet[2717]: W0912 17:40:01.109621 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.111881 kubelet[2717]: E0912 17:40:01.109929 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:01.143900 kubelet[2717]: E0912 17:40:01.142444 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8" Sep 12 17:40:01.183800 containerd[1597]: time="2025-09-12T17:40:01.183672385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64d94c9b4b-zvbsv,Uid:1d3dc574-8672-46d9-b08d-a1283656dbdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1e7008126b63331b99a6c46f46fbf3e9a65959dc9904f1ad07ed20852c404f8\"" Sep 12 17:40:01.185572 kubelet[2717]: E0912 17:40:01.185476 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:01.189221 containerd[1597]: time="2025-09-12T17:40:01.189166646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:40:01.215185 containerd[1597]: time="2025-09-12T17:40:01.214037476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dwzbx,Uid:e2f18a05-6297-4f34-92d0-fc250acf58e2,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:01.227827 kubelet[2717]: E0912 17:40:01.227456 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.228360 kubelet[2717]: W0912 17:40:01.228326 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.228576 kubelet[2717]: E0912 17:40:01.228558 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.230002 kubelet[2717]: E0912 17:40:01.229974 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.230546 kubelet[2717]: W0912 17:40:01.230178 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.230546 kubelet[2717]: E0912 17:40:01.230265 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.231666 kubelet[2717]: E0912 17:40:01.231540 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.232500 kubelet[2717]: W0912 17:40:01.231872 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.232500 kubelet[2717]: E0912 17:40:01.231907 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:01.233885 kubelet[2717]: E0912 17:40:01.233436 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.233885 kubelet[2717]: W0912 17:40:01.233654 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.233885 kubelet[2717]: E0912 17:40:01.233816 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.235872 kubelet[2717]: E0912 17:40:01.235133 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.235872 kubelet[2717]: W0912 17:40:01.235156 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.235872 kubelet[2717]: E0912 17:40:01.235184 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.238950 kubelet[2717]: E0912 17:40:01.237133 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.238950 kubelet[2717]: W0912 17:40:01.237154 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.238950 kubelet[2717]: E0912 17:40:01.238708 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.239801 kubelet[2717]: E0912 17:40:01.239645 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.239992 kubelet[2717]: W0912 17:40:01.239970 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.240167 kubelet[2717]: E0912 17:40:01.240094 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.241065 kubelet[2717]: E0912 17:40:01.241023 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.241957 kubelet[2717]: W0912 17:40:01.241634 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.241957 kubelet[2717]: E0912 17:40:01.241672 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:01.242999 kubelet[2717]: E0912 17:40:01.242963 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.243717 kubelet[2717]: W0912 17:40:01.243687 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.244160 kubelet[2717]: E0912 17:40:01.244139 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.247917 kubelet[2717]: E0912 17:40:01.246747 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.247917 kubelet[2717]: W0912 17:40:01.246784 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.247917 kubelet[2717]: E0912 17:40:01.246821 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.247917 kubelet[2717]: E0912 17:40:01.247717 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.247917 kubelet[2717]: W0912 17:40:01.247734 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.247917 kubelet[2717]: E0912 17:40:01.247754 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.249062 kubelet[2717]: E0912 17:40:01.249038 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.252752 kubelet[2717]: W0912 17:40:01.251075 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.252752 kubelet[2717]: E0912 17:40:01.251149 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.252752 kubelet[2717]: E0912 17:40:01.251546 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.252752 kubelet[2717]: W0912 17:40:01.251563 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.252752 kubelet[2717]: E0912 17:40:01.251581 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:01.252752 kubelet[2717]: E0912 17:40:01.252386 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.252752 kubelet[2717]: W0912 17:40:01.252407 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.252752 kubelet[2717]: E0912 17:40:01.252426 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.252752 kubelet[2717]: E0912 17:40:01.252655 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.252752 kubelet[2717]: W0912 17:40:01.252665 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.253218 kubelet[2717]: E0912 17:40:01.252683 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.253501 kubelet[2717]: E0912 17:40:01.253477 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.253619 kubelet[2717]: W0912 17:40:01.253602 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.253784 kubelet[2717]: E0912 17:40:01.253670 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.258759 kubelet[2717]: E0912 17:40:01.258334 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.258759 kubelet[2717]: W0912 17:40:01.258370 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.258759 kubelet[2717]: E0912 17:40:01.258402 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.261522 kubelet[2717]: E0912 17:40:01.261037 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.261522 kubelet[2717]: W0912 17:40:01.261091 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.261522 kubelet[2717]: E0912 17:40:01.261122 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:01.265296 kubelet[2717]: E0912 17:40:01.264019 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.265296 kubelet[2717]: W0912 17:40:01.264056 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.265296 kubelet[2717]: E0912 17:40:01.264090 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.265296 kubelet[2717]: E0912 17:40:01.264572 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.265296 kubelet[2717]: W0912 17:40:01.264592 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.265296 kubelet[2717]: E0912 17:40:01.264635 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.265296 kubelet[2717]: E0912 17:40:01.265242 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.265296 kubelet[2717]: W0912 17:40:01.265258 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.265296 kubelet[2717]: E0912 17:40:01.265301 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.266256 kubelet[2717]: I0912 17:40:01.265350 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/95231497-8828-48f6-9eda-7b0dd9295eb8-socket-dir\") pod \"csi-node-driver-n244d\" (UID: \"95231497-8828-48f6-9eda-7b0dd9295eb8\") " pod="calico-system/csi-node-driver-n244d" Sep 12 17:40:01.266256 kubelet[2717]: E0912 17:40:01.266102 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.266256 kubelet[2717]: W0912 17:40:01.266126 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.266256 kubelet[2717]: E0912 17:40:01.266153 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:01.266256 kubelet[2717]: I0912 17:40:01.266188 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/95231497-8828-48f6-9eda-7b0dd9295eb8-registration-dir\") pod \"csi-node-driver-n244d\" (UID: \"95231497-8828-48f6-9eda-7b0dd9295eb8\") " pod="calico-system/csi-node-driver-n244d" Sep 12 17:40:01.268124 kubelet[2717]: E0912 17:40:01.267041 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.268124 kubelet[2717]: W0912 17:40:01.267066 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.268124 kubelet[2717]: E0912 17:40:01.267194 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.268124 kubelet[2717]: I0912 17:40:01.267225 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95231497-8828-48f6-9eda-7b0dd9295eb8-kubelet-dir\") pod \"csi-node-driver-n244d\" (UID: \"95231497-8828-48f6-9eda-7b0dd9295eb8\") " pod="calico-system/csi-node-driver-n244d" Sep 12 17:40:01.268998 kubelet[2717]: E0912 17:40:01.268958 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.268998 kubelet[2717]: W0912 17:40:01.269000 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.269313 kubelet[2717]: E0912 17:40:01.269060 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.269313 kubelet[2717]: I0912 17:40:01.269119 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/95231497-8828-48f6-9eda-7b0dd9295eb8-varrun\") pod \"csi-node-driver-n244d\" (UID: \"95231497-8828-48f6-9eda-7b0dd9295eb8\") " pod="calico-system/csi-node-driver-n244d" Sep 12 17:40:01.274145 kubelet[2717]: E0912 17:40:01.273536 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.274145 kubelet[2717]: W0912 17:40:01.273568 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.274145 kubelet[2717]: E0912 17:40:01.273658 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:01.275777 kubelet[2717]: E0912 17:40:01.275033 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.275777 kubelet[2717]: W0912 17:40:01.275065 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.275777 kubelet[2717]: E0912 17:40:01.275591 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.278284 kubelet[2717]: E0912 17:40:01.277960 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.278284 kubelet[2717]: W0912 17:40:01.278027 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.278284 kubelet[2717]: E0912 17:40:01.278215 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.278501 kubelet[2717]: E0912 17:40:01.278374 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.278501 kubelet[2717]: W0912 17:40:01.278390 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.279904 kubelet[2717]: E0912 17:40:01.278640 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:01.279904 kubelet[2717]: W0912 17:40:01.278836 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:01.279904 kubelet[2717]: E0912 17:40:01.278672 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:01.279904 kubelet[2717]: I0912 17:40:01.279030 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwfg\" (UniqueName: \"kubernetes.io/projected/95231497-8828-48f6-9eda-7b0dd9295eb8-kube-api-access-ckwfg\") pod \"csi-node-driver-n244d\" (UID: \"95231497-8828-48f6-9eda-7b0dd9295eb8\") " pod="calico-system/csi-node-driver-n244d" Sep 12 17:40:01.279904 kubelet[2717]: E0912 17:40:01.279053 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:40:01.279904 kubelet[2717]: E0912 17:40:01.279270 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.279904 kubelet[2717]: W0912 17:40:01.279283 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.279904 kubelet[2717]: E0912 17:40:01.279300 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.279904 kubelet[2717]: E0912 17:40:01.279695 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.280285 kubelet[2717]: W0912 17:40:01.279711 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.281815 kubelet[2717]: E0912 17:40:01.280888 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.281815 kubelet[2717]: E0912 17:40:01.281234 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.281815 kubelet[2717]: W0912 17:40:01.281252 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.281815 kubelet[2717]: E0912 17:40:01.281271 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.284174 kubelet[2717]: E0912 17:40:01.283133 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.284174 kubelet[2717]: W0912 17:40:01.283165 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.284174 kubelet[2717]: E0912 17:40:01.283194 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.285655 kubelet[2717]: E0912 17:40:01.284990 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.285655 kubelet[2717]: W0912 17:40:01.285021 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.285655 kubelet[2717]: E0912 17:40:01.285052 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.288888 kubelet[2717]: E0912 17:40:01.288366 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.288888 kubelet[2717]: W0912 17:40:01.288404 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.288888 kubelet[2717]: E0912 17:40:01.288437 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.320887 containerd[1597]: time="2025-09-12T17:40:01.320383937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:40:01.321099 containerd[1597]: time="2025-09-12T17:40:01.320831310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:40:01.323418 containerd[1597]: time="2025-09-12T17:40:01.321809798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:01.324383 containerd[1597]: time="2025-09-12T17:40:01.324264064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:40:01.383203 kubelet[2717]: E0912 17:40:01.383127 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.384772 kubelet[2717]: W0912 17:40:01.383366 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.384772 kubelet[2717]: E0912 17:40:01.383403 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.385211 kubelet[2717]: E0912 17:40:01.385154 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.385211 kubelet[2717]: W0912 17:40:01.385176 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.385412 kubelet[2717]: E0912 17:40:01.385342 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.388888 kubelet[2717]: E0912 17:40:01.387778 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.388888 kubelet[2717]: W0912 17:40:01.387810 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.388888 kubelet[2717]: E0912 17:40:01.387844 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.390100 kubelet[2717]: E0912 17:40:01.390062 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.391028 kubelet[2717]: W0912 17:40:01.390973 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.392531 kubelet[2717]: E0912 17:40:01.391821 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.394663 kubelet[2717]: E0912 17:40:01.394343 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.394663 kubelet[2717]: W0912 17:40:01.394376 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.394904 kubelet[2717]: E0912 17:40:01.394692 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.395889 kubelet[2717]: E0912 17:40:01.395448 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.395889 kubelet[2717]: W0912 17:40:01.395475 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.395889 kubelet[2717]: E0912 17:40:01.395694 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.397402 kubelet[2717]: E0912 17:40:01.396804 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.397402 kubelet[2717]: W0912 17:40:01.396831 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.398907 kubelet[2717]: E0912 17:40:01.398585 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.398907 kubelet[2717]: W0912 17:40:01.398607 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.398907 kubelet[2717]: E0912 17:40:01.398710 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.398907 kubelet[2717]: E0912 17:40:01.398752 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.400322 kubelet[2717]: E0912 17:40:01.399617 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.400322 kubelet[2717]: W0912 17:40:01.399744 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.401325 kubelet[2717]: E0912 17:40:01.401148 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.401325 kubelet[2717]: W0912 17:40:01.401169 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.402743 kubelet[2717]: E0912 17:40:01.402500 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.402743 kubelet[2717]: W0912 17:40:01.402681 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.403381 kubelet[2717]: E0912 17:40:01.403311 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.403433 kubelet[2717]: E0912 17:40:01.403386 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.403433 kubelet[2717]: E0912 17:40:01.403406 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.405938 kubelet[2717]: E0912 17:40:01.405663 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.405938 kubelet[2717]: W0912 17:40:01.405737 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.411223 kubelet[2717]: E0912 17:40:01.409001 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.413571 kubelet[2717]: E0912 17:40:01.413355 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.415416 kubelet[2717]: W0912 17:40:01.415360 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.415722 kubelet[2717]: E0912 17:40:01.415632 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.422888 kubelet[2717]: E0912 17:40:01.420245 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.423619 kubelet[2717]: W0912 17:40:01.422558 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.425740 kubelet[2717]: E0912 17:40:01.425643 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.427546 kubelet[2717]: E0912 17:40:01.427499 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.427546 kubelet[2717]: W0912 17:40:01.427534 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.428728 kubelet[2717]: E0912 17:40:01.428077 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.430538 kubelet[2717]: E0912 17:40:01.430177 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.430538 kubelet[2717]: W0912 17:40:01.430235 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.430753 kubelet[2717]: E0912 17:40:01.430658 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.430753 kubelet[2717]: W0912 17:40:01.430705 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.431490 kubelet[2717]: E0912 17:40:01.431094 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.431490 kubelet[2717]: E0912 17:40:01.431186 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.431490 kubelet[2717]: E0912 17:40:01.431269 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.431490 kubelet[2717]: W0912 17:40:01.431279 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.432303 kubelet[2717]: E0912 17:40:01.432044 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.432303 kubelet[2717]: W0912 17:40:01.432070 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.432303 kubelet[2717]: E0912 17:40:01.432166 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.433375 kubelet[2717]: E0912 17:40:01.433293 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.434167 kubelet[2717]: E0912 17:40:01.433484 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.434167 kubelet[2717]: W0912 17:40:01.433515 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.434627 kubelet[2717]: E0912 17:40:01.434596 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.435291 kubelet[2717]: E0912 17:40:01.435255 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.435291 kubelet[2717]: W0912 17:40:01.435280 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.435664 kubelet[2717]: E0912 17:40:01.435637 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.435664 kubelet[2717]: W0912 17:40:01.435656 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.436068 kubelet[2717]: E0912 17:40:01.436019 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.436068 kubelet[2717]: W0912 17:40:01.436043 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.437288 kubelet[2717]: E0912 17:40:01.436371 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.437288 kubelet[2717]: W0912 17:40:01.436392 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.437288 kubelet[2717]: E0912 17:40:01.436565 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.437288 kubelet[2717]: E0912 17:40:01.436614 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.437288 kubelet[2717]: E0912 17:40:01.436636 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.437288 kubelet[2717]: E0912 17:40:01.436656 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.437288 kubelet[2717]: E0912 17:40:01.436788 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.437288 kubelet[2717]: W0912 17:40:01.436826 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.437288 kubelet[2717]: E0912 17:40:01.436874 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.465127 kubelet[2717]: E0912 17:40:01.465075 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:01.467292 kubelet[2717]: W0912 17:40:01.467101 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:01.467292 kubelet[2717]: E0912 17:40:01.467154 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:01.593572 containerd[1597]: time="2025-09-12T17:40:01.593501907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dwzbx,Uid:e2f18a05-6297-4f34-92d0-fc250acf58e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"3490e0f294965c005ddacca2eafca26d33054b5a8016f6f6df0344b4cc97044c\""
Sep 12 17:40:03.114660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702475388.mount: Deactivated successfully.
Sep 12 17:40:03.129486 kubelet[2717]: E0912 17:40:03.128238 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8"
Sep 12 17:40:05.128591 kubelet[2717]: E0912 17:40:05.128053 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8"
Sep 12 17:40:05.642997 containerd[1597]: time="2025-09-12T17:40:05.642931323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:05.648939 containerd[1597]: time="2025-09-12T17:40:05.647933228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 12 17:40:05.650435 containerd[1597]: time="2025-09-12T17:40:05.649677064Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:05.685906 containerd[1597]: time="2025-09-12T17:40:05.682528194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:05.686151 containerd[1597]: time="2025-09-12T17:40:05.685814778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.496585521s"
Sep 12 17:40:05.687899 containerd[1597]: time="2025-09-12T17:40:05.686250642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 12 17:40:05.695221 containerd[1597]: time="2025-09-12T17:40:05.695151148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 17:40:05.764894 containerd[1597]: time="2025-09-12T17:40:05.763212391Z" level=info msg="CreateContainer within sandbox \"a1e7008126b63331b99a6c46f46fbf3e9a65959dc9904f1ad07ed20852c404f8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:40:05.834442 containerd[1597]: time="2025-09-12T17:40:05.834219506Z" level=info msg="CreateContainer within sandbox \"a1e7008126b63331b99a6c46f46fbf3e9a65959dc9904f1ad07ed20852c404f8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1a33a39fa51a3b50c7796e3cef9a5962df77c7ac50430825bbd468aa82cdd49e\""
Sep 12 17:40:05.836814 containerd[1597]: time="2025-09-12T17:40:05.836646842Z" level=info msg="StartContainer for \"1a33a39fa51a3b50c7796e3cef9a5962df77c7ac50430825bbd468aa82cdd49e\""
Sep 12 17:40:06.049893 containerd[1597]: time="2025-09-12T17:40:06.049624403Z" level=info msg="StartContainer for \"1a33a39fa51a3b50c7796e3cef9a5962df77c7ac50430825bbd468aa82cdd49e\" returns successfully"
Sep 12 17:40:06.319228 kubelet[2717]: E0912 17:40:06.317960 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:40:06.326557 kubelet[2717]: E0912 17:40:06.326497 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.326557 kubelet[2717]: W0912 17:40:06.326543 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.328302 kubelet[2717]: E0912 17:40:06.327882 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.328519 kubelet[2717]: E0912 17:40:06.328339 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.328519 kubelet[2717]: W0912 17:40:06.328354 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.328519 kubelet[2717]: E0912 17:40:06.328373 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.328676 kubelet[2717]: E0912 17:40:06.328600 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.328676 kubelet[2717]: W0912 17:40:06.328612 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.328676 kubelet[2717]: E0912 17:40:06.328628 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.329962 kubelet[2717]: E0912 17:40:06.328896 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.329962 kubelet[2717]: W0912 17:40:06.328910 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.329962 kubelet[2717]: E0912 17:40:06.328921 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.332896 kubelet[2717]: E0912 17:40:06.331327 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.332896 kubelet[2717]: W0912 17:40:06.331345 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.332896 kubelet[2717]: E0912 17:40:06.331367 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.332896 kubelet[2717]: E0912 17:40:06.331563 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.332896 kubelet[2717]: W0912 17:40:06.331573 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.332896 kubelet[2717]: E0912 17:40:06.331581 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.332896 kubelet[2717]: E0912 17:40:06.332148 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.332896 kubelet[2717]: W0912 17:40:06.332159 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.332896 kubelet[2717]: E0912 17:40:06.332172 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.333372 kubelet[2717]: E0912 17:40:06.332947 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.333372 kubelet[2717]: W0912 17:40:06.332959 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.333372 kubelet[2717]: E0912 17:40:06.332973 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.334893 kubelet[2717]: E0912 17:40:06.333782 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.334893 kubelet[2717]: W0912 17:40:06.333992 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.334893 kubelet[2717]: E0912 17:40:06.334017 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.334893 kubelet[2717]: E0912 17:40:06.334692 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.334893 kubelet[2717]: W0912 17:40:06.334703 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.334893 kubelet[2717]: E0912 17:40:06.334816 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.336432 kubelet[2717]: E0912 17:40:06.335422 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.336432 kubelet[2717]: W0912 17:40:06.335446 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.336432 kubelet[2717]: E0912 17:40:06.335463 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.338727 kubelet[2717]: E0912 17:40:06.337591 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.338727 kubelet[2717]: W0912 17:40:06.337623 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.338727 kubelet[2717]: E0912 17:40:06.337645 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.338727 kubelet[2717]: E0912 17:40:06.338056 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.338727 kubelet[2717]: W0912 17:40:06.338072 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.338727 kubelet[2717]: E0912 17:40:06.338094 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.338727 kubelet[2717]: E0912 17:40:06.338321 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.338727 kubelet[2717]: W0912 17:40:06.338334 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.338727 kubelet[2717]: E0912 17:40:06.338351 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.338727 kubelet[2717]: E0912 17:40:06.338535 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.339302 kubelet[2717]: W0912 17:40:06.338544 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.339302 kubelet[2717]: E0912 17:40:06.338553 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.357961 kubelet[2717]: E0912 17:40:06.357604 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.360142 kubelet[2717]: W0912 17:40:06.357647 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.360142 kubelet[2717]: E0912 17:40:06.359831 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.360642 kubelet[2717]: E0912 17:40:06.360512 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.363186 kubelet[2717]: W0912 17:40:06.362931 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.363186 kubelet[2717]: E0912 17:40:06.363018 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.363985 kubelet[2717]: E0912 17:40:06.363828 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.363985 kubelet[2717]: W0912 17:40:06.363886 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.364326 kubelet[2717]: E0912 17:40:06.364285 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.366139 kubelet[2717]: E0912 17:40:06.365138 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.366139 kubelet[2717]: W0912 17:40:06.365191 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.368161 kubelet[2717]: E0912 17:40:06.366516 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.368491 kubelet[2717]: E0912 17:40:06.368463 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.368607 kubelet[2717]: W0912 17:40:06.368586 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.370046 kubelet[2717]: E0912 17:40:06.369934 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.372871 kubelet[2717]: E0912 17:40:06.371533 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.372871 kubelet[2717]: W0912 17:40:06.371566 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.372871 kubelet[2717]: E0912 17:40:06.371632 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.378025 kubelet[2717]: E0912 17:40:06.377974 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.378461 kubelet[2717]: W0912 17:40:06.378226 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.378461 kubelet[2717]: E0912 17:40:06.378330 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.380741 kubelet[2717]: E0912 17:40:06.380697 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.382451 kubelet[2717]: W0912 17:40:06.382006 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.385845 kubelet[2717]: I0912 17:40:06.385206 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64d94c9b4b-zvbsv" podStartSLOduration=1.881032762 podStartE2EDuration="6.385174853s" podCreationTimestamp="2025-09-12 17:40:00 +0000 UTC" firstStartedPulling="2025-09-12 17:40:01.18799146 +0000 UTC m=+20.315887127" lastFinishedPulling="2025-09-12 17:40:05.692133547 +0000 UTC m=+24.820029218" observedRunningTime="2025-09-12 17:40:06.373971364 +0000 UTC m=+25.501867061" watchObservedRunningTime="2025-09-12 17:40:06.385174853 +0000 UTC m=+25.513070550"
Sep 12 17:40:06.385845 kubelet[2717]: E0912 17:40:06.385746 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.388751 kubelet[2717]: E0912 17:40:06.387076 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.389479 kubelet[2717]: W0912 17:40:06.389191 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.389479 kubelet[2717]: E0912 17:40:06.389290 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.389989 kubelet[2717]: E0912 17:40:06.389962 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.392066 kubelet[2717]: W0912 17:40:06.391913 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.392528 kubelet[2717]: E0912 17:40:06.392491 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.393176 kubelet[2717]: E0912 17:40:06.393120 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.393176 kubelet[2717]: W0912 17:40:06.393144 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.395995 kubelet[2717]: E0912 17:40:06.394640 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.395995 kubelet[2717]: E0912 17:40:06.394985 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.395995 kubelet[2717]: W0912 17:40:06.395010 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.395995 kubelet[2717]: E0912 17:40:06.395074 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.399020 kubelet[2717]: E0912 17:40:06.398494 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.400272 kubelet[2717]: W0912 17:40:06.399981 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.400272 kubelet[2717]: E0912 17:40:06.400070 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.406249 kubelet[2717]: E0912 17:40:06.405482 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.407130 kubelet[2717]: W0912 17:40:06.406496 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.408690 kubelet[2717]: E0912 17:40:06.407433 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.409386 kubelet[2717]: E0912 17:40:06.409193 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.409386 kubelet[2717]: W0912 17:40:06.409225 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.410462 kubelet[2717]: E0912 17:40:06.410428 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.412408 kubelet[2717]: W0912 17:40:06.410959 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.412408 kubelet[2717]: E0912 17:40:06.411010 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.412408 kubelet[2717]: E0912 17:40:06.411064 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.414294 kubelet[2717]: E0912 17:40:06.414260 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.414465 kubelet[2717]: W0912 17:40:06.414440 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.415316 kubelet[2717]: E0912 17:40:06.414613 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.418081 kubelet[2717]: E0912 17:40:06.418036 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:06.418416 kubelet[2717]: W0912 17:40:06.418271 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:06.418416 kubelet[2717]: E0912 17:40:06.418317 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:06.734144 systemd[1]: run-containerd-runc-k8s.io-1a33a39fa51a3b50c7796e3cef9a5962df77c7ac50430825bbd468aa82cdd49e-runc.c6aKCE.mount: Deactivated successfully.
Sep 12 17:40:07.126807 kubelet[2717]: E0912 17:40:07.126716 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8"
Sep 12 17:40:07.320377 kubelet[2717]: E0912 17:40:07.319536 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:40:07.348296 kubelet[2717]: E0912 17:40:07.348157 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.348296 kubelet[2717]: W0912 17:40:07.348223 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.348296 kubelet[2717]: E0912 17:40:07.348274 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.350408 kubelet[2717]: E0912 17:40:07.349916 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.350408 kubelet[2717]: W0912 17:40:07.349945 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.350408 kubelet[2717]: E0912 17:40:07.349972 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.354605 kubelet[2717]: E0912 17:40:07.352946 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.354605 kubelet[2717]: W0912 17:40:07.353955 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.354605 kubelet[2717]: E0912 17:40:07.354023 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.356092 kubelet[2717]: E0912 17:40:07.355719 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.356092 kubelet[2717]: W0912 17:40:07.355747 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.356092 kubelet[2717]: E0912 17:40:07.355773 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.359252 kubelet[2717]: E0912 17:40:07.359209 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.359252 kubelet[2717]: W0912 17:40:07.359239 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.359252 kubelet[2717]: E0912 17:40:07.359268 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.362280 kubelet[2717]: E0912 17:40:07.362213 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.363154 kubelet[2717]: W0912 17:40:07.363017 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.363154 kubelet[2717]: E0912 17:40:07.363073 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.366190 kubelet[2717]: E0912 17:40:07.365943 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.366190 kubelet[2717]: W0912 17:40:07.365979 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.366190 kubelet[2717]: E0912 17:40:07.366012 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.372673 kubelet[2717]: E0912 17:40:07.372622 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.372673 kubelet[2717]: W0912 17:40:07.372658 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.372673 kubelet[2717]: E0912 17:40:07.372689 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.377601 kubelet[2717]: E0912 17:40:07.377287 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.377601 kubelet[2717]: W0912 17:40:07.377334 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.377601 kubelet[2717]: E0912 17:40:07.377370 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.381965 kubelet[2717]: E0912 17:40:07.381785 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.381965 kubelet[2717]: W0912 17:40:07.381829 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.382651 kubelet[2717]: E0912 17:40:07.382294 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.384971 kubelet[2717]: E0912 17:40:07.384795 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.384971 kubelet[2717]: W0912 17:40:07.384827 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.385516 kubelet[2717]: E0912 17:40:07.385161 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.386715 kubelet[2717]: E0912 17:40:07.386565 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.387011 kubelet[2717]: W0912 17:40:07.386798 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.387694 kubelet[2717]: E0912 17:40:07.386841 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.388812 kubelet[2717]: E0912 17:40:07.388749 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.388812 kubelet[2717]: W0912 17:40:07.388786 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.388812 kubelet[2717]: E0912 17:40:07.388812 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.390338 kubelet[2717]: E0912 17:40:07.389991 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.390338 kubelet[2717]: W0912 17:40:07.390013 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.390338 kubelet[2717]: E0912 17:40:07.390036 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.390633 kubelet[2717]: E0912 17:40:07.390617 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.390633 kubelet[2717]: W0912 17:40:07.390631 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.390731 kubelet[2717]: E0912 17:40:07.390643 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.391575 kubelet[2717]: E0912 17:40:07.391553 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.391575 kubelet[2717]: W0912 17:40:07.391571 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.392921 kubelet[2717]: E0912 17:40:07.391586 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.393117 kubelet[2717]: E0912 17:40:07.392962 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.393117 kubelet[2717]: W0912 17:40:07.392988 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.393117 kubelet[2717]: E0912 17:40:07.393008 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.393323 kubelet[2717]: E0912 17:40:07.393298 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.393382 kubelet[2717]: W0912 17:40:07.393370 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.393445 kubelet[2717]: E0912 17:40:07.393432 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.393960 kubelet[2717]: E0912 17:40:07.393940 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.394333 kubelet[2717]: W0912 17:40:07.394171 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.394333 kubelet[2717]: E0912 17:40:07.394206 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.394640 kubelet[2717]: E0912 17:40:07.394619 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.394684 kubelet[2717]: W0912 17:40:07.394643 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.394684 kubelet[2717]: E0912 17:40:07.394672 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.396280 kubelet[2717]: E0912 17:40:07.395937 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.396280 kubelet[2717]: W0912 17:40:07.395974 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.398139 kubelet[2717]: E0912 17:40:07.397313 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.398540 kubelet[2717]: E0912 17:40:07.398491 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.398540 kubelet[2717]: W0912 17:40:07.398522 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.399472 kubelet[2717]: E0912 17:40:07.399174 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.400027 kubelet[2717]: E0912 17:40:07.399900 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.400027 kubelet[2717]: W0912 17:40:07.399924 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.400834 kubelet[2717]: E0912 17:40:07.400180 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.400834 kubelet[2717]: W0912 17:40:07.400198 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.400834 kubelet[2717]: E0912 17:40:07.400410 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.400834 kubelet[2717]: E0912 17:40:07.400537 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.402158 kubelet[2717]: E0912 17:40:07.402006 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.402158 kubelet[2717]: W0912 17:40:07.402034 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.402158 kubelet[2717]: E0912 17:40:07.402079 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.403181 kubelet[2717]: E0912 17:40:07.403153 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.403181 kubelet[2717]: W0912 17:40:07.403172 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.404289 kubelet[2717]: E0912 17:40:07.403349 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.404289 kubelet[2717]: W0912 17:40:07.403357 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.404289 kubelet[2717]: E0912 17:40:07.403440 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.404289 kubelet[2717]: E0912 17:40:07.403704 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.406257 kubelet[2717]: E0912 17:40:07.405363 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.406257 kubelet[2717]: W0912 17:40:07.405391 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.406257 kubelet[2717]: E0912 17:40:07.405437 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.410241 kubelet[2717]: E0912 17:40:07.410028 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.410241 kubelet[2717]: W0912 17:40:07.410068 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.410241 kubelet[2717]: E0912 17:40:07.410405 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.415286 kubelet[2717]: E0912 17:40:07.415235 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.415923 kubelet[2717]: W0912 17:40:07.415662 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.415923 kubelet[2717]: E0912 17:40:07.415819 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.417379 kubelet[2717]: E0912 17:40:07.417359 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.417464 kubelet[2717]: W0912 17:40:07.417450 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.417527 kubelet[2717]: E0912 17:40:07.417516 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:40:07.419290 kubelet[2717]: E0912 17:40:07.418553 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:40:07.419290 kubelet[2717]: W0912 17:40:07.418596 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:40:07.419290 kubelet[2717]: E0912 17:40:07.418637 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:40:07.420319 kubelet[2717]: E0912 17:40:07.420282 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:40:07.420532 kubelet[2717]: W0912 17:40:07.420449 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:40:07.420532 kubelet[2717]: E0912 17:40:07.420481 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:40:07.606566 containerd[1597]: time="2025-09-12T17:40:07.606496677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:07.608706 containerd[1597]: time="2025-09-12T17:40:07.608632633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:40:07.610078 containerd[1597]: time="2025-09-12T17:40:07.610016688Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:07.614702 containerd[1597]: time="2025-09-12T17:40:07.614504521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:07.615777 containerd[1597]: time="2025-09-12T17:40:07.615723306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.920511916s" Sep 12 17:40:07.616143 containerd[1597]: time="2025-09-12T17:40:07.615787766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:40:07.622753 containerd[1597]: time="2025-09-12T17:40:07.622569963Z" level=info msg="CreateContainer within sandbox \"3490e0f294965c005ddacca2eafca26d33054b5a8016f6f6df0344b4cc97044c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:40:07.672198 containerd[1597]: time="2025-09-12T17:40:07.671980537Z" level=info msg="CreateContainer within sandbox \"3490e0f294965c005ddacca2eafca26d33054b5a8016f6f6df0344b4cc97044c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9f9298c1bc5bdb3906b4ccb3583af09ec71b8c1d140085becde74f2d7af780bf\"" Sep 12 17:40:07.675707 containerd[1597]: time="2025-09-12T17:40:07.675644024Z" level=info msg="StartContainer for \"9f9298c1bc5bdb3906b4ccb3583af09ec71b8c1d140085becde74f2d7af780bf\"" Sep 12 17:40:07.794902 containerd[1597]: time="2025-09-12T17:40:07.792837234Z" level=info msg="StartContainer for \"9f9298c1bc5bdb3906b4ccb3583af09ec71b8c1d140085becde74f2d7af780bf\" returns successfully" Sep 12 17:40:07.861310 systemd[1]: 
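The three kubelet messages above come from a single probe loop: the kubelet rescans its FlexVolume plugin directory, finds the nodeagent~uds entry, fails to execute the driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ (nothing has installed it yet; the flexvol-driver container created just below, from the pod2daemon-flexvol image, is what installs it), and then fails to decode the empty output as the expected JSON status. A minimal Go sketch of where the two error strings originate in the standard library; this is illustrative, not the kubelet's actual driver-call code, and the bare name "uds" stands in for the full plugin path:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Probing for a driver binary that does not exist yields the
    	// "$PATH" error seen in driver-call.go:149.
    	if _, err := exec.LookPath("uds"); err != nil {
    		fmt.Println("driver call failed:", err) // executable file not found in $PATH
    	}
    	// With no driver to run, the captured output is empty, so decoding
    	// the expected JSON status fails as in driver-call.go:262.
    	var status map[string]interface{}
    	if err := json.Unmarshal([]byte(""), &status); err != nil {
    		fmt.Println("unmarshal failed:", err) // unexpected end of JSON input
    	}
    }

The repetition is probe noise; it stops once the driver is installed and is separate from the networking failures later in this log.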
Sep 12 17:40:07.861310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f9298c1bc5bdb3906b4ccb3583af09ec71b8c1d140085becde74f2d7af780bf-rootfs.mount: Deactivated successfully. Sep 12 17:40:07.925011 containerd[1597]: time="2025-09-12T17:40:07.873265405Z" level=info msg="shim disconnected" id=9f9298c1bc5bdb3906b4ccb3583af09ec71b8c1d140085becde74f2d7af780bf namespace=k8s.io Sep 12 17:40:07.925499 containerd[1597]: time="2025-09-12T17:40:07.925233790Z" level=warning msg="cleaning up after shim disconnected" id=9f9298c1bc5bdb3906b4ccb3583af09ec71b8c1d140085becde74f2d7af780bf namespace=k8s.io Sep 12 17:40:07.925499 containerd[1597]: time="2025-09-12T17:40:07.925274387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:08.325731 kubelet[2717]: E0912 17:40:08.323689 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:08.330057 containerd[1597]: time="2025-09-12T17:40:08.329482073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:40:09.128058 kubelet[2717]: E0912 17:40:09.126660 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8" Sep 12 17:40:11.128543 kubelet[2717]: E0912 17:40:11.128483 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8" Sep 12 17:40:13.130958 kubelet[2717]: E0912 17:40:13.128478 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8"
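The recurring "cni plugin not initialized" errors mean the container runtime has no CNI network configuration yet: the calico/cni image being pulled above ships the install-cni container that eventually writes one. A rough Go sketch of what that readiness condition amounts to, assuming the conventional /etc/cni/net.d config directory; this approximates, rather than reproduces, containerd's check:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// The runtime looks for a network config file; until Calico's
    	// install-cni container writes one, the node network stays not ready.
    	var matches []string
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		m, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
    		matches = append(matches, m...)
    	}
    	if len(matches) == 0 {
    		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady message:cni plugin not initialized")
    		os.Exit(1)
    	}
    	fmt.Println("CNI config present:", matches)
    }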
\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.83841745s" Sep 12 17:40:13.169209 containerd[1597]: time="2025-09-12T17:40:13.168058471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:40:13.174322 containerd[1597]: time="2025-09-12T17:40:13.174266685Z" level=info msg="CreateContainer within sandbox \"3490e0f294965c005ddacca2eafca26d33054b5a8016f6f6df0344b4cc97044c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:40:13.197362 containerd[1597]: time="2025-09-12T17:40:13.197264068Z" level=info msg="CreateContainer within sandbox \"3490e0f294965c005ddacca2eafca26d33054b5a8016f6f6df0344b4cc97044c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d7e0a828b8e8c7d76a3d82d7915d90402bb7540a2cf8a06255134ec92334123e\"" Sep 12 17:40:13.198804 containerd[1597]: time="2025-09-12T17:40:13.198148787Z" level=info msg="StartContainer for \"d7e0a828b8e8c7d76a3d82d7915d90402bb7540a2cf8a06255134ec92334123e\"" Sep 12 17:40:13.303193 containerd[1597]: time="2025-09-12T17:40:13.303117913Z" level=info msg="StartContainer for \"d7e0a828b8e8c7d76a3d82d7915d90402bb7540a2cf8a06255134ec92334123e\" returns successfully" Sep 12 17:40:14.160528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7e0a828b8e8c7d76a3d82d7915d90402bb7540a2cf8a06255134ec92334123e-rootfs.mount: Deactivated successfully. Sep 12 17:40:14.168619 containerd[1597]: time="2025-09-12T17:40:14.167736977Z" level=info msg="shim disconnected" id=d7e0a828b8e8c7d76a3d82d7915d90402bb7540a2cf8a06255134ec92334123e namespace=k8s.io Sep 12 17:40:14.168619 containerd[1597]: time="2025-09-12T17:40:14.167812565Z" level=warning msg="cleaning up after shim disconnected" id=d7e0a828b8e8c7d76a3d82d7915d90402bb7540a2cf8a06255134ec92334123e namespace=k8s.io Sep 12 17:40:14.168619 containerd[1597]: time="2025-09-12T17:40:14.167826356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:14.199674 kubelet[2717]: I0912 17:40:14.199632 2717 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:40:14.365867 containerd[1597]: time="2025-09-12T17:40:14.365748053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:40:14.398426 kubelet[2717]: I0912 17:40:14.398362 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/245df9e2-1f49-4d73-9c68-1b8c92656c80-whisker-backend-key-pair\") pod \"whisker-674688f4df-xgdkq\" (UID: \"245df9e2-1f49-4d73-9c68-1b8c92656c80\") " pod="calico-system/whisker-674688f4df-xgdkq" Sep 12 17:40:14.398426 kubelet[2717]: I0912 17:40:14.398429 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b1dbfd9-0d2b-4616-9684-f70423a56727-tigera-ca-bundle\") pod \"calico-kube-controllers-7795d66b4-bkptg\" (UID: \"4b1dbfd9-0d2b-4616-9684-f70423a56727\") " pod="calico-system/calico-kube-controllers-7795d66b4-bkptg" Sep 12 17:40:14.398635 kubelet[2717]: I0912 17:40:14.398461 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d82ad19b-3db7-4b09-ad0d-31652f615ba5-config-volume\") pod 
\"coredns-7c65d6cfc9-mpsh9\" (UID: \"d82ad19b-3db7-4b09-ad0d-31652f615ba5\") " pod="kube-system/coredns-7c65d6cfc9-mpsh9" Sep 12 17:40:14.398635 kubelet[2717]: I0912 17:40:14.398489 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srtkt\" (UniqueName: \"kubernetes.io/projected/d82ad19b-3db7-4b09-ad0d-31652f615ba5-kube-api-access-srtkt\") pod \"coredns-7c65d6cfc9-mpsh9\" (UID: \"d82ad19b-3db7-4b09-ad0d-31652f615ba5\") " pod="kube-system/coredns-7c65d6cfc9-mpsh9" Sep 12 17:40:14.398635 kubelet[2717]: I0912 17:40:14.398517 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts68q\" (UniqueName: \"kubernetes.io/projected/6bb2647e-616b-4a7b-a0a3-710344efe361-kube-api-access-ts68q\") pod \"calico-apiserver-5bd46c69b9-pfsld\" (UID: \"6bb2647e-616b-4a7b-a0a3-710344efe361\") " pod="calico-apiserver/calico-apiserver-5bd46c69b9-pfsld" Sep 12 17:40:14.398635 kubelet[2717]: I0912 17:40:14.398546 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn7r9\" (UniqueName: \"kubernetes.io/projected/d624f012-d50d-4bd5-9261-9b5b725646ef-kube-api-access-hn7r9\") pod \"goldmane-7988f88666-8q4cb\" (UID: \"d624f012-d50d-4bd5-9261-9b5b725646ef\") " pod="calico-system/goldmane-7988f88666-8q4cb" Sep 12 17:40:14.398635 kubelet[2717]: I0912 17:40:14.398571 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2tdr\" (UniqueName: \"kubernetes.io/projected/beb36fc2-c828-43e2-90d6-9cffbe7e8f94-kube-api-access-b2tdr\") pod \"coredns-7c65d6cfc9-vsvr8\" (UID: \"beb36fc2-c828-43e2-90d6-9cffbe7e8f94\") " pod="kube-system/coredns-7c65d6cfc9-vsvr8" Sep 12 17:40:14.398777 kubelet[2717]: I0912 17:40:14.398596 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d624f012-d50d-4bd5-9261-9b5b725646ef-config\") pod \"goldmane-7988f88666-8q4cb\" (UID: \"d624f012-d50d-4bd5-9261-9b5b725646ef\") " pod="calico-system/goldmane-7988f88666-8q4cb" Sep 12 17:40:14.398777 kubelet[2717]: I0912 17:40:14.398624 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdnwn\" (UniqueName: \"kubernetes.io/projected/4b1dbfd9-0d2b-4616-9684-f70423a56727-kube-api-access-sdnwn\") pod \"calico-kube-controllers-7795d66b4-bkptg\" (UID: \"4b1dbfd9-0d2b-4616-9684-f70423a56727\") " pod="calico-system/calico-kube-controllers-7795d66b4-bkptg" Sep 12 17:40:14.398777 kubelet[2717]: I0912 17:40:14.398658 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6bb2647e-616b-4a7b-a0a3-710344efe361-calico-apiserver-certs\") pod \"calico-apiserver-5bd46c69b9-pfsld\" (UID: \"6bb2647e-616b-4a7b-a0a3-710344efe361\") " pod="calico-apiserver/calico-apiserver-5bd46c69b9-pfsld" Sep 12 17:40:14.398777 kubelet[2717]: I0912 17:40:14.398686 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzs7j\" (UniqueName: \"kubernetes.io/projected/8a573f69-5286-4c75-b8cd-d7019d8e8a47-kube-api-access-lzs7j\") pod \"calico-apiserver-5bd46c69b9-zwwnf\" (UID: \"8a573f69-5286-4c75-b8cd-d7019d8e8a47\") " pod="calico-apiserver/calico-apiserver-5bd46c69b9-zwwnf" Sep 12 17:40:14.398777 kubelet[2717]: I0912 
17:40:14.398717 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a573f69-5286-4c75-b8cd-d7019d8e8a47-calico-apiserver-certs\") pod \"calico-apiserver-5bd46c69b9-zwwnf\" (UID: \"8a573f69-5286-4c75-b8cd-d7019d8e8a47\") " pod="calico-apiserver/calico-apiserver-5bd46c69b9-zwwnf" Sep 12 17:40:14.399088 kubelet[2717]: I0912 17:40:14.398741 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7hk\" (UniqueName: \"kubernetes.io/projected/245df9e2-1f49-4d73-9c68-1b8c92656c80-kube-api-access-mr7hk\") pod \"whisker-674688f4df-xgdkq\" (UID: \"245df9e2-1f49-4d73-9c68-1b8c92656c80\") " pod="calico-system/whisker-674688f4df-xgdkq" Sep 12 17:40:14.400238 kubelet[2717]: I0912 17:40:14.400058 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/245df9e2-1f49-4d73-9c68-1b8c92656c80-whisker-ca-bundle\") pod \"whisker-674688f4df-xgdkq\" (UID: \"245df9e2-1f49-4d73-9c68-1b8c92656c80\") " pod="calico-system/whisker-674688f4df-xgdkq" Sep 12 17:40:14.400238 kubelet[2717]: I0912 17:40:14.400112 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beb36fc2-c828-43e2-90d6-9cffbe7e8f94-config-volume\") pod \"coredns-7c65d6cfc9-vsvr8\" (UID: \"beb36fc2-c828-43e2-90d6-9cffbe7e8f94\") " pod="kube-system/coredns-7c65d6cfc9-vsvr8" Sep 12 17:40:14.400238 kubelet[2717]: I0912 17:40:14.400146 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d624f012-d50d-4bd5-9261-9b5b725646ef-goldmane-ca-bundle\") pod \"goldmane-7988f88666-8q4cb\" (UID: \"d624f012-d50d-4bd5-9261-9b5b725646ef\") " pod="calico-system/goldmane-7988f88666-8q4cb" Sep 12 17:40:14.400238 kubelet[2717]: I0912 17:40:14.400178 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d624f012-d50d-4bd5-9261-9b5b725646ef-goldmane-key-pair\") pod \"goldmane-7988f88666-8q4cb\" (UID: \"d624f012-d50d-4bd5-9261-9b5b725646ef\") " pod="calico-system/goldmane-7988f88666-8q4cb" Sep 12 17:40:14.597244 containerd[1597]: time="2025-09-12T17:40:14.597176389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8q4cb,Uid:d624f012-d50d-4bd5-9261-9b5b725646ef,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:14.786306 containerd[1597]: time="2025-09-12T17:40:14.786211727Z" level=error msg="Failed to destroy network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:14.793459 containerd[1597]: time="2025-09-12T17:40:14.793309379Z" level=error msg="encountered an error cleaning up failed sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:14.801800 containerd[1597]: 
time="2025-09-12T17:40:14.801712786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8q4cb,Uid:d624f012-d50d-4bd5-9261-9b5b725646ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:14.806941 kubelet[2717]: E0912 17:40:14.806884 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:14.807792 kubelet[2717]: E0912 17:40:14.807292 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-8q4cb" Sep 12 17:40:14.807792 kubelet[2717]: E0912 17:40:14.807345 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-8q4cb" Sep 12 17:40:14.807792 kubelet[2717]: E0912 17:40:14.807429 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-8q4cb_calico-system(d624f012-d50d-4bd5-9261-9b5b725646ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-8q4cb_calico-system(d624f012-d50d-4bd5-9261-9b5b725646ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-8q4cb" podUID="d624f012-d50d-4bd5-9261-9b5b725646ef" Sep 12 17:40:14.843471 kubelet[2717]: E0912 17:40:14.841217 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:14.844336 containerd[1597]: time="2025-09-12T17:40:14.844268821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mpsh9,Uid:d82ad19b-3db7-4b09-ad0d-31652f615ba5,Namespace:kube-system,Attempt:0,}" Sep 12 17:40:14.861773 containerd[1597]: time="2025-09-12T17:40:14.861372671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7795d66b4-bkptg,Uid:4b1dbfd9-0d2b-4616-9684-f70423a56727,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:14.865467 kubelet[2717]: E0912 17:40:14.865264 2717 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:14.872839 containerd[1597]: time="2025-09-12T17:40:14.872244310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vsvr8,Uid:beb36fc2-c828-43e2-90d6-9cffbe7e8f94,Namespace:kube-system,Attempt:0,}" Sep 12 17:40:14.875784 containerd[1597]: time="2025-09-12T17:40:14.875026760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd46c69b9-pfsld,Uid:6bb2647e-616b-4a7b-a0a3-710344efe361,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:40:14.887461 containerd[1597]: time="2025-09-12T17:40:14.887403532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd46c69b9-zwwnf,Uid:8a573f69-5286-4c75-b8cd-d7019d8e8a47,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:40:14.890601 containerd[1597]: time="2025-09-12T17:40:14.890484410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674688f4df-xgdkq,Uid:245df9e2-1f49-4d73-9c68-1b8c92656c80,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:15.084842 containerd[1597]: time="2025-09-12T17:40:15.084759190Z" level=error msg="Failed to destroy network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.088514 containerd[1597]: time="2025-09-12T17:40:15.088335644Z" level=error msg="encountered an error cleaning up failed sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.088514 containerd[1597]: time="2025-09-12T17:40:15.088454928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mpsh9,Uid:d82ad19b-3db7-4b09-ad0d-31652f615ba5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.088906 kubelet[2717]: E0912 17:40:15.088823 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.088998 kubelet[2717]: E0912 17:40:15.088965 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mpsh9" Sep 12 17:40:15.089037 kubelet[2717]: E0912 17:40:15.089007 2717 
Sep 12 17:40:15.089037 kubelet[2717]: E0912 17:40:15.089007 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mpsh9" Sep 12 17:40:15.089219 kubelet[2717]: E0912 17:40:15.089084 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mpsh9_kube-system(d82ad19b-3db7-4b09-ad0d-31652f615ba5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mpsh9_kube-system(d82ad19b-3db7-4b09-ad0d-31652f615ba5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mpsh9" podUID="d82ad19b-3db7-4b09-ad0d-31652f615ba5" Sep 12 17:40:15.118149 containerd[1597]: time="2025-09-12T17:40:15.117941219Z" level=error msg="Failed to destroy network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.119131 containerd[1597]: time="2025-09-12T17:40:15.119084475Z" level=error msg="encountered an error cleaning up failed sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.119242 containerd[1597]: time="2025-09-12T17:40:15.119221862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7795d66b4-bkptg,Uid:4b1dbfd9-0d2b-4616-9684-f70423a56727,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.122774 kubelet[2717]: E0912 17:40:15.119551 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.122774 kubelet[2717]: E0912 17:40:15.119641 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-system/calico-kube-controllers-7795d66b4-bkptg" Sep 12 17:40:15.122774 kubelet[2717]: E0912 17:40:15.119663 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7795d66b4-bkptg" Sep 12 17:40:15.123011 kubelet[2717]: E0912 17:40:15.119728 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7795d66b4-bkptg_calico-system(4b1dbfd9-0d2b-4616-9684-f70423a56727)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7795d66b4-bkptg_calico-system(4b1dbfd9-0d2b-4616-9684-f70423a56727)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7795d66b4-bkptg" podUID="4b1dbfd9-0d2b-4616-9684-f70423a56727" Sep 12 17:40:15.135475 containerd[1597]: time="2025-09-12T17:40:15.135223510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n244d,Uid:95231497-8828-48f6-9eda-7b0dd9295eb8,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:15.263617 containerd[1597]: time="2025-09-12T17:40:15.263507967Z" level=error msg="Failed to destroy network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.267119 containerd[1597]: time="2025-09-12T17:40:15.267051721Z" level=error msg="Failed to destroy network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.267752 containerd[1597]: time="2025-09-12T17:40:15.267688672Z" level=error msg="encountered an error cleaning up failed sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.268998 containerd[1597]: time="2025-09-12T17:40:15.267931499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd46c69b9-pfsld,Uid:6bb2647e-616b-4a7b-a0a3-710344efe361,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.269653 kubelet[2717]: E0912 17:40:15.269584 2717 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.270208 kubelet[2717]: E0912 17:40:15.269690 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd46c69b9-pfsld" Sep 12 17:40:15.270208 kubelet[2717]: E0912 17:40:15.269720 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd46c69b9-pfsld" Sep 12 17:40:15.270208 kubelet[2717]: E0912 17:40:15.269775 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd46c69b9-pfsld_calico-apiserver(6bb2647e-616b-4a7b-a0a3-710344efe361)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd46c69b9-pfsld_calico-apiserver(6bb2647e-616b-4a7b-a0a3-710344efe361)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd46c69b9-pfsld" podUID="6bb2647e-616b-4a7b-a0a3-710344efe361" Sep 12 17:40:15.273055 containerd[1597]: time="2025-09-12T17:40:15.272702571Z" level=error msg="encountered an error cleaning up failed sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.273055 containerd[1597]: time="2025-09-12T17:40:15.272986892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd46c69b9-zwwnf,Uid:8a573f69-5286-4c75-b8cd-d7019d8e8a47,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.275233 kubelet[2717]: E0912 17:40:15.273893 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.275374 kubelet[2717]: E0912 17:40:15.275238 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd46c69b9-zwwnf" Sep 12 17:40:15.275374 kubelet[2717]: E0912 17:40:15.275263 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bd46c69b9-zwwnf" Sep 12 17:40:15.278216 kubelet[2717]: E0912 17:40:15.277936 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bd46c69b9-zwwnf_calico-apiserver(8a573f69-5286-4c75-b8cd-d7019d8e8a47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bd46c69b9-zwwnf_calico-apiserver(8a573f69-5286-4c75-b8cd-d7019d8e8a47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd46c69b9-zwwnf" podUID="8a573f69-5286-4c75-b8cd-d7019d8e8a47" Sep 12 17:40:15.284373 containerd[1597]: time="2025-09-12T17:40:15.284216755Z" level=error msg="Failed to destroy network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.285379 containerd[1597]: time="2025-09-12T17:40:15.285190108Z" level=error msg="encountered an error cleaning up failed sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.285529 containerd[1597]: time="2025-09-12T17:40:15.285339309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vsvr8,Uid:beb36fc2-c828-43e2-90d6-9cffbe7e8f94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.286028 kubelet[2717]: E0912 17:40:15.285890 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.286028 kubelet[2717]: E0912 17:40:15.285962 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vsvr8" Sep 12 17:40:15.286028 kubelet[2717]: E0912 17:40:15.285985 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vsvr8" Sep 12 17:40:15.286624 kubelet[2717]: E0912 17:40:15.286042 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vsvr8_kube-system(beb36fc2-c828-43e2-90d6-9cffbe7e8f94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vsvr8_kube-system(beb36fc2-c828-43e2-90d6-9cffbe7e8f94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vsvr8" podUID="beb36fc2-c828-43e2-90d6-9cffbe7e8f94" Sep 12 17:40:15.292844 containerd[1597]: time="2025-09-12T17:40:15.292758913Z" level=error msg="Failed to destroy network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.294114 containerd[1597]: time="2025-09-12T17:40:15.293886697Z" level=error msg="encountered an error cleaning up failed sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.294114 containerd[1597]: time="2025-09-12T17:40:15.293976805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674688f4df-xgdkq,Uid:245df9e2-1f49-4d73-9c68-1b8c92656c80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.295065 kubelet[2717]: E0912 17:40:15.294555 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.295065 kubelet[2717]: E0912 17:40:15.294649 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674688f4df-xgdkq" Sep 12 17:40:15.295065 kubelet[2717]: E0912 17:40:15.294681 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674688f4df-xgdkq" Sep 12 17:40:15.295276 kubelet[2717]: E0912 17:40:15.294742 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-674688f4df-xgdkq_calico-system(245df9e2-1f49-4d73-9c68-1b8c92656c80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-674688f4df-xgdkq_calico-system(245df9e2-1f49-4d73-9c68-1b8c92656c80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-674688f4df-xgdkq" podUID="245df9e2-1f49-4d73-9c68-1b8c92656c80" Sep 12 17:40:15.334918 containerd[1597]: time="2025-09-12T17:40:15.334711787Z" level=error msg="Failed to destroy network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.335784 containerd[1597]: time="2025-09-12T17:40:15.335620390Z" level=error msg="encountered an error cleaning up failed sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.335784 containerd[1597]: time="2025-09-12T17:40:15.335710868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n244d,Uid:95231497-8828-48f6-9eda-7b0dd9295eb8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.336809 kubelet[2717]: E0912 17:40:15.336252 2717 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.336809 kubelet[2717]: E0912 17:40:15.336339 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n244d" Sep 12 17:40:15.336809 kubelet[2717]: E0912 17:40:15.336374 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n244d" Sep 12 17:40:15.337096 kubelet[2717]: E0912 17:40:15.336453 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n244d_calico-system(95231497-8828-48f6-9eda-7b0dd9295eb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n244d_calico-system(95231497-8828-48f6-9eda-7b0dd9295eb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8" Sep 12 17:40:15.368554 kubelet[2717]: I0912 17:40:15.368408 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:15.374063 kubelet[2717]: I0912 17:40:15.374026 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:15.376843 containerd[1597]: time="2025-09-12T17:40:15.376683121Z" level=info msg="StopPodSandbox for \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\"" Sep 12 17:40:15.382326 containerd[1597]: time="2025-09-12T17:40:15.379600587Z" level=info msg="StopPodSandbox for \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\"" Sep 12 17:40:15.383265 containerd[1597]: time="2025-09-12T17:40:15.383172106Z" level=info msg="Ensure that sandbox 0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be in task-service has been cleanup successfully" Sep 12 17:40:15.386004 containerd[1597]: time="2025-09-12T17:40:15.385801096Z" level=info msg="Ensure that sandbox 4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3 in task-service has been cleanup successfully" Sep 12 17:40:15.390553 kubelet[2717]: I0912 17:40:15.388041 2717 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:15.391447 containerd[1597]: time="2025-09-12T17:40:15.391397978Z" level=info msg="StopPodSandbox for \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\"" Sep 12 17:40:15.393249 containerd[1597]: time="2025-09-12T17:40:15.392874022Z" level=info msg="Ensure that sandbox 71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567 in task-service has been cleanup successfully" Sep 12 17:40:15.396647 kubelet[2717]: I0912 17:40:15.396608 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:15.397528 containerd[1597]: time="2025-09-12T17:40:15.397479416Z" level=info msg="StopPodSandbox for \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\"" Sep 12 17:40:15.398444 containerd[1597]: time="2025-09-12T17:40:15.397766766Z" level=info msg="Ensure that sandbox bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102 in task-service has been cleanup successfully" Sep 12 17:40:15.415061 kubelet[2717]: I0912 17:40:15.415000 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:15.417765 containerd[1597]: time="2025-09-12T17:40:15.417581801Z" level=info msg="StopPodSandbox for \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\"" Sep 12 17:40:15.418930 containerd[1597]: time="2025-09-12T17:40:15.417839030Z" level=info msg="Ensure that sandbox 6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41 in task-service has been cleanup successfully" Sep 12 17:40:15.425917 kubelet[2717]: I0912 17:40:15.424794 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:15.430237 containerd[1597]: time="2025-09-12T17:40:15.427927490Z" level=info msg="StopPodSandbox for \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\"" Sep 12 17:40:15.430237 containerd[1597]: time="2025-09-12T17:40:15.428196819Z" level=info msg="Ensure that sandbox 4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309 in task-service has been cleanup successfully" Sep 12 17:40:15.442463 kubelet[2717]: I0912 17:40:15.442421 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:15.444564 containerd[1597]: time="2025-09-12T17:40:15.444503197Z" level=info msg="StopPodSandbox for \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\"" Sep 12 17:40:15.448410 containerd[1597]: time="2025-09-12T17:40:15.447513635Z" level=info msg="Ensure that sandbox cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593 in task-service has been cleanup successfully" Sep 12 17:40:15.455556 kubelet[2717]: I0912 17:40:15.454665 2717 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:15.459440 containerd[1597]: time="2025-09-12T17:40:15.457365432Z" level=info msg="StopPodSandbox for \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\"" Sep 12 17:40:15.459440 containerd[1597]: time="2025-09-12T17:40:15.457664013Z" level=info msg="Ensure that sandbox 
3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978 in task-service has been cleanup successfully" Sep 12 17:40:15.589939 containerd[1597]: time="2025-09-12T17:40:15.589872526Z" level=error msg="StopPodSandbox for \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\" failed" error="failed to destroy network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.590232 containerd[1597]: time="2025-09-12T17:40:15.590207395Z" level=error msg="StopPodSandbox for \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\" failed" error="failed to destroy network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.590534 kubelet[2717]: E0912 17:40:15.590476 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:15.591055 kubelet[2717]: E0912 17:40:15.590481 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:15.591055 kubelet[2717]: E0912 17:40:15.590774 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be"} Sep 12 17:40:15.591055 kubelet[2717]: E0912 17:40:15.590937 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d82ad19b-3db7-4b09-ad0d-31652f615ba5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:15.591055 kubelet[2717]: E0912 17:40:15.590823 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102"} Sep 12 17:40:15.591055 kubelet[2717]: E0912 17:40:15.590979 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d82ad19b-3db7-4b09-ad0d-31652f615ba5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mpsh9" podUID="d82ad19b-3db7-4b09-ad0d-31652f615ba5" Sep 12 17:40:15.591416 kubelet[2717]: E0912 17:40:15.591015 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"beb36fc2-c828-43e2-90d6-9cffbe7e8f94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:15.591416 kubelet[2717]: E0912 17:40:15.591053 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"beb36fc2-c828-43e2-90d6-9cffbe7e8f94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vsvr8" podUID="beb36fc2-c828-43e2-90d6-9cffbe7e8f94" Sep 12 17:40:15.615821 containerd[1597]: time="2025-09-12T17:40:15.615490048Z" level=error msg="StopPodSandbox for \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\" failed" error="failed to destroy network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.617219 kubelet[2717]: E0912 17:40:15.617014 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:15.617219 kubelet[2717]: E0912 17:40:15.617071 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309"} Sep 12 17:40:15.617219 kubelet[2717]: E0912 17:40:15.617106 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"245df9e2-1f49-4d73-9c68-1b8c92656c80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:15.617219 kubelet[2717]: E0912 17:40:15.617131 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"245df9e2-1f49-4d73-9c68-1b8c92656c80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-674688f4df-xgdkq" podUID="245df9e2-1f49-4d73-9c68-1b8c92656c80" Sep 12 17:40:15.641335 containerd[1597]: time="2025-09-12T17:40:15.640105174Z" level=error msg="StopPodSandbox for \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\" failed" error="failed to destroy network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.641487 kubelet[2717]: E0912 17:40:15.640382 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:15.641487 kubelet[2717]: E0912 17:40:15.640439 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3"} Sep 12 17:40:15.641487 kubelet[2717]: E0912 17:40:15.640496 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d624f012-d50d-4bd5-9261-9b5b725646ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:15.641487 kubelet[2717]: E0912 17:40:15.640558 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d624f012-d50d-4bd5-9261-9b5b725646ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-8q4cb" podUID="d624f012-d50d-4bd5-9261-9b5b725646ef" Sep 12 17:40:15.656360 containerd[1597]: time="2025-09-12T17:40:15.656034975Z" level=error msg="StopPodSandbox for \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\" failed" error="failed to destroy network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.657441 containerd[1597]: time="2025-09-12T17:40:15.656634213Z" level=error msg="StopPodSandbox for \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\" failed" error="failed to destroy network for sandbox 
\"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.659025 kubelet[2717]: E0912 17:40:15.656340 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:15.659025 kubelet[2717]: E0912 17:40:15.656495 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978"} Sep 12 17:40:15.659025 kubelet[2717]: E0912 17:40:15.656773 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:15.659025 kubelet[2717]: E0912 17:40:15.656801 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41"} Sep 12 17:40:15.659025 kubelet[2717]: E0912 17:40:15.656825 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95231497-8828-48f6-9eda-7b0dd9295eb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:15.659370 kubelet[2717]: E0912 17:40:15.656884 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95231497-8828-48f6-9eda-7b0dd9295eb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n244d" podUID="95231497-8828-48f6-9eda-7b0dd9295eb8" Sep 12 17:40:15.659370 kubelet[2717]: E0912 17:40:15.656985 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b1dbfd9-0d2b-4616-9684-f70423a56727\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:15.659370 
kubelet[2717]: E0912 17:40:15.657008 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b1dbfd9-0d2b-4616-9684-f70423a56727\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7795d66b4-bkptg" podUID="4b1dbfd9-0d2b-4616-9684-f70423a56727" Sep 12 17:40:15.667484 containerd[1597]: time="2025-09-12T17:40:15.667228755Z" level=error msg="StopPodSandbox for \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\" failed" error="failed to destroy network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:15.668340 kubelet[2717]: E0912 17:40:15.668098 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:15.668340 kubelet[2717]: E0912 17:40:15.668181 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567"} Sep 12 17:40:15.668340 kubelet[2717]: E0912 17:40:15.668220 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bb2647e-616b-4a7b-a0a3-710344efe361\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:15.668340 kubelet[2717]: E0912 17:40:15.668246 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6bb2647e-616b-4a7b-a0a3-710344efe361\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd46c69b9-pfsld" podUID="6bb2647e-616b-4a7b-a0a3-710344efe361" Sep 12 17:40:15.668703 containerd[1597]: time="2025-09-12T17:40:15.668276055Z" level=error msg="StopPodSandbox for \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\" failed" error="failed to destroy network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 12 17:40:15.669053 kubelet[2717]: E0912 17:40:15.668992 2717 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:15.669053 kubelet[2717]: E0912 17:40:15.669049 2717 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593"} Sep 12 17:40:15.669167 kubelet[2717]: E0912 17:40:15.669074 2717 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a573f69-5286-4c75-b8cd-d7019d8e8a47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:15.669167 kubelet[2717]: E0912 17:40:15.669128 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a573f69-5286-4c75-b8cd-d7019d8e8a47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bd46c69b9-zwwnf" podUID="8a573f69-5286-4c75-b8cd-d7019d8e8a47" Sep 12 17:40:21.673162 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:21.670104 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:21.670204 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:23.426548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272244981.mount: Deactivated successfully. 
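Every ADD and DEL failure in the burst above shares a single root cause: the Calico CNI plugin resolves the local node's name by reading /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted from the host. Until that container is up, every sandbox create and teardown is rejected with the same stat error, and the kubelet keeps retrying. A minimal Go sketch of the lookup, assuming only the path and error text visible in the log (the real logic lives in Calico's CNI plugin, not here):

```go
// Hedged sketch: approximates the nodename lookup the Calico CNI plugin
// performs before any ADD/DEL can proceed; illustrative, not Calico's code.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// The exact failure mode in the log above: the file does not exist
		// until calico/node has started and mounted /var/lib/calico/.
		return "", fmt.Errorf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(name)
}
```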
Sep 12 17:40:23.586043 containerd[1597]: time="2025-09-12T17:40:23.576679700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.210846092s" Sep 12 17:40:23.586043 containerd[1597]: time="2025-09-12T17:40:23.585769653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:40:23.589799 containerd[1597]: time="2025-09-12T17:40:23.585902146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:40:23.589799 containerd[1597]: time="2025-09-12T17:40:23.589212053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:23.643435 containerd[1597]: time="2025-09-12T17:40:23.643230544Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:23.644423 containerd[1597]: time="2025-09-12T17:40:23.644360076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:23.694878 containerd[1597]: time="2025-09-12T17:40:23.694579964Z" level=info msg="CreateContainer within sandbox \"3490e0f294965c005ddacca2eafca26d33054b5a8016f6f6df0344b4cc97044c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:40:23.717952 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:23.717548 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:23.717636 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:23.803780 containerd[1597]: time="2025-09-12T17:40:23.803693703Z" level=info msg="CreateContainer within sandbox \"3490e0f294965c005ddacca2eafca26d33054b5a8016f6f6df0344b4cc97044c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9bc96224e3ae2f4742afc4c4dd8fa0a4cdd6561472ff9ddc61acb1c93c979f34\"" Sep 12 17:40:23.806532 containerd[1597]: time="2025-09-12T17:40:23.806465342Z" level=info msg="StartContainer for \"9bc96224e3ae2f4742afc4c4dd8fa0a4cdd6561472ff9ddc61acb1c93c979f34\"" Sep 12 17:40:24.068482 containerd[1597]: time="2025-09-12T17:40:24.068091540Z" level=info msg="StartContainer for \"9bc96224e3ae2f4742afc4c4dd8fa0a4cdd6561472ff9ddc61acb1c93c979f34\" returns successfully" Sep 12 17:40:24.201905 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:40:24.203307 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
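With the calico/node image pulled and its container started (and the kernel loading WireGuard, which Calico can use for encryption), the nodename file can finally be written, which is what unblocks the retries that follow. The pull statistics are also self-consistent: 157,078,201 bytes in 9.210846092s is roughly 16 MiB/s, as a back-of-envelope check shows (values copied from the log):

```go
// Throughput check for the image pull reported above.
package main

import "fmt"

func main() {
	const bytes = 157078201.0   // image size from the log
	const seconds = 9.210846092 // pull duration from the log
	fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1<<20)) // prints ≈ 16.3 MiB/s
}
```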
Sep 12 17:40:24.460106 containerd[1597]: time="2025-09-12T17:40:24.458202420Z" level=info msg="StopPodSandbox for \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\"" Sep 12 17:40:24.677303 kubelet[2717]: I0912 17:40:24.668485 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dwzbx" podStartSLOduration=2.557727933 podStartE2EDuration="24.59856963s" podCreationTimestamp="2025-09-12 17:40:00 +0000 UTC" firstStartedPulling="2025-09-12 17:40:01.602810501 +0000 UTC m=+20.730706185" lastFinishedPulling="2025-09-12 17:40:23.643652198 +0000 UTC m=+42.771547882" observedRunningTime="2025-09-12 17:40:24.589464534 +0000 UTC m=+43.717360214" watchObservedRunningTime="2025-09-12 17:40:24.59856963 +0000 UTC m=+43.726465313" Sep 12 17:40:24.768095 systemd[1]: run-containerd-runc-k8s.io-9bc96224e3ae2f4742afc4c4dd8fa0a4cdd6561472ff9ddc61acb1c93c979f34-runc.SLteFi.mount: Deactivated successfully. Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.807 [INFO][3959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.810 [INFO][3959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" iface="eth0" netns="/var/run/netns/cni-7ac51ff0-e375-8a83-7598-03f30789a2c3" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.811 [INFO][3959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" iface="eth0" netns="/var/run/netns/cni-7ac51ff0-e375-8a83-7598-03f30789a2c3" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.811 [INFO][3959] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" iface="eth0" netns="/var/run/netns/cni-7ac51ff0-e375-8a83-7598-03f30789a2c3" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.811 [INFO][3959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.811 [INFO][3959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.993 [INFO][3984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.995 [INFO][3984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:24.996 [INFO][3984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:25.014 [WARNING][3984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:25.014 [INFO][3984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:25.017 [INFO][3984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:25.025393 containerd[1597]: 2025-09-12 17:40:25.019 [INFO][3959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:25.025393 containerd[1597]: time="2025-09-12T17:40:25.025166310Z" level=info msg="TearDown network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\" successfully" Sep 12 17:40:25.025393 containerd[1597]: time="2025-09-12T17:40:25.025219654Z" level=info msg="StopPodSandbox for \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\" returns successfully" Sep 12 17:40:25.034421 systemd[1]: run-netns-cni\x2d7ac51ff0\x2de375\x2d8a83\x2d7598\x2d03f30789a2c3.mount: Deactivated successfully. Sep 12 17:40:25.210110 kubelet[2717]: I0912 17:40:25.209491 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/245df9e2-1f49-4d73-9c68-1b8c92656c80-whisker-ca-bundle\") pod \"245df9e2-1f49-4d73-9c68-1b8c92656c80\" (UID: \"245df9e2-1f49-4d73-9c68-1b8c92656c80\") " Sep 12 17:40:25.210110 kubelet[2717]: I0912 17:40:25.209713 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/245df9e2-1f49-4d73-9c68-1b8c92656c80-whisker-backend-key-pair\") pod \"245df9e2-1f49-4d73-9c68-1b8c92656c80\" (UID: \"245df9e2-1f49-4d73-9c68-1b8c92656c80\") " Sep 12 17:40:25.210110 kubelet[2717]: I0912 17:40:25.209749 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr7hk\" (UniqueName: \"kubernetes.io/projected/245df9e2-1f49-4d73-9c68-1b8c92656c80-kube-api-access-mr7hk\") pod \"245df9e2-1f49-4d73-9c68-1b8c92656c80\" (UID: \"245df9e2-1f49-4d73-9c68-1b8c92656c80\") " Sep 12 17:40:25.238461 systemd[1]: var-lib-kubelet-pods-245df9e2\x2d1f49\x2d4d73\x2d9c68\x2d1b8c92656c80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmr7hk.mount: Deactivated successfully. Sep 12 17:40:25.241413 kubelet[2717]: I0912 17:40:25.238212 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/245df9e2-1f49-4d73-9c68-1b8c92656c80-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "245df9e2-1f49-4d73-9c68-1b8c92656c80" (UID: "245df9e2-1f49-4d73-9c68-1b8c92656c80"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:40:25.241413 kubelet[2717]: I0912 17:40:25.236923 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245df9e2-1f49-4d73-9c68-1b8c92656c80-kube-api-access-mr7hk" (OuterVolumeSpecName: "kube-api-access-mr7hk") pod "245df9e2-1f49-4d73-9c68-1b8c92656c80" (UID: "245df9e2-1f49-4d73-9c68-1b8c92656c80"). InnerVolumeSpecName "kube-api-access-mr7hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:40:25.241413 kubelet[2717]: I0912 17:40:25.241346 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245df9e2-1f49-4d73-9c68-1b8c92656c80-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "245df9e2-1f49-4d73-9c68-1b8c92656c80" (UID: "245df9e2-1f49-4d73-9c68-1b8c92656c80"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:40:25.310945 kubelet[2717]: I0912 17:40:25.310418 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/245df9e2-1f49-4d73-9c68-1b8c92656c80-whisker-ca-bundle\") on node \"ci-4081.3.6-9-2d91ca838a\" DevicePath \"\"" Sep 12 17:40:25.310945 kubelet[2717]: I0912 17:40:25.310470 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/245df9e2-1f49-4d73-9c68-1b8c92656c80-whisker-backend-key-pair\") on node \"ci-4081.3.6-9-2d91ca838a\" DevicePath \"\"" Sep 12 17:40:25.310945 kubelet[2717]: I0912 17:40:25.310482 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr7hk\" (UniqueName: \"kubernetes.io/projected/245df9e2-1f49-4d73-9c68-1b8c92656c80-kube-api-access-mr7hk\") on node \"ci-4081.3.6-9-2d91ca838a\" DevicePath \"\"" Sep 12 17:40:25.430792 systemd[1]: var-lib-kubelet-pods-245df9e2\x2d1f49\x2d4d73\x2d9c68\x2d1b8c92656c80-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:40:25.737126 kubelet[2717]: I0912 17:40:25.736656 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2ae9550-ab0d-42fa-8de4-a0becb82dcc1-whisker-ca-bundle\") pod \"whisker-59c457cc56-7stv7\" (UID: \"a2ae9550-ab0d-42fa-8de4-a0becb82dcc1\") " pod="calico-system/whisker-59c457cc56-7stv7" Sep 12 17:40:25.737126 kubelet[2717]: I0912 17:40:25.736734 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pz8h\" (UniqueName: \"kubernetes.io/projected/a2ae9550-ab0d-42fa-8de4-a0becb82dcc1-kube-api-access-9pz8h\") pod \"whisker-59c457cc56-7stv7\" (UID: \"a2ae9550-ab0d-42fa-8de4-a0becb82dcc1\") " pod="calico-system/whisker-59c457cc56-7stv7" Sep 12 17:40:25.737126 kubelet[2717]: I0912 17:40:25.736769 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2ae9550-ab0d-42fa-8de4-a0becb82dcc1-whisker-backend-key-pair\") pod \"whisker-59c457cc56-7stv7\" (UID: \"a2ae9550-ab0d-42fa-8de4-a0becb82dcc1\") " pod="calico-system/whisker-59c457cc56-7stv7" Sep 12 17:40:25.766385 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:25.766403 systemd-resolved[1483]: Flushed all caches. 
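The old whisker pod (UID 245df9e2-1f49-4d73-9c68-1b8c92656c80) is torn down volume by volume: the reconciler unmounts whisker-ca-bundle, whisker-backend-key-pair, and the kube-api-access-mr7hk token, marks each detached, and then immediately starts attaching the same three volume names for the replacement pod under UID a2ae9550-ab0d-42fa-8de4-a0becb82dcc1. The quoted UniqueName strings follow a plugin/podUID-volumeName pattern; the helper below merely reproduces the strings seen above and is not the kubelet's actual constructor:

```go
// Hedged illustration of the volume "UniqueName" format in the reconciler
// messages above: <plugin>/<podUID>-<volumeName>.
package main

import "fmt"

func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	// Pod UID and volume name copied from the log lines above.
	fmt.Println(uniqueVolumeName("kubernetes.io/configmap",
		"a2ae9550-ab0d-42fa-8de4-a0becb82dcc1", "whisker-ca-bundle"))
	// kubernetes.io/configmap/a2ae9550-ab0d-42fa-8de4-a0becb82dcc1-whisker-ca-bundle
}
```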
Sep 12 17:40:25.766918 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:25.958787 containerd[1597]: time="2025-09-12T17:40:25.958735081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c457cc56-7stv7,Uid:a2ae9550-ab0d-42fa-8de4-a0becb82dcc1,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:26.128763 containerd[1597]: time="2025-09-12T17:40:26.128702522Z" level=info msg="StopPodSandbox for \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\"" Sep 12 17:40:26.130296 containerd[1597]: time="2025-09-12T17:40:26.129947203Z" level=info msg="StopPodSandbox for \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\"" Sep 12 17:40:26.261530 systemd-networkd[1220]: cali5ff67c32159: Link UP Sep 12 17:40:26.261987 systemd-networkd[1220]: cali5ff67c32159: Gained carrier Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.013 [INFO][4033] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.031 [INFO][4033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0 whisker-59c457cc56- calico-system a2ae9550-ab0d-42fa-8de4-a0becb82dcc1 916 0 2025-09-12 17:40:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59c457cc56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-9-2d91ca838a whisker-59c457cc56-7stv7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5ff67c32159 [] [] }} ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Namespace="calico-system" Pod="whisker-59c457cc56-7stv7" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.031 [INFO][4033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Namespace="calico-system" Pod="whisker-59c457cc56-7stv7" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.074 [INFO][4045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" HandleID="k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.075 [INFO][4045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" HandleID="k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-9-2d91ca838a", "pod":"whisker-59c457cc56-7stv7", "timestamp":"2025-09-12 17:40:26.07481731 +0000 UTC"}, Hostname:"ci-4081.3.6-9-2d91ca838a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:26.318110 
containerd[1597]: 2025-09-12 17:40:26.075 [INFO][4045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.075 [INFO][4045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.075 [INFO][4045] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-2d91ca838a' Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.086 [INFO][4045] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.099 [INFO][4045] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.107 [INFO][4045] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.111 [INFO][4045] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.116 [INFO][4045] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.116 [INFO][4045] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.120 [INFO][4045] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.130 [INFO][4045] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.150 [INFO][4045] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.65/26] block=192.168.82.64/26 handle="k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.150 [INFO][4045] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.65/26] handle="k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.150 [INFO][4045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:26.318110 containerd[1597]: 2025-09-12 17:40:26.150 [INFO][4045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.65/26] IPv6=[] ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" HandleID="k8s-pod-network.f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" Sep 12 17:40:26.325292 containerd[1597]: 2025-09-12 17:40:26.176 [INFO][4033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Namespace="calico-system" Pod="whisker-59c457cc56-7stv7" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0", GenerateName:"whisker-59c457cc56-", Namespace:"calico-system", SelfLink:"", UID:"a2ae9550-ab0d-42fa-8de4-a0becb82dcc1", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59c457cc56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"", Pod:"whisker-59c457cc56-7stv7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5ff67c32159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:26.325292 containerd[1597]: 2025-09-12 17:40:26.176 [INFO][4033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.65/32] ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Namespace="calico-system" Pod="whisker-59c457cc56-7stv7" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" Sep 12 17:40:26.325292 containerd[1597]: 2025-09-12 17:40:26.176 [INFO][4033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ff67c32159 ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Namespace="calico-system" Pod="whisker-59c457cc56-7stv7" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" Sep 12 17:40:26.325292 containerd[1597]: 2025-09-12 17:40:26.263 [INFO][4033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Namespace="calico-system" Pod="whisker-59c457cc56-7stv7" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" Sep 12 17:40:26.325292 containerd[1597]: 2025-09-12 17:40:26.264 [INFO][4033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Namespace="calico-system"
Pod="whisker-59c457cc56-7stv7" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0", GenerateName:"whisker-59c457cc56-", Namespace:"calico-system", SelfLink:"", UID:"a2ae9550-ab0d-42fa-8de4-a0becb82dcc1", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59c457cc56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b", Pod:"whisker-59c457cc56-7stv7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5ff67c32159", MAC:"fe:94:bb:10:c6:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:26.325292 containerd[1597]: 2025-09-12 17:40:26.308 [INFO][4033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b" Namespace="calico-system" Pod="whisker-59c457cc56-7stv7" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--59c457cc56--7stv7-eth0" Sep 12 17:40:26.518046 containerd[1597]: time="2025-09-12T17:40:26.511496092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:26.518046 containerd[1597]: time="2025-09-12T17:40:26.514005307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:26.518046 containerd[1597]: time="2025-09-12T17:40:26.514053447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:26.522238 containerd[1597]: time="2025-09-12T17:40:26.521020580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.473 [INFO][4086] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.475 [INFO][4086] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" iface="eth0" netns="/var/run/netns/cni-24db66ab-10e3-5af8-854a-5dc08a602543" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.477 [INFO][4086] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth.
ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" iface="eth0" netns="/var/run/netns/cni-24db66ab-10e3-5af8-854a-5dc08a602543" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.479 [INFO][4086] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" iface="eth0" netns="/var/run/netns/cni-24db66ab-10e3-5af8-854a-5dc08a602543" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.479 [INFO][4086] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.481 [INFO][4086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.823 [INFO][4191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.824 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.824 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.866 [WARNING][4191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.866 [INFO][4191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.882 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:26.918590 containerd[1597]: 2025-09-12 17:40:26.906 [INFO][4086] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:26.933215 containerd[1597]: time="2025-09-12T17:40:26.932351459Z" level=info msg="TearDown network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\" successfully" Sep 12 17:40:26.933215 containerd[1597]: time="2025-09-12T17:40:26.932439119Z" level=info msg="StopPodSandbox for \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\" returns successfully" Sep 12 17:40:26.938319 systemd[1]: run-netns-cni\x2d24db66ab\x2d10e3\x2d5af8\x2d854a\x2d5dc08a602543.mount: Deactivated successfully. 
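Note the [WARNING] "Asked to release address but it doesn't exist. Ignoring" in the teardown above: Calico's IPAM treats a release for an unknown handle as success rather than an error, so a StopPodSandbox that previously failed before any address was recorded can still complete cleanly on retry. A hedged sketch of that idempotent-delete pattern (the map stands in for the real IPAM datastore):

```go
// Idempotent release: a missing allocation logs a warning and succeeds,
// so repeated teardowns converge instead of failing forever.
package main

import "log"

type store map[string]string // handleID -> allocated address (stand-in)

func release(s store, handleID string) {
	if _, ok := s[handleID]; !ok {
		log.Printf("[WARNING] Asked to release address but it doesn't exist. Ignoring handleID=%q", handleID)
		return
	}
	delete(s, handleID)
	log.Printf("[INFO] released address for handleID=%q", handleID)
}

func main() {
	s := store{}
	// Handle naming follows the k8s-pod-network.<containerID> pattern from the log.
	release(s, "k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593")
}
```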
Sep 12 17:40:26.944653 containerd[1597]: time="2025-09-12T17:40:26.938827204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd46c69b9-zwwnf,Uid:8a573f69-5286-4c75-b8cd-d7019d8e8a47,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.406 [INFO][4081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.427 [INFO][4081] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" iface="eth0" netns="/var/run/netns/cni-2645c7ad-e298-48bd-5eb0-c94732997135" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.432 [INFO][4081] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" iface="eth0" netns="/var/run/netns/cni-2645c7ad-e298-48bd-5eb0-c94732997135" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.436 [INFO][4081] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" iface="eth0" netns="/var/run/netns/cni-2645c7ad-e298-48bd-5eb0-c94732997135" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.437 [INFO][4081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.437 [INFO][4081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.895 [INFO][4158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.903 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.904 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.948 [WARNING][4158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.948 [INFO][4158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.956 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:27.007564 containerd[1597]: 2025-09-12 17:40:26.967 [INFO][4081] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:27.007564 containerd[1597]: time="2025-09-12T17:40:27.002339759Z" level=info msg="TearDown network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\" successfully" Sep 12 17:40:27.007564 containerd[1597]: time="2025-09-12T17:40:27.002383967Z" level=info msg="StopPodSandbox for \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\" returns successfully" Sep 12 17:40:27.021899 kubelet[2717]: E0912 17:40:27.011969 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:27.024485 containerd[1597]: time="2025-09-12T17:40:27.012983402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vsvr8,Uid:beb36fc2-c828-43e2-90d6-9cffbe7e8f94,Namespace:kube-system,Attempt:1,}" Sep 12 17:40:27.132323 containerd[1597]: time="2025-09-12T17:40:27.132171519Z" level=info msg="StopPodSandbox for \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\"" Sep 12 17:40:27.155247 containerd[1597]: time="2025-09-12T17:40:27.154214704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c457cc56-7stv7,Uid:a2ae9550-ab0d-42fa-8de4-a0becb82dcc1,Namespace:calico-system,Attempt:0,} returns sandbox id \"f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b\"" Sep 12 17:40:27.158556 kubelet[2717]: I0912 17:40:27.158040 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245df9e2-1f49-4d73-9c68-1b8c92656c80" path="/var/lib/kubelet/pods/245df9e2-1f49-4d73-9c68-1b8c92656c80/volumes" Sep 12 17:40:27.177159 containerd[1597]: time="2025-09-12T17:40:27.176980920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:40:27.449312 systemd[1]: run-netns-cni\x2d2645c7ad\x2de298\x2d48bd\x2d5eb0\x2dc94732997135.mount: Deactivated successfully. 
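The kubelet's "Nameserver limits exceeded" event reflects the three-entry resolv.conf cap honored by glibc: extra nameservers are dropped and the applied line, here 67.207.67.3 67.207.67.2 67.207.67.3, is logged. A small sketch of that truncation; the fourth address below is hypothetical, added only to trigger the cap:

```go
// Truncation behind the "Nameserver limits exceeded" message above.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic glibc resolv.conf limit

func applyLimit(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers] // extras are silently dropped
	}
	return ns
}

func main() {
	// 192.0.2.1 (TEST-NET) is a hypothetical extra entry, not from the log.
	applied := applyLimit([]string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "192.0.2.1"})
	fmt.Println("applied nameserver line is:", strings.Join(applied, " "))
}
```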
Sep 12 17:40:27.518980 systemd-networkd[1220]: cali35c3657e946: Link UP Sep 12 17:40:27.522029 systemd-networkd[1220]: cali35c3657e946: Gained carrier Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.227 [INFO][4253] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.254 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0 calico-apiserver-5bd46c69b9- calico-apiserver 8a573f69-5286-4c75-b8cd-d7019d8e8a47 926 0 2025-09-12 17:39:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bd46c69b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-9-2d91ca838a calico-apiserver-5bd46c69b9-zwwnf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35c3657e946 [] [] }} ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-zwwnf" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.254 [INFO][4253] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-zwwnf" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.379 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" HandleID="k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.379 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" HandleID="k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d12c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-9-2d91ca838a", "pod":"calico-apiserver-5bd46c69b9-zwwnf", "timestamp":"2025-09-12 17:40:27.379305615 +0000 UTC"}, Hostname:"ci-4081.3.6-9-2d91ca838a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.379 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.379 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.379 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-2d91ca838a' Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.402 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.423 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.457 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.465 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.470 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.470 [INFO][4288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.475 [INFO][4288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9 Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.485 [INFO][4288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.497 [INFO][4288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.66/26] block=192.168.82.64/26 handle="k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.497 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.66/26] handle="k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.498 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
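[Annotation] The IPAM sequence above confirms host affinity for block 192.168.82.64/26 and then claims 192.168.82.66/26 from it; a claim is only valid if the address lies inside the affine block. A quick check of that arithmetic with Go's net/netip — a verification sketch, not Calico's ipam code:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Host-affine block and the address the log claims from it.
	block := netip.MustParsePrefix("192.168.82.64/26")
	claimed := netip.MustParseAddr("192.168.82.66")
	fmt.Println(block.Contains(claimed)) // true: .66 falls in .64-.127
}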
Sep 12 17:40:27.564389 containerd[1597]: 2025-09-12 17:40:27.498 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.66/26] IPv6=[] ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" HandleID="k8s-pod-network.6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:27.570331 containerd[1597]: 2025-09-12 17:40:27.505 [INFO][4253] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-zwwnf" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0", GenerateName:"calico-apiserver-5bd46c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a573f69-5286-4c75-b8cd-d7019d8e8a47", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd46c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"", Pod:"calico-apiserver-5bd46c69b9-zwwnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35c3657e946", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:27.570331 containerd[1597]: 2025-09-12 17:40:27.505 [INFO][4253] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.66/32] ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-zwwnf" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:27.570331 containerd[1597]: 2025-09-12 17:40:27.506 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35c3657e946 ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-zwwnf" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:27.570331 containerd[1597]: 2025-09-12 17:40:27.524 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-zwwnf" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:27.570331 containerd[1597]: 2025-09-12 17:40:27.525 
[INFO][4253] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-zwwnf" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0", GenerateName:"calico-apiserver-5bd46c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a573f69-5286-4c75-b8cd-d7019d8e8a47", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd46c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9", Pod:"calico-apiserver-5bd46c69b9-zwwnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35c3657e946", MAC:"42:47:75:37:92:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:27.570331 containerd[1597]: 2025-09-12 17:40:27.550 [INFO][4253] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-zwwnf" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:27.624591 systemd-networkd[1220]: cali26ec317096b: Link UP Sep 12 17:40:27.629159 systemd-networkd[1220]: cali26ec317096b: Gained carrier Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.417 [INFO][4281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.419 [INFO][4281] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" iface="eth0" netns="/var/run/netns/cni-57805837-cbce-dca6-ced3-ee499283b28e" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.419 [INFO][4281] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" iface="eth0" netns="/var/run/netns/cni-57805837-cbce-dca6-ced3-ee499283b28e" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.419 [INFO][4281] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" iface="eth0" netns="/var/run/netns/cni-57805837-cbce-dca6-ced3-ee499283b28e" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.420 [INFO][4281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.420 [INFO][4281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.527 [INFO][4309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.527 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.618 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.634 [WARNING][4309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.634 [INFO][4309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.641 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:27.648019 containerd[1597]: 2025-09-12 17:40:27.646 [INFO][4281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:27.649165 containerd[1597]: time="2025-09-12T17:40:27.648068817Z" level=info msg="TearDown network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\" successfully" Sep 12 17:40:27.649165 containerd[1597]: time="2025-09-12T17:40:27.648112738Z" level=info msg="StopPodSandbox for \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\" returns successfully" Sep 12 17:40:27.654431 containerd[1597]: time="2025-09-12T17:40:27.652370465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n244d,Uid:95231497-8828-48f6-9eda-7b0dd9295eb8,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:27.657281 systemd[1]: run-netns-cni\x2d57805837\x2dcbce\x2ddca6\x2dced3\x2dee499283b28e.mount: Deactivated successfully. 
Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.284 [INFO][4243] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.321 [INFO][4243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0 coredns-7c65d6cfc9- kube-system beb36fc2-c828-43e2-90d6-9cffbe7e8f94 925 0 2025-09-12 17:39:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-9-2d91ca838a coredns-7c65d6cfc9-vsvr8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26ec317096b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vsvr8" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.322 [INFO][4243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vsvr8" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.473 [INFO][4297] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" HandleID="k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.474 [INFO][4297] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" HandleID="k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001239e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-9-2d91ca838a", "pod":"coredns-7c65d6cfc9-vsvr8", "timestamp":"2025-09-12 17:40:27.473051868 +0000 UTC"}, Hostname:"ci-4081.3.6-9-2d91ca838a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.474 [INFO][4297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.498 [INFO][4297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.499 [INFO][4297] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-2d91ca838a' Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.514 [INFO][4297] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.535 [INFO][4297] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.551 [INFO][4297] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.561 [INFO][4297] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.571 [INFO][4297] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.572 [INFO][4297] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.576 [INFO][4297] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.589 [INFO][4297] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.617 [INFO][4297] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.67/26] block=192.168.82.64/26 handle="k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.617 [INFO][4297] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.67/26] handle="k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.617 [INFO][4297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
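[Annotation] The addresses come out of the block in order — .66 for the apiserver pod above, now .67 for coredns — which is consistent with first-fit scanning inside the affine /26. A toy version of that scan (the visible behavior, not Calico's allocator; the pre-used low addresses are an assumption):

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first unclaimed address in the block, scanning
// upward from the network address: first-fit allocation.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.82.64/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.82.64"): true, // assumed already taken
		netip.MustParseAddr("192.168.82.65"): true, // assumed already taken
		netip.MustParseAddr("192.168.82.66"): true, // claimed for the apiserver pod above
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println(a) // 192.168.82.67, the address coredns receives here
	}
}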
Sep 12 17:40:27.670249 containerd[1597]: 2025-09-12 17:40:27.617 [INFO][4297] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.67/26] IPv6=[] ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" HandleID="k8s-pod-network.53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.678144 containerd[1597]: 2025-09-12 17:40:27.620 [INFO][4243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vsvr8" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"beb36fc2-c828-43e2-90d6-9cffbe7e8f94", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"", Pod:"coredns-7c65d6cfc9-vsvr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26ec317096b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:27.678144 containerd[1597]: 2025-09-12 17:40:27.621 [INFO][4243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.67/32] ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vsvr8" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.678144 containerd[1597]: 2025-09-12 17:40:27.621 [INFO][4243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26ec317096b ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vsvr8" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.678144 containerd[1597]: 2025-09-12 17:40:27.627 [INFO][4243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-vsvr8" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.678144 containerd[1597]: 2025-09-12 17:40:27.628 [INFO][4243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vsvr8" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"beb36fc2-c828-43e2-90d6-9cffbe7e8f94", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c", Pod:"coredns-7c65d6cfc9-vsvr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26ec317096b", MAC:"16:56:4f:f4:05:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:27.678144 containerd[1597]: 2025-09-12 17:40:27.663 [INFO][4243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vsvr8" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:27.695327 systemd-networkd[1220]: cali5ff67c32159: Gained IPv6LL Sep 12 17:40:27.790635 containerd[1597]: time="2025-09-12T17:40:27.789418488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:27.791640 containerd[1597]: time="2025-09-12T17:40:27.790451027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:27.792442 containerd[1597]: time="2025-09-12T17:40:27.791401682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:27.805771 containerd[1597]: time="2025-09-12T17:40:27.802295579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:27.820461 containerd[1597]: time="2025-09-12T17:40:27.817769109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:27.820461 containerd[1597]: time="2025-09-12T17:40:27.817865299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:27.820461 containerd[1597]: time="2025-09-12T17:40:27.817883959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:27.820461 containerd[1597]: time="2025-09-12T17:40:27.818018481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:28.182717 containerd[1597]: time="2025-09-12T17:40:28.181880008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vsvr8,Uid:beb36fc2-c828-43e2-90d6-9cffbe7e8f94,Namespace:kube-system,Attempt:1,} returns sandbox id \"53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c\"" Sep 12 17:40:28.187127 kubelet[2717]: E0912 17:40:28.187080 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:28.200173 containerd[1597]: time="2025-09-12T17:40:28.200122274Z" level=info msg="CreateContainer within sandbox \"53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:40:28.277291 containerd[1597]: time="2025-09-12T17:40:28.277234776Z" level=info msg="CreateContainer within sandbox \"53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f551cd40ae2215b36721da6ffa9282e30d57aaafdda42e8dc9e5d4b2c76a7d3e\"" Sep 12 17:40:28.278890 containerd[1597]: time="2025-09-12T17:40:28.278230504Z" level=info msg="StartContainer for \"f551cd40ae2215b36721da6ffa9282e30d57aaafdda42e8dc9e5d4b2c76a7d3e\"" Sep 12 17:40:28.390088 containerd[1597]: time="2025-09-12T17:40:28.390023417Z" level=info msg="StartContainer for \"f551cd40ae2215b36721da6ffa9282e30d57aaafdda42e8dc9e5d4b2c76a7d3e\" returns successfully" Sep 12 17:40:28.469419 systemd-networkd[1220]: calie2091b9cac6: Link UP Sep 12 17:40:28.473352 systemd-networkd[1220]: calie2091b9cac6: Gained carrier Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:27.812 [INFO][4342] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:27.887 [INFO][4342] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0 csi-node-driver- calico-system 95231497-8828-48f6-9eda-7b0dd9295eb8 932 0 2025-09-12 17:40:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-9-2d91ca838a csi-node-driver-n244d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2091b9cac6 [] [] }} ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Namespace="calico-system" Pod="csi-node-driver-n244d" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:27.887 [INFO][4342] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Namespace="calico-system" Pod="csi-node-driver-n244d" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.173 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" HandleID="k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.179 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" HandleID="k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029f370), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-9-2d91ca838a", "pod":"csi-node-driver-n244d", "timestamp":"2025-09-12 17:40:28.17328889 +0000 UTC"}, Hostname:"ci-4081.3.6-9-2d91ca838a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.180 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.185 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.186 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-2d91ca838a' Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.217 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.288 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.308 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.317 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.328 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.328 [INFO][4394] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.347 [INFO][4394] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25 Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.377 [INFO][4394] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.456 [INFO][4394] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.68/26] block=192.168.82.64/26 handle="k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.456 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.68/26] handle="k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.457 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:28.553621 containerd[1597]: 2025-09-12 17:40:28.457 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.68/26] IPv6=[] ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" HandleID="k8s-pod-network.921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:28.557096 containerd[1597]: 2025-09-12 17:40:28.461 [INFO][4342] cni-plugin/k8s.go 418: Populated endpoint ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Namespace="calico-system" Pod="csi-node-driver-n244d" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95231497-8828-48f6-9eda-7b0dd9295eb8", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"", Pod:"csi-node-driver-n244d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2091b9cac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:28.557096 containerd[1597]: 2025-09-12 17:40:28.462 [INFO][4342] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.68/32] ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Namespace="calico-system" Pod="csi-node-driver-n244d" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:28.557096 containerd[1597]: 2025-09-12 17:40:28.462 [INFO][4342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2091b9cac6 ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Namespace="calico-system" Pod="csi-node-driver-n244d" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:28.557096 containerd[1597]: 2025-09-12 17:40:28.480 [INFO][4342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Namespace="calico-system" Pod="csi-node-driver-n244d" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:28.557096 containerd[1597]: 2025-09-12 17:40:28.482 [INFO][4342] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Namespace="calico-system" Pod="csi-node-driver-n244d" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95231497-8828-48f6-9eda-7b0dd9295eb8", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25", Pod:"csi-node-driver-n244d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2091b9cac6", MAC:"6e:9b:93:40:95:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:28.557096 containerd[1597]: 2025-09-12 17:40:28.542 [INFO][4342] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25" Namespace="calico-system" Pod="csi-node-driver-n244d" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:28.621318 kubelet[2717]: E0912 17:40:28.621261 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:28.633639 containerd[1597]: time="2025-09-12T17:40:28.632829043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:28.633639 containerd[1597]: time="2025-09-12T17:40:28.632921349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:28.633639 containerd[1597]: time="2025-09-12T17:40:28.632934596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:28.633639 containerd[1597]: time="2025-09-12T17:40:28.633048032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:28.692923 kubelet[2717]: I0912 17:40:28.691726 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vsvr8" podStartSLOduration=42.691696325 podStartE2EDuration="42.691696325s" podCreationTimestamp="2025-09-12 17:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:28.685031364 +0000 UTC m=+47.812927058" watchObservedRunningTime="2025-09-12 17:40:28.691696325 +0000 UTC m=+47.819592004" Sep 12 17:40:28.831686 containerd[1597]: time="2025-09-12T17:40:28.831624910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd46c69b9-zwwnf,Uid:8a573f69-5286-4c75-b8cd-d7019d8e8a47,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9\"" Sep 12 17:40:28.884379 containerd[1597]: time="2025-09-12T17:40:28.884319825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n244d,Uid:95231497-8828-48f6-9eda-7b0dd9295eb8,Namespace:calico-system,Attempt:1,} returns sandbox id \"921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25\"" Sep 12 17:40:28.975671 kernel: bpftool[4562]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:40:29.130998 containerd[1597]: time="2025-09-12T17:40:29.128469320Z" level=info msg="StopPodSandbox for \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\"" Sep 12 17:40:29.130998 containerd[1597]: time="2025-09-12T17:40:29.130320450Z" level=info msg="StopPodSandbox for \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\"" Sep 12 17:40:29.456716 containerd[1597]: time="2025-09-12T17:40:29.454004634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:29.457465 containerd[1597]: time="2025-09-12T17:40:29.457014090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:40:29.466250 containerd[1597]: time="2025-09-12T17:40:29.462605915Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:29.477743 systemd-networkd[1220]: cali35c3657e946: Gained IPv6LL Sep 12 17:40:29.479138 systemd-networkd[1220]: cali26ec317096b: Gained IPv6LL Sep 12 17:40:29.487790 containerd[1597]: time="2025-09-12T17:40:29.487714253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:29.488744 containerd[1597]: time="2025-09-12T17:40:29.488696761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.31165499s" Sep 12 17:40:29.488824 containerd[1597]: time="2025-09-12T17:40:29.488755964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference 
\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:40:29.505460 containerd[1597]: time="2025-09-12T17:40:29.504903833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:29.507484 containerd[1597]: time="2025-09-12T17:40:29.507444708Z" level=info msg="CreateContainer within sandbox \"f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:40:29.566937 containerd[1597]: time="2025-09-12T17:40:29.566877564Z" level=info msg="CreateContainer within sandbox \"f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4eb743c7eb1a019310e413b84fac934cb774da4515707d9c0fd74948735a9f60\"" Sep 12 17:40:29.576623 containerd[1597]: time="2025-09-12T17:40:29.576270532Z" level=info msg="StartContainer for \"4eb743c7eb1a019310e413b84fac934cb774da4515707d9c0fd74948735a9f60\"" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.442 [INFO][4582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.442 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" iface="eth0" netns="/var/run/netns/cni-75ce1d00-baad-9c47-3d23-9fb49de5dea5" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.444 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" iface="eth0" netns="/var/run/netns/cni-75ce1d00-baad-9c47-3d23-9fb49de5dea5" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.445 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" iface="eth0" netns="/var/run/netns/cni-75ce1d00-baad-9c47-3d23-9fb49de5dea5" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.445 [INFO][4582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.445 [INFO][4582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.598 [INFO][4595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.602 [INFO][4595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.602 [INFO][4595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.644 [WARNING][4595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.644 [INFO][4595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.660 [INFO][4595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.704320 containerd[1597]: 2025-09-12 17:40:29.683 [INFO][4582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:29.712121 containerd[1597]: time="2025-09-12T17:40:29.711981920Z" level=info msg="TearDown network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\" successfully" Sep 12 17:40:29.716295 containerd[1597]: time="2025-09-12T17:40:29.714964138Z" level=info msg="StopPodSandbox for \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\" returns successfully" Sep 12 17:40:29.719939 containerd[1597]: time="2025-09-12T17:40:29.719800419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7795d66b4-bkptg,Uid:4b1dbfd9-0d2b-4616-9684-f70423a56727,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:29.720298 systemd[1]: run-netns-cni\x2d75ce1d00\x2dbaad\x2d9c47\x2d3d23\x2d9fb49de5dea5.mount: Deactivated successfully. Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.495 [INFO][4583] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.495 [INFO][4583] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" iface="eth0" netns="/var/run/netns/cni-8906e235-0cb5-4f09-571b-e71191b4ec71" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.496 [INFO][4583] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" iface="eth0" netns="/var/run/netns/cni-8906e235-0cb5-4f09-571b-e71191b4ec71" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.498 [INFO][4583] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" iface="eth0" netns="/var/run/netns/cni-8906e235-0cb5-4f09-571b-e71191b4ec71" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.501 [INFO][4583] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.502 [INFO][4583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.693 [INFO][4601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.694 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.694 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.702 [WARNING][4601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.703 [INFO][4601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.706 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.737186 containerd[1597]: 2025-09-12 17:40:29.708 [INFO][4583] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:29.740294 containerd[1597]: time="2025-09-12T17:40:29.738621543Z" level=info msg="TearDown network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\" successfully" Sep 12 17:40:29.740294 containerd[1597]: time="2025-09-12T17:40:29.738738896Z" level=info msg="StopPodSandbox for \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\" returns successfully" Sep 12 17:40:29.740836 containerd[1597]: time="2025-09-12T17:40:29.740500603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd46c69b9-pfsld,Uid:6bb2647e-616b-4a7b-a0a3-710344efe361,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:29.751876 systemd[1]: run-netns-cni\x2d8906e235\x2d0cb5\x2d4f09\x2d571b\x2de71191b4ec71.mount: Deactivated successfully. 
Sep 12 17:40:29.767384 kubelet[2717]: E0912 17:40:29.764039 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:30.144636 containerd[1597]: time="2025-09-12T17:40:30.144103848Z" level=info msg="StopPodSandbox for \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\"" Sep 12 17:40:30.155967 containerd[1597]: time="2025-09-12T17:40:30.152779442Z" level=info msg="StopPodSandbox for \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\"" Sep 12 17:40:30.240140 systemd-networkd[1220]: vxlan.calico: Link UP Sep 12 17:40:30.240156 systemd-networkd[1220]: vxlan.calico: Gained carrier Sep 12 17:40:30.309181 systemd-networkd[1220]: calie2091b9cac6: Gained IPv6LL Sep 12 17:40:30.509638 containerd[1597]: time="2025-09-12T17:40:30.509497643Z" level=info msg="StartContainer for \"4eb743c7eb1a019310e413b84fac934cb774da4515707d9c0fd74948735a9f60\" returns successfully" Sep 12 17:40:30.640414 systemd-networkd[1220]: calibc2e088531d: Link UP Sep 12 17:40:30.649833 systemd-networkd[1220]: calibc2e088531d: Gained carrier Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:29.947 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0 calico-apiserver-5bd46c69b9- calico-apiserver 6bb2647e-616b-4a7b-a0a3-710344efe361 961 0 2025-09-12 17:39:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bd46c69b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-9-2d91ca838a calico-apiserver-5bd46c69b9-pfsld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibc2e088531d [] [] }} ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-pfsld" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:29.948 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-pfsld" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.219 [INFO][4674] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" HandleID="k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.221 [INFO][4674] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" HandleID="k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d200), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4081.3.6-9-2d91ca838a", "pod":"calico-apiserver-5bd46c69b9-pfsld", "timestamp":"2025-09-12 17:40:30.21975165 +0000 UTC"}, Hostname:"ci-4081.3.6-9-2d91ca838a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.221 [INFO][4674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.221 [INFO][4674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.229 [INFO][4674] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-2d91ca838a' Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.303 [INFO][4674] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.352 [INFO][4674] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.484 [INFO][4674] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.492 [INFO][4674] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.513 [INFO][4674] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.513 [INFO][4674] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.519 [INFO][4674] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9 Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.546 [INFO][4674] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.572 [INFO][4674] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.69/26] block=192.168.82.64/26 handle="k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.572 [INFO][4674] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.69/26] handle="k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.573 [INFO][4674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:30.688125 containerd[1597]: 2025-09-12 17:40:30.576 [INFO][4674] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.69/26] IPv6=[] ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" HandleID="k8s-pod-network.ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:30.692374 containerd[1597]: 2025-09-12 17:40:30.599 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-pfsld" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0", GenerateName:"calico-apiserver-5bd46c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bb2647e-616b-4a7b-a0a3-710344efe361", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd46c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"", Pod:"calico-apiserver-5bd46c69b9-pfsld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc2e088531d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.692374 containerd[1597]: 2025-09-12 17:40:30.599 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.69/32] ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-pfsld" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:30.692374 containerd[1597]: 2025-09-12 17:40:30.601 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc2e088531d ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-pfsld" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:30.692374 containerd[1597]: 2025-09-12 17:40:30.660 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-pfsld" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:30.692374 containerd[1597]: 2025-09-12 17:40:30.661 
[INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-pfsld" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0", GenerateName:"calico-apiserver-5bd46c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bb2647e-616b-4a7b-a0a3-710344efe361", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd46c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9", Pod:"calico-apiserver-5bd46c69b9-pfsld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc2e088531d", MAC:"e6:90:12:f2:7b:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.692374 containerd[1597]: 2025-09-12 17:40:30.678 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9" Namespace="calico-apiserver" Pod="calico-apiserver-5bd46c69b9-pfsld" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:30.744197 containerd[1597]: time="2025-09-12T17:40:30.743924982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:30.744605 containerd[1597]: time="2025-09-12T17:40:30.744425182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:30.744605 containerd[1597]: time="2025-09-12T17:40:30.744479285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:30.747862 containerd[1597]: time="2025-09-12T17:40:30.746408478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:30.767739 systemd-networkd[1220]: caliae9de97ccbd: Link UP Sep 12 17:40:30.784764 systemd-networkd[1220]: caliae9de97ccbd: Gained carrier Sep 12 17:40:30.790756 kubelet[2717]: E0912 17:40:30.788226 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.067 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0 calico-kube-controllers-7795d66b4- calico-system 4b1dbfd9-0d2b-4616-9684-f70423a56727 959 0 2025-09-12 17:40:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7795d66b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-9-2d91ca838a calico-kube-controllers-7795d66b4-bkptg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae9de97ccbd [] [] }} ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Namespace="calico-system" Pod="calico-kube-controllers-7795d66b4-bkptg" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.068 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Namespace="calico-system" Pod="calico-kube-controllers-7795d66b4-bkptg" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.553 [INFO][4681] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" HandleID="k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.572 [INFO][4681] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" HandleID="k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000383380), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-9-2d91ca838a", "pod":"calico-kube-controllers-7795d66b4-bkptg", "timestamp":"2025-09-12 17:40:30.553748533 +0000 UTC"}, Hostname:"ci-4081.3.6-9-2d91ca838a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.576 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.576 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
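The recurring kubelet dns.go:153 warning reflects the classic glibc resolver cap of three nameservers: when the node's resolv.conf lists more, the kubelet truncates the list and logs what survives. The applied line here still carries 67.207.67.3 twice because excess entries are dropped positionally, not deduplicated. A hedged sketch of that truncation follows; it is illustrative, not the kubelet's actual code, and the resolv.conf contents in it are assumed.

```go
// Hedged sketch of the check behind the recurring dns.go:153 warning:
// cap the nameserver list at 3 (the glibc resolver limit the kubelet
// enforces) and report what was kept and what was omitted.
package main

import "fmt"

const maxNameservers = 3 // glibc resolver limit

func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	// Hypothetical resolv.conf content; the duplicate entry is why the
	// "applied nameserver line" above repeats 67.207.67.3.
	servers := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "67.207.67.1"}
	applied, omitted := applyNameserverLimit(servers)
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```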
Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.576 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-2d91ca838a' Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.606 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.631 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.660 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.668 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.683 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.683 [INFO][4681] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.693 [INFO][4681] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22 Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.705 [INFO][4681] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.727 [INFO][4681] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.70/26] block=192.168.82.64/26 handle="k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.727 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.70/26] handle="k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.728 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:30.866660 containerd[1597]: 2025-09-12 17:40:30.728 [INFO][4681] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.70/26] IPv6=[] ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" HandleID="k8s-pod-network.b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:30.867777 containerd[1597]: 2025-09-12 17:40:30.740 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Namespace="calico-system" Pod="calico-kube-controllers-7795d66b4-bkptg" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0", GenerateName:"calico-kube-controllers-7795d66b4-", Namespace:"calico-system", SelfLink:"", UID:"4b1dbfd9-0d2b-4616-9684-f70423a56727", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7795d66b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"", Pod:"calico-kube-controllers-7795d66b4-bkptg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae9de97ccbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.867777 containerd[1597]: 2025-09-12 17:40:30.740 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.70/32] ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Namespace="calico-system" Pod="calico-kube-controllers-7795d66b4-bkptg" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:30.867777 containerd[1597]: 2025-09-12 17:40:30.740 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae9de97ccbd ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Namespace="calico-system" Pod="calico-kube-controllers-7795d66b4-bkptg" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:30.867777 containerd[1597]: 2025-09-12 17:40:30.806 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Namespace="calico-system" Pod="calico-kube-controllers-7795d66b4-bkptg" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 
12 17:40:30.867777 containerd[1597]: 2025-09-12 17:40:30.821 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Namespace="calico-system" Pod="calico-kube-controllers-7795d66b4-bkptg" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0", GenerateName:"calico-kube-controllers-7795d66b4-", Namespace:"calico-system", SelfLink:"", UID:"4b1dbfd9-0d2b-4616-9684-f70423a56727", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7795d66b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22", Pod:"calico-kube-controllers-7795d66b4-bkptg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae9de97ccbd", MAC:"02:13:77:ba:af:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.867777 containerd[1597]: 2025-09-12 17:40:30.848 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22" Namespace="calico-system" Pod="calico-kube-controllers-7795d66b4-bkptg" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.550 [INFO][4715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.554 [INFO][4715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" iface="eth0" netns="/var/run/netns/cni-6d709976-4a69-edd3-9d2a-f62d32efa981" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.555 [INFO][4715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" iface="eth0" netns="/var/run/netns/cni-6d709976-4a69-edd3-9d2a-f62d32efa981" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.564 [INFO][4715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" iface="eth0" netns="/var/run/netns/cni-6d709976-4a69-edd3-9d2a-f62d32efa981" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.565 [INFO][4715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.565 [INFO][4715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.752 [INFO][4751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.752 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.753 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.830 [WARNING][4751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.830 [INFO][4751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.839 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.880913 containerd[1597]: 2025-09-12 17:40:30.857 [INFO][4715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:30.884979 containerd[1597]: time="2025-09-12T17:40:30.882470940Z" level=info msg="TearDown network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\" successfully" Sep 12 17:40:30.884979 containerd[1597]: time="2025-09-12T17:40:30.882527986Z" level=info msg="StopPodSandbox for \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\" returns successfully" Sep 12 17:40:30.885201 kubelet[2717]: E0912 17:40:30.883048 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:30.889544 containerd[1597]: time="2025-09-12T17:40:30.888412695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mpsh9,Uid:d82ad19b-3db7-4b09-ad0d-31652f615ba5,Namespace:kube-system,Attempt:1,}" Sep 12 17:40:30.891398 systemd[1]: run-netns-cni\x2d6d709976\x2d4a69\x2dedd3\x2d9d2a\x2df62d32efa981.mount: Deactivated successfully. Sep 12 17:40:31.001962 containerd[1597]: time="2025-09-12T17:40:30.993540740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:31.001962 containerd[1597]: time="2025-09-12T17:40:30.993634226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:31.001962 containerd[1597]: time="2025-09-12T17:40:30.993653482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:31.001962 containerd[1597]: time="2025-09-12T17:40:30.993778120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:30.729 [INFO][4718] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:30.731 [INFO][4718] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" iface="eth0" netns="/var/run/netns/cni-7988c668-8abb-a977-bda7-3e8e16d9c32f" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:30.731 [INFO][4718] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" iface="eth0" netns="/var/run/netns/cni-7988c668-8abb-a977-bda7-3e8e16d9c32f" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:30.733 [INFO][4718] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" iface="eth0" netns="/var/run/netns/cni-7988c668-8abb-a977-bda7-3e8e16d9c32f" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:30.733 [INFO][4718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:30.733 [INFO][4718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:31.014 [INFO][4778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:31.014 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:31.014 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:31.030 [WARNING][4778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:31.031 [INFO][4778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:31.041 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:31.080206 containerd[1597]: 2025-09-12 17:40:31.060 [INFO][4718] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:31.084612 containerd[1597]: time="2025-09-12T17:40:31.082330929Z" level=info msg="TearDown network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\" successfully" Sep 12 17:40:31.084612 containerd[1597]: time="2025-09-12T17:40:31.082495522Z" level=info msg="StopPodSandbox for \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\" returns successfully" Sep 12 17:40:31.086238 containerd[1597]: time="2025-09-12T17:40:31.085195619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8q4cb,Uid:d624f012-d50d-4bd5-9261-9b5b725646ef,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:31.325366 systemd[1]: Started sshd@9-159.223.204.96:22-147.75.109.163:42242.service - OpenSSH per-connection server daemon (147.75.109.163:42242). Sep 12 17:40:31.547709 systemd[1]: run-netns-cni\x2d7988c668\x2d8abb\x2da977\x2dbda7\x2d3e8e16d9c32f.mount: Deactivated successfully. Sep 12 17:40:31.617050 systemd-networkd[1220]: cali7d5b43b196c: Link UP Sep 12 17:40:31.617700 systemd-networkd[1220]: cali7d5b43b196c: Gained carrier Sep 12 17:40:31.670115 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:31.667163 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:31.667216 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:31.672124 sshd[4879]: Accepted publickey for core from 147.75.109.163 port 42242 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:31.677999 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:31.717373 systemd-logind[1576]: New session 10 of user core. Sep 12 17:40:31.720211 systemd[1]: Started session-10.scope - Session 10 of User core. 
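The two sandbox teardowns above are deliberately idempotent: the workload's veth is already gone ("Nothing to do"), the release by handle ID hits "Asked to release address but it doesn't exist. Ignoring", and the plugin falls back to releasing by workload ID before declaring teardown complete and unmounting the netns. A small Go sketch of that release-and-ignore pattern, with hypothetical names throughout:

```go
// Sketch of the idempotent-release pattern visible in the teardown logs:
// release by handle ID first, treat "not found" as success, then fall
// back to the workload ID, so a repeated CNI DEL stays safe.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("address not found")

type ipamStore map[string][]string // owner ID -> addresses

func (s ipamStore) release(owner string) ([]string, error) {
	addrs, ok := s[owner]
	if !ok {
		return nil, errNotFound
	}
	delete(s, owner)
	return addrs, nil
}

// releaseForTeardown tries the handle ID, then the workload ID, ignoring
// "doesn't exist" in both cases; only real errors abort the teardown.
func releaseForTeardown(s ipamStore, handleID, workloadID string) error {
	for _, owner := range []string{handleID, workloadID} {
		if _, err := s.release(owner); err != nil && !errors.Is(err, errNotFound) {
			return err
		}
	}
	return nil
}

func main() {
	s := ipamStore{}
	// Both lookups miss, as in the WARNING above; teardown still succeeds.
	fmt.Println(releaseForTeardown(s, "k8s-pod-network.example", "example-workload"))
}
```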
Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.098 [INFO][4819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0 coredns-7c65d6cfc9- kube-system d82ad19b-3db7-4b09-ad0d-31652f615ba5 976 0 2025-09-12 17:39:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-9-2d91ca838a coredns-7c65d6cfc9-mpsh9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d5b43b196c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mpsh9" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.099 [INFO][4819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mpsh9" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.319 [INFO][4864] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" HandleID="k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.323 [INFO][4864] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" HandleID="k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001234f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-9-2d91ca838a", "pod":"coredns-7c65d6cfc9-mpsh9", "timestamp":"2025-09-12 17:40:31.319901351 +0000 UTC"}, Hostname:"ci-4081.3.6-9-2d91ca838a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.326 [INFO][4864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.326 [INFO][4864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.326 [INFO][4864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-2d91ca838a' Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.402 [INFO][4864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.461 [INFO][4864] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.479 [INFO][4864] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.485 [INFO][4864] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.498 [INFO][4864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.498 [INFO][4864] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.504 [INFO][4864] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.525 [INFO][4864] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.553 [INFO][4864] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.71/26] block=192.168.82.64/26 handle="k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.553 [INFO][4864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.71/26] handle="k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.554 [INFO][4864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:31.734338 containerd[1597]: 2025-09-12 17:40:31.554 [INFO][4864] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.71/26] IPv6=[] ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" HandleID="k8s-pod-network.9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:31.735632 containerd[1597]: 2025-09-12 17:40:31.579 [INFO][4819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mpsh9" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d82ad19b-3db7-4b09-ad0d-31652f615ba5", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"", Pod:"coredns-7c65d6cfc9-mpsh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d5b43b196c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:31.735632 containerd[1597]: 2025-09-12 17:40:31.579 [INFO][4819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.71/32] ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mpsh9" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:31.735632 containerd[1597]: 2025-09-12 17:40:31.579 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d5b43b196c ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mpsh9" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:31.735632 containerd[1597]: 2025-09-12 17:40:31.626 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-mpsh9" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:31.735632 containerd[1597]: 2025-09-12 17:40:31.636 [INFO][4819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mpsh9" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d82ad19b-3db7-4b09-ad0d-31652f615ba5", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c", Pod:"coredns-7c65d6cfc9-mpsh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d5b43b196c", MAC:"96:db:77:ec:32:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:31.735632 containerd[1597]: 2025-09-12 17:40:31.656 [INFO][4819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mpsh9" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:31.779929 containerd[1597]: time="2025-09-12T17:40:31.779747787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7795d66b4-bkptg,Uid:4b1dbfd9-0d2b-4616-9684-f70423a56727,Namespace:calico-system,Attempt:1,} returns sandbox id \"b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22\"" Sep 12 17:40:31.998404 containerd[1597]: time="2025-09-12T17:40:31.998324547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bd46c69b9-pfsld,Uid:6bb2647e-616b-4a7b-a0a3-710344efe361,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9\"" Sep 12 17:40:32.128982 systemd-networkd[1220]: cali140a05ca263: Link UP Sep 12 17:40:32.135918 systemd-networkd[1220]: cali140a05ca263: 
Gained carrier Sep 12 17:40:32.166743 systemd-networkd[1220]: vxlan.calico: Gained IPv6LL Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.448 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0 goldmane-7988f88666- calico-system d624f012-d50d-4bd5-9261-9b5b725646ef 982 0 2025-09-12 17:40:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-9-2d91ca838a goldmane-7988f88666-8q4cb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali140a05ca263 [] [] }} ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Namespace="calico-system" Pod="goldmane-7988f88666-8q4cb" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.451 [INFO][4858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Namespace="calico-system" Pod="goldmane-7988f88666-8q4cb" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.747 [INFO][4884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" HandleID="k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.752 [INFO][4884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" HandleID="k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-9-2d91ca838a", "pod":"goldmane-7988f88666-8q4cb", "timestamp":"2025-09-12 17:40:31.746611253 +0000 UTC"}, Hostname:"ci-4081.3.6-9-2d91ca838a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.752 [INFO][4884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.752 [INFO][4884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.752 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-9-2d91ca838a' Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.792 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.837 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.869 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.881 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.896 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.64/26 host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.897 [INFO][4884] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.82.64/26 handle="k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.902 [INFO][4884] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2 Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.928 [INFO][4884] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.82.64/26 handle="k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.952 [INFO][4884] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.82.72/26] block=192.168.82.64/26 handle="k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.968 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.72/26] handle="k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" host="ci-4081.3.6-9-2d91ca838a" Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.968 [INFO][4884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:32.204041 containerd[1597]: 2025-09-12 17:40:31.968 [INFO][4884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.82.72/26] IPv6=[] ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" HandleID="k8s-pod-network.a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:32.204745 containerd[1597]: 2025-09-12 17:40:32.021 [INFO][4858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Namespace="calico-system" Pod="goldmane-7988f88666-8q4cb" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d624f012-d50d-4bd5-9261-9b5b725646ef", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"", Pod:"goldmane-7988f88666-8q4cb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali140a05ca263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:32.204745 containerd[1597]: 2025-09-12 17:40:32.021 [INFO][4858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.72/32] ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Namespace="calico-system" Pod="goldmane-7988f88666-8q4cb" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:32.204745 containerd[1597]: 2025-09-12 17:40:32.021 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali140a05ca263 ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Namespace="calico-system" Pod="goldmane-7988f88666-8q4cb" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:32.204745 containerd[1597]: 2025-09-12 17:40:32.145 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Namespace="calico-system" Pod="goldmane-7988f88666-8q4cb" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:32.204745 containerd[1597]: 2025-09-12 17:40:32.146 [INFO][4858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" 
Namespace="calico-system" Pod="goldmane-7988f88666-8q4cb" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d624f012-d50d-4bd5-9261-9b5b725646ef", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2", Pod:"goldmane-7988f88666-8q4cb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali140a05ca263", MAC:"c2:78:bf:be:e6:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:32.204745 containerd[1597]: 2025-09-12 17:40:32.179 [INFO][4858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2" Namespace="calico-system" Pod="goldmane-7988f88666-8q4cb" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:32.231983 systemd-networkd[1220]: calibc2e088531d: Gained IPv6LL Sep 12 17:40:32.270087 containerd[1597]: time="2025-09-12T17:40:32.259297957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:32.270087 containerd[1597]: time="2025-09-12T17:40:32.259386027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:32.270087 containerd[1597]: time="2025-09-12T17:40:32.259409992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:32.270087 containerd[1597]: time="2025-09-12T17:40:32.259552125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:32.531124 systemd[1]: run-containerd-runc-k8s.io-9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c-runc.2BlgBK.mount: Deactivated successfully. Sep 12 17:40:32.565309 containerd[1597]: time="2025-09-12T17:40:32.563936107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:32.565309 containerd[1597]: time="2025-09-12T17:40:32.564623966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:32.566713 containerd[1597]: time="2025-09-12T17:40:32.564687968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:32.567723 containerd[1597]: time="2025-09-12T17:40:32.567407152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:32.786361 containerd[1597]: time="2025-09-12T17:40:32.785104574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mpsh9,Uid:d82ad19b-3db7-4b09-ad0d-31652f615ba5,Namespace:kube-system,Attempt:1,} returns sandbox id \"9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c\"" Sep 12 17:40:32.788929 kubelet[2717]: E0912 17:40:32.788888 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:32.799468 containerd[1597]: time="2025-09-12T17:40:32.799091857Z" level=info msg="CreateContainer within sandbox \"9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:40:32.806799 systemd-networkd[1220]: caliae9de97ccbd: Gained IPv6LL Sep 12 17:40:32.823656 sshd[4879]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:32.870473 containerd[1597]: time="2025-09-12T17:40:32.870195673Z" level=info msg="CreateContainer within sandbox \"9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4877b20d2d795648b1239dd3fefde7ca9229c351a711bd0d92d5b59a9df75079\"" Sep 12 17:40:32.879440 containerd[1597]: time="2025-09-12T17:40:32.876791232Z" level=info msg="StartContainer for \"4877b20d2d795648b1239dd3fefde7ca9229c351a711bd0d92d5b59a9df75079\"" Sep 12 17:40:32.884465 systemd[1]: sshd@9-159.223.204.96:22-147.75.109.163:42242.service: Deactivated successfully. Sep 12 17:40:32.895816 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:40:32.913563 systemd-logind[1576]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:40:32.921483 systemd-logind[1576]: Removed session 10. Sep 12 17:40:33.079433 containerd[1597]: time="2025-09-12T17:40:33.079075255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-8q4cb,Uid:d624f012-d50d-4bd5-9261-9b5b725646ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2\"" Sep 12 17:40:33.222386 containerd[1597]: time="2025-09-12T17:40:33.219409880Z" level=info msg="StartContainer for \"4877b20d2d795648b1239dd3fefde7ca9229c351a711bd0d92d5b59a9df75079\" returns successfully" Sep 12 17:40:33.381126 systemd-networkd[1220]: cali7d5b43b196c: Gained IPv6LL Sep 12 17:40:33.511226 systemd-networkd[1220]: cali140a05ca263: Gained IPv6LL Sep 12 17:40:33.703575 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:33.701706 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:33.701829 systemd-resolved[1483]: Flushed all caches. 
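The CreateContainer/StartContainer pairs in this stretch are the kubelet driving containerd through CRI, and each cluster of "loading plugin ... runtime=io.containerd.runc.v2" lines accompanies a runc shim coming up for a new task. As a rough analogue, here is a hedged sketch of the same create-then-start sequence written against containerd's public Go client rather than the CRI layer that actually emitted these entries; the socket path, namespace, container ID, and image reference are assumptions, and the pod-sandbox machinery is not modeled.

```go
// Hedged analogue of CreateContainer/StartContainer using the public
// containerd Go client (not the CRI implementation logging above).
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet's containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Analogous to the PullImage entries seen later in this log.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ... returns container id ..."
	container, err := client.NewContainer(ctx, "example-apiserver",
		containerd.WithNewSnapshot("example-apiserver-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// "StartContainer for ... returns successfully" — a task is created
	// (spawning the runc v2 shim) and then started.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```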
Sep 12 17:40:33.921906 kubelet[2717]: E0912 17:40:33.921711 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:33.988099 kubelet[2717]: I0912 17:40:33.985913 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mpsh9" podStartSLOduration=47.985878766 podStartE2EDuration="47.985878766s" podCreationTimestamp="2025-09-12 17:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:33.950648015 +0000 UTC m=+53.078543705" watchObservedRunningTime="2025-09-12 17:40:33.985878766 +0000 UTC m=+53.113774449" Sep 12 17:40:34.896251 containerd[1597]: time="2025-09-12T17:40:34.896180297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:34.898471 containerd[1597]: time="2025-09-12T17:40:34.898383851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:40:34.899203 containerd[1597]: time="2025-09-12T17:40:34.899076653Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:34.903663 containerd[1597]: time="2025-09-12T17:40:34.903534230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:34.905365 containerd[1597]: time="2025-09-12T17:40:34.905137082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.400107549s" Sep 12 17:40:34.905365 containerd[1597]: time="2025-09-12T17:40:34.905198787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:40:34.910398 containerd[1597]: time="2025-09-12T17:40:34.908914062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:40:34.911154 containerd[1597]: time="2025-09-12T17:40:34.910634186Z" level=info msg="CreateContainer within sandbox \"6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:34.935081 kubelet[2717]: E0912 17:40:34.935028 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:34.943949 containerd[1597]: time="2025-09-12T17:40:34.941299556Z" level=info msg="CreateContainer within sandbox \"6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ddc83fb79edf940f2401f8f58321cb76aef478264f1d2926a82336afe37fec2\"" Sep 12 17:40:34.946557 containerd[1597]: 
time="2025-09-12T17:40:34.946506288Z" level=info msg="StartContainer for \"2ddc83fb79edf940f2401f8f58321cb76aef478264f1d2926a82336afe37fec2\"" Sep 12 17:40:35.140925 containerd[1597]: time="2025-09-12T17:40:35.140229692Z" level=info msg="StartContainer for \"2ddc83fb79edf940f2401f8f58321cb76aef478264f1d2926a82336afe37fec2\" returns successfully" Sep 12 17:40:35.952162 kubelet[2717]: E0912 17:40:35.950985 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 12 17:40:36.806405 containerd[1597]: time="2025-09-12T17:40:36.806327583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:36.811136 containerd[1597]: time="2025-09-12T17:40:36.811024913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:40:36.813235 containerd[1597]: time="2025-09-12T17:40:36.813142850Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:36.825874 containerd[1597]: time="2025-09-12T17:40:36.823509906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:36.828253 containerd[1597]: time="2025-09-12T17:40:36.828178492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.919194811s" Sep 12 17:40:36.828520 containerd[1597]: time="2025-09-12T17:40:36.828490590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:40:36.833072 containerd[1597]: time="2025-09-12T17:40:36.832361008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:40:36.843304 containerd[1597]: time="2025-09-12T17:40:36.843146360Z" level=info msg="CreateContainer within sandbox \"921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:40:36.875840 containerd[1597]: time="2025-09-12T17:40:36.874486587Z" level=info msg="CreateContainer within sandbox \"921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2d4992464bded010397d4c076c1f56f4e4fa6fae942ab03d768ab263c44bd936\"" Sep 12 17:40:36.879780 containerd[1597]: time="2025-09-12T17:40:36.878206293Z" level=info msg="StartContainer for \"2d4992464bded010397d4c076c1f56f4e4fa6fae942ab03d768ab263c44bd936\"" Sep 12 17:40:36.961248 kubelet[2717]: I0912 17:40:36.961194 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:36.967931 containerd[1597]: time="2025-09-12T17:40:36.967835772Z" level=info msg="StartContainer for \"2d4992464bded010397d4c076c1f56f4e4fa6fae942ab03d768ab263c44bd936\" returns successfully" Sep 12 17:40:37.832759 systemd[1]: Started 
sshd@10-159.223.204.96:22-147.75.109.163:42250.service - OpenSSH per-connection server daemon (147.75.109.163:42250). Sep 12 17:40:37.977666 sshd[5199]: Accepted publickey for core from 147.75.109.163 port 42250 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:37.982670 sshd[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:38.000058 systemd-logind[1576]: New session 11 of user core. Sep 12 17:40:38.004722 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:40:38.515049 sshd[5199]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:38.521496 systemd[1]: sshd@10-159.223.204.96:22-147.75.109.163:42250.service: Deactivated successfully. Sep 12 17:40:38.528443 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:40:38.529474 systemd-logind[1576]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:40:38.532063 systemd-logind[1576]: Removed session 11. Sep 12 17:40:39.655691 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:39.655476 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:39.655548 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:40.598513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374782134.mount: Deactivated successfully. Sep 12 17:40:40.635606 containerd[1597]: time="2025-09-12T17:40:40.635525187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:40.636944 containerd[1597]: time="2025-09-12T17:40:40.636875818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:40:40.659344 containerd[1597]: time="2025-09-12T17:40:40.659212516Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:40.663081 containerd[1597]: time="2025-09-12T17:40:40.662778441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:40.664300 containerd[1597]: time="2025-09-12T17:40:40.664242335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.831157885s" Sep 12 17:40:40.664615 containerd[1597]: time="2025-09-12T17:40:40.664479083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:40:40.729026 containerd[1597]: time="2025-09-12T17:40:40.728115734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:40:40.730767 containerd[1597]: time="2025-09-12T17:40:40.730707454Z" level=info msg="CreateContainer within sandbox \"f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:40:40.764112 containerd[1597]: 
time="2025-09-12T17:40:40.762655691Z" level=info msg="CreateContainer within sandbox \"f11d8f48205d6b26ea016e4c7567d86f1f41a608c0d87e51748529ea04c9e41b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9e818145fe2232032ea6b2521d781206d0204f17824c7c0c4968770c79e03a0d\"" Sep 12 17:40:40.767046 containerd[1597]: time="2025-09-12T17:40:40.766961481Z" level=info msg="StartContainer for \"9e818145fe2232032ea6b2521d781206d0204f17824c7c0c4968770c79e03a0d\"" Sep 12 17:40:40.953943 containerd[1597]: time="2025-09-12T17:40:40.953758855Z" level=info msg="StartContainer for \"9e818145fe2232032ea6b2521d781206d0204f17824c7c0c4968770c79e03a0d\" returns successfully" Sep 12 17:40:41.041430 kubelet[2717]: I0912 17:40:41.041341 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bd46c69b9-zwwnf" podStartSLOduration=38.973746002 podStartE2EDuration="45.041310614s" podCreationTimestamp="2025-09-12 17:39:56 +0000 UTC" firstStartedPulling="2025-09-12 17:40:28.839422546 +0000 UTC m=+47.967318221" lastFinishedPulling="2025-09-12 17:40:34.906987168 +0000 UTC m=+54.034882833" observedRunningTime="2025-09-12 17:40:35.972169423 +0000 UTC m=+55.100065105" watchObservedRunningTime="2025-09-12 17:40:41.041310614 +0000 UTC m=+60.169206299" Sep 12 17:40:41.237202 containerd[1597]: time="2025-09-12T17:40:41.235168136Z" level=info msg="StopPodSandbox for \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\"" Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.405 [WARNING][5271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d624f012-d50d-4bd5-9261-9b5b725646ef", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2", Pod:"goldmane-7988f88666-8q4cb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali140a05ca263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.408 [INFO][5271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.408 [INFO][5271] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" iface="eth0" netns="" Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.408 [INFO][5271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.408 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.464 [INFO][5279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.466 [INFO][5279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.466 [INFO][5279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.479 [WARNING][5279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.479 [INFO][5279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.482 [INFO][5279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:41.491914 containerd[1597]: 2025-09-12 17:40:41.487 [INFO][5271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:41.492710 containerd[1597]: time="2025-09-12T17:40:41.491992597Z" level=info msg="TearDown network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\" successfully" Sep 12 17:40:41.492710 containerd[1597]: time="2025-09-12T17:40:41.492035566Z" level=info msg="StopPodSandbox for \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\" returns successfully" Sep 12 17:40:41.541472 containerd[1597]: time="2025-09-12T17:40:41.541178621Z" level=info msg="RemovePodSandbox for \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\"" Sep 12 17:40:41.545197 containerd[1597]: time="2025-09-12T17:40:41.544926957Z" level=info msg="Forcibly stopping sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\"" Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.624 [WARNING][5293] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d624f012-d50d-4bd5-9261-9b5b725646ef", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2", Pod:"goldmane-7988f88666-8q4cb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali140a05ca263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.624 [INFO][5293] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.624 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" iface="eth0" netns="" Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.624 [INFO][5293] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.624 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.659 [INFO][5300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.659 [INFO][5300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.659 [INFO][5300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.670 [WARNING][5300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.670 [INFO][5300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" HandleID="k8s-pod-network.4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Workload="ci--4081.3.6--9--2d91ca838a-k8s-goldmane--7988f88666--8q4cb-eth0" Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.672 [INFO][5300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:41.677816 containerd[1597]: 2025-09-12 17:40:41.675 [INFO][5293] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3" Sep 12 17:40:41.678665 containerd[1597]: time="2025-09-12T17:40:41.677892204Z" level=info msg="TearDown network for sandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\" successfully" Sep 12 17:40:41.703704 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:41.701017 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:41.701049 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:41.716788 containerd[1597]: time="2025-09-12T17:40:41.716709030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:41.736015 containerd[1597]: time="2025-09-12T17:40:41.735955651Z" level=info msg="RemovePodSandbox \"4db7a283805825af1edc625d6dca6aef8df9173cabdd76bb6ef60f65ed3961b3\" returns successfully" Sep 12 17:40:41.745954 containerd[1597]: time="2025-09-12T17:40:41.745751523Z" level=info msg="StopPodSandbox for \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\"" Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.800 [WARNING][5314] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0", GenerateName:"calico-apiserver-5bd46c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bb2647e-616b-4a7b-a0a3-710344efe361", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd46c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9", Pod:"calico-apiserver-5bd46c69b9-pfsld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc2e088531d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.801 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.801 [INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" iface="eth0" netns="" Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.801 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.801 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.830 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.831 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.831 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.840 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.840 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.842 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:41.848178 containerd[1597]: 2025-09-12 17:40:41.845 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:41.850237 containerd[1597]: time="2025-09-12T17:40:41.848259394Z" level=info msg="TearDown network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\" successfully" Sep 12 17:40:41.850237 containerd[1597]: time="2025-09-12T17:40:41.848315378Z" level=info msg="StopPodSandbox for \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\" returns successfully" Sep 12 17:40:41.850237 containerd[1597]: time="2025-09-12T17:40:41.849018749Z" level=info msg="RemovePodSandbox for \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\"" Sep 12 17:40:41.850237 containerd[1597]: time="2025-09-12T17:40:41.849063952Z" level=info msg="Forcibly stopping sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\"" Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.913 [WARNING][5335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0", GenerateName:"calico-apiserver-5bd46c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bb2647e-616b-4a7b-a0a3-710344efe361", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd46c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9", Pod:"calico-apiserver-5bd46c69b9-pfsld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc2e088531d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.914 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.914 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" iface="eth0" netns="" Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.914 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.914 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.962 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.962 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.962 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.972 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.972 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" HandleID="k8s-pod-network.71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--pfsld-eth0" Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.975 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:41.982238 containerd[1597]: 2025-09-12 17:40:41.978 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567" Sep 12 17:40:41.982238 containerd[1597]: time="2025-09-12T17:40:41.982123935Z" level=info msg="TearDown network for sandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\" successfully" Sep 12 17:40:41.986978 containerd[1597]: time="2025-09-12T17:40:41.986892722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:41.987179 containerd[1597]: time="2025-09-12T17:40:41.987008636Z" level=info msg="RemovePodSandbox \"71c04f5bb0c8e75f3442ad1b50f1fa1801532e92a5323d2b2474291c84d15567\" returns successfully" Sep 12 17:40:41.987776 containerd[1597]: time="2025-09-12T17:40:41.987726281Z" level=info msg="StopPodSandbox for \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\"" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.053 [WARNING][5356] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.054 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.054 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" iface="eth0" netns="" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.054 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.054 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.088 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.088 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.088 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.097 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.097 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.099 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:42.104782 containerd[1597]: 2025-09-12 17:40:42.102 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:42.104782 containerd[1597]: time="2025-09-12T17:40:42.104517611Z" level=info msg="TearDown network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\" successfully" Sep 12 17:40:42.104782 containerd[1597]: time="2025-09-12T17:40:42.104555492Z" level=info msg="StopPodSandbox for \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\" returns successfully" Sep 12 17:40:42.107623 containerd[1597]: time="2025-09-12T17:40:42.106123108Z" level=info msg="RemovePodSandbox for \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\"" Sep 12 17:40:42.107623 containerd[1597]: time="2025-09-12T17:40:42.106163936Z" level=info msg="Forcibly stopping sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\"" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.159 [WARNING][5377] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" WorkloadEndpoint="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.159 [INFO][5377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.160 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" iface="eth0" netns="" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.160 [INFO][5377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.160 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.214 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.214 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.214 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.227 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.227 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" HandleID="k8s-pod-network.4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Workload="ci--4081.3.6--9--2d91ca838a-k8s-whisker--674688f4df--xgdkq-eth0" Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.233 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:42.249434 containerd[1597]: 2025-09-12 17:40:42.241 [INFO][5377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309" Sep 12 17:40:42.250834 containerd[1597]: time="2025-09-12T17:40:42.249966468Z" level=info msg="TearDown network for sandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\" successfully" Sep 12 17:40:42.255166 containerd[1597]: time="2025-09-12T17:40:42.254610356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:42.255166 containerd[1597]: time="2025-09-12T17:40:42.254697029Z" level=info msg="RemovePodSandbox \"4899e9be2429fda5422359592be3d64524feddce1772578dccd30d5c95b42309\" returns successfully" Sep 12 17:40:42.255889 containerd[1597]: time="2025-09-12T17:40:42.255572595Z" level=info msg="StopPodSandbox for \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\"" Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.306 [WARNING][5399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95231497-8828-48f6-9eda-7b0dd9295eb8", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25", Pod:"csi-node-driver-n244d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2091b9cac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.307 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.307 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" iface="eth0" netns="" Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.307 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.307 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.344 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.344 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.344 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.357 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.357 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.360 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:42.371049 containerd[1597]: 2025-09-12 17:40:42.367 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:42.371049 containerd[1597]: time="2025-09-12T17:40:42.370930690Z" level=info msg="TearDown network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\" successfully" Sep 12 17:40:42.371049 containerd[1597]: time="2025-09-12T17:40:42.370967458Z" level=info msg="StopPodSandbox for \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\" returns successfully" Sep 12 17:40:42.372200 containerd[1597]: time="2025-09-12T17:40:42.372159810Z" level=info msg="RemovePodSandbox for \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\"" Sep 12 17:40:42.372304 containerd[1597]: time="2025-09-12T17:40:42.372211791Z" level=info msg="Forcibly stopping sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\"" Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.435 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"95231497-8828-48f6-9eda-7b0dd9295eb8", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25", Pod:"csi-node-driver-n244d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2091b9cac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.436 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.436 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" iface="eth0" netns="" Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.436 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.436 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.472 [INFO][5448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.475 [INFO][5448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.475 [INFO][5448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.486 [WARNING][5448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.486 [INFO][5448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" HandleID="k8s-pod-network.6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Workload="ci--4081.3.6--9--2d91ca838a-k8s-csi--node--driver--n244d-eth0" Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.489 [INFO][5448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:42.513881 containerd[1597]: 2025-09-12 17:40:42.508 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41" Sep 12 17:40:42.514437 containerd[1597]: time="2025-09-12T17:40:42.513890167Z" level=info msg="TearDown network for sandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\" successfully" Sep 12 17:40:42.536902 containerd[1597]: time="2025-09-12T17:40:42.536014437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:42.536902 containerd[1597]: time="2025-09-12T17:40:42.536327842Z" level=info msg="RemovePodSandbox \"6c98151e4c3d8a3356ec6a94d86d9b28e6c11fe7a5699e84df0ad28042e02d41\" returns successfully" Sep 12 17:40:42.538293 containerd[1597]: time="2025-09-12T17:40:42.538253627Z" level=info msg="StopPodSandbox for \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\"" Sep 12 17:40:42.613137 kubelet[2717]: I0912 17:40:42.612941 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59c457cc56-7stv7" podStartSLOduration=4.061640507 podStartE2EDuration="17.612912547s" podCreationTimestamp="2025-09-12 17:40:25 +0000 UTC" firstStartedPulling="2025-09-12 17:40:27.176417103 +0000 UTC m=+46.304312761" lastFinishedPulling="2025-09-12 17:40:40.727689129 +0000 UTC m=+59.855584801" observedRunningTime="2025-09-12 17:40:41.04504264 +0000 UTC m=+60.172938331" watchObservedRunningTime="2025-09-12 17:40:42.612912547 +0000 UTC m=+61.740808227" Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.667 [WARNING][5470] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0", GenerateName:"calico-apiserver-5bd46c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a573f69-5286-4c75-b8cd-d7019d8e8a47", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd46c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9", Pod:"calico-apiserver-5bd46c69b9-zwwnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35c3657e946", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.668 [INFO][5470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.668 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" iface="eth0" netns="" Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.668 [INFO][5470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.668 [INFO][5470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.700 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.701 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.701 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.709 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.709 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.712 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:42.717993 containerd[1597]: 2025-09-12 17:40:42.715 [INFO][5470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:42.717993 containerd[1597]: time="2025-09-12T17:40:42.717555078Z" level=info msg="TearDown network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\" successfully" Sep 12 17:40:42.717993 containerd[1597]: time="2025-09-12T17:40:42.717734994Z" level=info msg="StopPodSandbox for \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\" returns successfully" Sep 12 17:40:42.720832 containerd[1597]: time="2025-09-12T17:40:42.719805785Z" level=info msg="RemovePodSandbox for \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\"" Sep 12 17:40:42.720832 containerd[1597]: time="2025-09-12T17:40:42.719990756Z" level=info msg="Forcibly stopping sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\"" Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.780 [WARNING][5491] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0", GenerateName:"calico-apiserver-5bd46c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a573f69-5286-4c75-b8cd-d7019d8e8a47", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bd46c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"6472acb0e46ef04bb73e52856330d9b6301027963c7525fd99d5e49d2d6759f9", Pod:"calico-apiserver-5bd46c69b9-zwwnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35c3657e946", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.782 [INFO][5491] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.782 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" iface="eth0" netns="" Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.782 [INFO][5491] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.782 [INFO][5491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.822 [INFO][5500] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.822 [INFO][5500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.822 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.831 [WARNING][5500] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.831 [INFO][5500] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" HandleID="k8s-pod-network.cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--apiserver--5bd46c69b9--zwwnf-eth0" Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.834 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:42.839546 containerd[1597]: 2025-09-12 17:40:42.836 [INFO][5491] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593" Sep 12 17:40:42.839546 containerd[1597]: time="2025-09-12T17:40:42.839087091Z" level=info msg="TearDown network for sandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\" successfully" Sep 12 17:40:42.844188 containerd[1597]: time="2025-09-12T17:40:42.844121670Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:42.844317 containerd[1597]: time="2025-09-12T17:40:42.844237048Z" level=info msg="RemovePodSandbox \"cc835ff82da470354c6c77c4f97b92957c8b829ea8ad89cedeffde9c94661593\" returns successfully" Sep 12 17:40:42.845399 containerd[1597]: time="2025-09-12T17:40:42.845035730Z" level=info msg="StopPodSandbox for \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\"" Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.934 [WARNING][5515] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0", GenerateName:"calico-kube-controllers-7795d66b4-", Namespace:"calico-system", SelfLink:"", UID:"4b1dbfd9-0d2b-4616-9684-f70423a56727", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7795d66b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22", Pod:"calico-kube-controllers-7795d66b4-bkptg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae9de97ccbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.934 [INFO][5515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.934 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" iface="eth0" netns="" Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.934 [INFO][5515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.934 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.968 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.968 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.968 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.982 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.982 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.987 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:42.994237 containerd[1597]: 2025-09-12 17:40:42.989 [INFO][5515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:42.994801 containerd[1597]: time="2025-09-12T17:40:42.994314052Z" level=info msg="TearDown network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\" successfully" Sep 12 17:40:42.994801 containerd[1597]: time="2025-09-12T17:40:42.994354680Z" level=info msg="StopPodSandbox for \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\" returns successfully" Sep 12 17:40:42.997486 containerd[1597]: time="2025-09-12T17:40:42.997151142Z" level=info msg="RemovePodSandbox for \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\"" Sep 12 17:40:42.997486 containerd[1597]: time="2025-09-12T17:40:42.997196018Z" level=info msg="Forcibly stopping sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\"" Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.065 [WARNING][5537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0", GenerateName:"calico-kube-controllers-7795d66b4-", Namespace:"calico-system", SelfLink:"", UID:"4b1dbfd9-0d2b-4616-9684-f70423a56727", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7795d66b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22", Pod:"calico-kube-controllers-7795d66b4-bkptg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae9de97ccbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.066 [INFO][5537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.066 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" iface="eth0" netns="" Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.066 [INFO][5537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.066 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.100 [INFO][5544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.100 [INFO][5544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.100 [INFO][5544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.110 [WARNING][5544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.110 [INFO][5544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" HandleID="k8s-pod-network.3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Workload="ci--4081.3.6--9--2d91ca838a-k8s-calico--kube--controllers--7795d66b4--bkptg-eth0" Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.114 [INFO][5544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:43.119463 containerd[1597]: 2025-09-12 17:40:43.116 [INFO][5537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978" Sep 12 17:40:43.121208 containerd[1597]: time="2025-09-12T17:40:43.119963791Z" level=info msg="TearDown network for sandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\" successfully" Sep 12 17:40:43.124718 containerd[1597]: time="2025-09-12T17:40:43.124669719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:43.124957 containerd[1597]: time="2025-09-12T17:40:43.124937572Z" level=info msg="RemovePodSandbox \"3bcec1599a5058b55798692a7e79a3bb8839d8c28f79bac118d671f180f54978\" returns successfully" Sep 12 17:40:43.125552 containerd[1597]: time="2025-09-12T17:40:43.125529094Z" level=info msg="StopPodSandbox for \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\"" Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.200 [WARNING][5558] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"beb36fc2-c828-43e2-90d6-9cffbe7e8f94", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c", Pod:"coredns-7c65d6cfc9-vsvr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26ec317096b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.200 [INFO][5558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.200 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" iface="eth0" netns="" Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.200 [INFO][5558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.200 [INFO][5558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.251 [INFO][5565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.251 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.251 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.260 [WARNING][5565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.260 [INFO][5565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.264 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:43.272711 containerd[1597]: 2025-09-12 17:40:43.268 [INFO][5558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:43.274587 containerd[1597]: time="2025-09-12T17:40:43.274513147Z" level=info msg="TearDown network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\" successfully" Sep 12 17:40:43.274835 containerd[1597]: time="2025-09-12T17:40:43.274732402Z" level=info msg="StopPodSandbox for \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\" returns successfully" Sep 12 17:40:43.275927 containerd[1597]: time="2025-09-12T17:40:43.275656406Z" level=info msg="RemovePodSandbox for \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\"" Sep 12 17:40:43.275927 containerd[1597]: time="2025-09-12T17:40:43.275714970Z" level=info msg="Forcibly stopping sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\"" Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.340 [WARNING][5579] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"beb36fc2-c828-43e2-90d6-9cffbe7e8f94", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"53cc2c3a15fc55cd7ae25516b20960b38b3945d6068ba0075528ae2f5256c70c", Pod:"coredns-7c65d6cfc9-vsvr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26ec317096b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.340 [INFO][5579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.340 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" iface="eth0" netns="" Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.340 [INFO][5579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.340 [INFO][5579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.397 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.397 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.401 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.417 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.417 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" HandleID="k8s-pod-network.bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--vsvr8-eth0" Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.420 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:43.425711 containerd[1597]: 2025-09-12 17:40:43.422 [INFO][5579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102" Sep 12 17:40:43.427096 containerd[1597]: time="2025-09-12T17:40:43.425940559Z" level=info msg="TearDown network for sandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\" successfully" Sep 12 17:40:43.431888 containerd[1597]: time="2025-09-12T17:40:43.431092674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:43.431888 containerd[1597]: time="2025-09-12T17:40:43.431232498Z" level=info msg="RemovePodSandbox \"bc41f1df5fd040703fdc83e56a804503549ec8a543d43f16cdf2b3f090a79102\" returns successfully" Sep 12 17:40:43.431888 containerd[1597]: time="2025-09-12T17:40:43.431869607Z" level=info msg="StopPodSandbox for \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\"" Sep 12 17:40:43.531152 systemd[1]: Started sshd@11-159.223.204.96:22-147.75.109.163:41746.service - OpenSSH per-connection server daemon (147.75.109.163:41746). Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.489 [WARNING][5601] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d82ad19b-3db7-4b09-ad0d-31652f615ba5", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c", Pod:"coredns-7c65d6cfc9-mpsh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d5b43b196c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.489 [INFO][5601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.489 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" iface="eth0" netns="" Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.489 [INFO][5601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.489 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.535 [INFO][5609] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.535 [INFO][5609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.535 [INFO][5609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.552 [WARNING][5609] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.552 [INFO][5609] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.558 [INFO][5609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:43.573189 containerd[1597]: 2025-09-12 17:40:43.563 [INFO][5601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:43.573189 containerd[1597]: time="2025-09-12T17:40:43.572761620Z" level=info msg="TearDown network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\" successfully" Sep 12 17:40:43.573189 containerd[1597]: time="2025-09-12T17:40:43.572798110Z" level=info msg="StopPodSandbox for \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\" returns successfully" Sep 12 17:40:43.574504 containerd[1597]: time="2025-09-12T17:40:43.573918090Z" level=info msg="RemovePodSandbox for \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\"" Sep 12 17:40:43.574504 containerd[1597]: time="2025-09-12T17:40:43.573951700Z" level=info msg="Forcibly stopping sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\"" Sep 12 17:40:43.678626 sshd[5613]: Accepted publickey for core from 147.75.109.163 port 41746 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:43.683448 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:43.693246 systemd-logind[1576]: New session 12 of user core. Sep 12 17:40:43.699000 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.674 [WARNING][5625] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d82ad19b-3db7-4b09-ad0d-31652f615ba5", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-9-2d91ca838a", ContainerID:"9ae1dcfd03be4d6a320d45b48b712c146a0e186cf9668242a0010d7b7c7ce55c", Pod:"coredns-7c65d6cfc9-mpsh9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d5b43b196c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.675 [INFO][5625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.675 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" iface="eth0" netns="" Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.675 [INFO][5625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.675 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.738 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.738 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.739 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.748 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.748 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" HandleID="k8s-pod-network.0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Workload="ci--4081.3.6--9--2d91ca838a-k8s-coredns--7c65d6cfc9--mpsh9-eth0" Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.751 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:43.759233 containerd[1597]: 2025-09-12 17:40:43.756 [INFO][5625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be" Sep 12 17:40:43.761617 containerd[1597]: time="2025-09-12T17:40:43.760222736Z" level=info msg="TearDown network for sandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\" successfully" Sep 12 17:40:43.811172 containerd[1597]: time="2025-09-12T17:40:43.810542892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:43.811172 containerd[1597]: time="2025-09-12T17:40:43.810628391Z" level=info msg="RemovePodSandbox \"0358770db164a6f207ade96632efc3edf21ef76a1384c4bc161871786df165be\" returns successfully" Sep 12 17:40:44.668155 sshd[5613]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:44.682396 systemd[1]: Started sshd@12-159.223.204.96:22-147.75.109.163:41748.service - OpenSSH per-connection server daemon (147.75.109.163:41748). Sep 12 17:40:44.687182 systemd[1]: sshd@11-159.223.204.96:22-147.75.109.163:41746.service: Deactivated successfully. Sep 12 17:40:44.704261 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:40:44.707038 systemd-logind[1576]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:40:44.710263 systemd-logind[1576]: Removed session 12. Sep 12 17:40:44.769171 sshd[5653]: Accepted publickey for core from 147.75.109.163 port 41748 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:44.772517 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:44.783910 systemd-logind[1576]: New session 13 of user core. Sep 12 17:40:44.791146 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:40:45.013284 kubelet[2717]: I0912 17:40:45.013065 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:45.463921 sshd[5653]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:45.471437 systemd[1]: Started sshd@13-159.223.204.96:22-147.75.109.163:41758.service - OpenSSH per-connection server daemon (147.75.109.163:41758). Sep 12 17:40:45.485652 systemd[1]: sshd@12-159.223.204.96:22-147.75.109.163:41748.service: Deactivated successfully. Sep 12 17:40:45.510800 systemd[1]: session-13.scope: Deactivated successfully. 
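[Annotation] Every StopPodSandbox / forcible RemovePodSandbox pair above follows the same Calico CNI teardown pattern: the plugin refuses to delete the WorkloadEndpoint because CNI_CONTAINERID no longer matches the WEP's recorded ContainerID, finds no netns to clean up, then releases the pod's IP under a host-wide IPAM lock, first by handle ID and falling back to the workload ID when the handle is already gone. The "Asked to release address but it doesn't exist. Ignoring" warnings are the expected second pass: RemovePodSandbox re-runs a release that StopPodSandbox already performed, so the operation is idempotent. Incidentally, the hex ports in the coredns WorkloadEndpoint dumps decode to the usual coredns ports: 0x35 is 53 (dns/dns-tcp) and 0x23c1 is 9153 (metrics). A minimal Go sketch of the lock-then-release-with-fallback flow (illustrative only; the type and function names are hypothetical, not Calico's actual ipam_plugin.go code):

    package main

    import (
    	"fmt"
    	"sync"
    )

    // ipamStore stands in for the node's IPAM state; the mutex plays the
    // role of the "host-wide IPAM lock" seen in the log lines above.
    type ipamStore struct {
    	mu       sync.Mutex
    	byHandle map[string][]string // handleID -> allocated addresses
    }

    func (s *ipamStore) releaseForTeardown(handleID, workloadID string) {
    	s.mu.Lock()         // "Acquired host-wide IPAM lock."
    	defer s.mu.Unlock() // "Released host-wide IPAM lock."

    	if addrs, ok := s.byHandle[handleID]; ok {
    		// "Releasing address using handleID"
    		fmt.Printf("released %v for handle %s\n", addrs, handleID)
    		delete(s.byHandle, handleID)
    		return
    	}
    	// "Asked to release address but it doesn't exist. Ignoring" --
    	// then "Releasing address using workloadID" as the fallback.
    	fmt.Printf("handle %s not found, ignoring; trying workloadID\n", handleID)
    	delete(s.byHandle, workloadID)
    }

    func main() {
    	s := &ipamStore{byHandle: map[string][]string{
    		"k8s-pod-network.cc835ff8": {"192.168.82.66/32"},
    	}}
    	s.releaseForTeardown("k8s-pod-network.cc835ff8", "wep-apiserver") // releases the IP
    	s.releaseForTeardown("k8s-pod-network.cc835ff8", "wep-apiserver") // second pass is a no-op
    }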
Sep 12 17:40:45.515774 systemd-logind[1576]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:40:45.519096 systemd-logind[1576]: Removed session 13. Sep 12 17:40:45.659806 sshd[5668]: Accepted publickey for core from 147.75.109.163 port 41758 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:45.665893 sshd[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:45.676435 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:45.670160 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:45.670225 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:45.681946 systemd-logind[1576]: New session 14 of user core. Sep 12 17:40:45.689070 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:40:46.117239 sshd[5668]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:46.125231 systemd[1]: sshd@13-159.223.204.96:22-147.75.109.163:41758.service: Deactivated successfully. Sep 12 17:40:46.135443 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:40:46.137483 systemd-logind[1576]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:40:46.140982 systemd-logind[1576]: Removed session 14. Sep 12 17:40:46.603555 containerd[1597]: time="2025-09-12T17:40:46.603268815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:46.606284 containerd[1597]: time="2025-09-12T17:40:46.605763489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:40:46.607373 containerd[1597]: time="2025-09-12T17:40:46.607282501Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:46.619028 containerd[1597]: time="2025-09-12T17:40:46.618281550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:46.619028 containerd[1597]: time="2025-09-12T17:40:46.618812305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.88981488s" Sep 12 17:40:46.619028 containerd[1597]: time="2025-09-12T17:40:46.618870465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:40:46.622517 containerd[1597]: time="2025-09-12T17:40:46.622467611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:46.653965 containerd[1597]: time="2025-09-12T17:40:46.651664528Z" level=info msg="CreateContainer within sandbox \"b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:40:46.737964 containerd[1597]: time="2025-09-12T17:40:46.737420631Z" level=info msg="CreateContainer within sandbox 
\"b88c7132c68527d46cd6276b8d8870e4c5eb9beb05c0251e3d34b87e9e5d7d22\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fc2536f76647b37bf46e12019b0037fa99844250d55952b8da6f84b74f185fd7\"" Sep 12 17:40:46.738486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30156570.mount: Deactivated successfully. Sep 12 17:40:46.741232 containerd[1597]: time="2025-09-12T17:40:46.740902763Z" level=info msg="StartContainer for \"fc2536f76647b37bf46e12019b0037fa99844250d55952b8da6f84b74f185fd7\"" Sep 12 17:40:47.050642 containerd[1597]: time="2025-09-12T17:40:47.047166447Z" level=info msg="StartContainer for \"fc2536f76647b37bf46e12019b0037fa99844250d55952b8da6f84b74f185fd7\" returns successfully" Sep 12 17:40:47.098474 containerd[1597]: time="2025-09-12T17:40:47.098369815Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:47.099511 containerd[1597]: time="2025-09-12T17:40:47.099319267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:40:47.111355 containerd[1597]: time="2025-09-12T17:40:47.111294240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 488.772022ms" Sep 12 17:40:47.111355 containerd[1597]: time="2025-09-12T17:40:47.111341244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:40:47.118735 containerd[1597]: time="2025-09-12T17:40:47.118252712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:40:47.142001 containerd[1597]: time="2025-09-12T17:40:47.141182997Z" level=info msg="CreateContainer within sandbox \"ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:47.261663 containerd[1597]: time="2025-09-12T17:40:47.259953821Z" level=info msg="CreateContainer within sandbox \"ad26abecdc8db5341661acab9071bec67d4d8b745794e90fa3e996d61f5ccaf9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ad9b5f93f97dc79129a2a73b1fd1e459c2405a6074f6e61b864f4ac41ebd9027\"" Sep 12 17:40:47.264379 containerd[1597]: time="2025-09-12T17:40:47.263662725Z" level=info msg="StartContainer for \"ad9b5f93f97dc79129a2a73b1fd1e459c2405a6074f6e61b864f4ac41ebd9027\"" Sep 12 17:40:47.434985 kubelet[2717]: I0912 17:40:47.413547 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7795d66b4-bkptg" podStartSLOduration=31.623224196 podStartE2EDuration="46.400531869s" podCreationTimestamp="2025-09-12 17:40:01 +0000 UTC" firstStartedPulling="2025-09-12 17:40:31.84350091 +0000 UTC m=+50.971396566" lastFinishedPulling="2025-09-12 17:40:46.620808568 +0000 UTC m=+65.748704239" observedRunningTime="2025-09-12 17:40:47.215548362 +0000 UTC m=+66.343444041" watchObservedRunningTime="2025-09-12 17:40:47.400531869 +0000 UTC m=+66.528427561" Sep 12 17:40:47.470008 containerd[1597]: time="2025-09-12T17:40:47.469923992Z" level=info msg="StartContainer for 
\"ad9b5f93f97dc79129a2a73b1fd1e459c2405a6074f6e61b864f4ac41ebd9027\" returns successfully" Sep 12 17:40:47.720093 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:47.717177 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:47.717207 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:49.773061 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:49.768166 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:49.768226 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:50.239073 kubelet[2717]: I0912 17:40:50.236104 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bd46c69b9-pfsld" podStartSLOduration=39.133170931 podStartE2EDuration="54.219327271s" podCreationTimestamp="2025-09-12 17:39:56 +0000 UTC" firstStartedPulling="2025-09-12 17:40:32.026091782 +0000 UTC m=+51.153987456" lastFinishedPulling="2025-09-12 17:40:47.11224814 +0000 UTC m=+66.240143796" observedRunningTime="2025-09-12 17:40:48.230647185 +0000 UTC m=+67.358542878" watchObservedRunningTime="2025-09-12 17:40:50.219327271 +0000 UTC m=+69.347222951" Sep 12 17:40:51.138409 systemd[1]: Started sshd@14-159.223.204.96:22-147.75.109.163:56328.service - OpenSSH per-connection server daemon (147.75.109.163:56328). Sep 12 17:40:51.164724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147486219.mount: Deactivated successfully. Sep 12 17:40:51.345259 sshd[5800]: Accepted publickey for core from 147.75.109.163 port 56328 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:51.346735 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:51.356268 systemd-logind[1576]: New session 15 of user core. Sep 12 17:40:51.360069 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:40:51.819261 systemd-journald[1141]: Under memory pressure, flushing caches. Sep 12 17:40:51.814309 systemd-resolved[1483]: Under memory pressure, flushing caches. Sep 12 17:40:51.814318 systemd-resolved[1483]: Flushed all caches. Sep 12 17:40:52.252299 sshd[5800]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:52.284349 systemd[1]: sshd@14-159.223.204.96:22-147.75.109.163:56328.service: Deactivated successfully. Sep 12 17:40:52.307629 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:40:52.309399 systemd-logind[1576]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:40:52.318148 systemd-logind[1576]: Removed session 15. 
Sep 12 17:40:52.876714 containerd[1597]: time="2025-09-12T17:40:52.876439518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:52.887913 containerd[1597]: time="2025-09-12T17:40:52.886728842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:40:52.904335 containerd[1597]: time="2025-09-12T17:40:52.904277881Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:52.908767 containerd[1597]: time="2025-09-12T17:40:52.908110727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:52.909884 containerd[1597]: time="2025-09-12T17:40:52.909759126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.791365517s" Sep 12 17:40:52.910044 containerd[1597]: time="2025-09-12T17:40:52.910020078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:40:52.974451 containerd[1597]: time="2025-09-12T17:40:52.945618873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:40:53.025371 containerd[1597]: time="2025-09-12T17:40:53.025307294Z" level=info msg="CreateContainer within sandbox \"a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:40:53.062585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180879651.mount: Deactivated successfully. Sep 12 17:40:53.089067 containerd[1597]: time="2025-09-12T17:40:53.088986753Z" level=info msg="CreateContainer within sandbox \"a9a2da97728b7afcf93fe1a90026af3bf5f85126ca7eedc4fd65d0e353886cb2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e911931d23155e13fe11c111cfb90ba60f5422f1d0559d3d0ad7bd77aed63d48\"" Sep 12 17:40:53.091813 containerd[1597]: time="2025-09-12T17:40:53.091736068Z" level=info msg="StartContainer for \"e911931d23155e13fe11c111cfb90ba60f5422f1d0559d3d0ad7bd77aed63d48\"" Sep 12 17:40:53.289398 systemd[1]: run-containerd-runc-k8s.io-e911931d23155e13fe11c111cfb90ba60f5422f1d0559d3d0ad7bd77aed63d48-runc.cbi8Uj.mount: Deactivated successfully. 
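[Annotation] The goldmane entries above trace the standard containerd lifecycle driven for every container in this log: pull the image (the ImageCreate events, then the "Pulled image ... in 5.791365517s" summary), CreateContainer within the pod's existing sandbox, then StartContainer; the transient tmpmounts/runc .mount units that systemd reports as deactivated are containerd's scratch mounts being cleaned up. A rough sketch of the same pull/create/start sequence against containerd's Go client (a minimal example assuming the upstream github.com/containerd/containerd client and the k8s.io namespace; the kubelet actually drives this through the CRI API rather than this client):

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/cio"
    	"github.com/containerd/containerd/namespaces"
    	"github.com/containerd/containerd/oci"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// "PullImage ..." / "Pulled image ... returns image reference ..."
    	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.3",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// "CreateContainer within sandbox ... returns container id ..."
    	container, err := client.NewContainer(ctx, "goldmane-demo",
    		containerd.WithNewSnapshot("goldmane-demo-snap", image),
    		containerd.WithNewSpec(oci.WithImageConfig(image)))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    	// "StartContainer ... returns successfully"
    	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := task.Start(ctx); err != nil {
    		log.Fatal(err)
    	}
    }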
Sep 12 17:40:53.389070 containerd[1597]: time="2025-09-12T17:40:53.388962021Z" level=info msg="StartContainer for \"e911931d23155e13fe11c111cfb90ba60f5422f1d0559d3d0ad7bd77aed63d48\" returns successfully"
Sep 12 17:40:53.701839 kubelet[2717]: I0912 17:40:53.695508 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-8q4cb" podStartSLOduration=33.861959063 podStartE2EDuration="53.683528781s" podCreationTimestamp="2025-09-12 17:40:00 +0000 UTC" firstStartedPulling="2025-09-12 17:40:33.109426059 +0000 UTC m=+52.237321740" lastFinishedPulling="2025-09-12 17:40:52.930995788 +0000 UTC m=+72.058891458" observedRunningTime="2025-09-12 17:40:53.656038084 +0000 UTC m=+72.783933758" watchObservedRunningTime="2025-09-12 17:40:53.683528781 +0000 UTC m=+72.811424465"
Sep 12 17:40:53.863935 systemd-journald[1141]: Under memory pressure, flushing caches.
Sep 12 17:40:53.861207 systemd-resolved[1483]: Under memory pressure, flushing caches.
Sep 12 17:40:53.861248 systemd-resolved[1483]: Flushed all caches.
Sep 12 17:40:55.506985 containerd[1597]: time="2025-09-12T17:40:55.506836066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:55.509917 containerd[1597]: time="2025-09-12T17:40:55.509561899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 17:40:55.512361 containerd[1597]: time="2025-09-12T17:40:55.511345058Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:55.514915 containerd[1597]: time="2025-09-12T17:40:55.514837045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:55.516792 containerd[1597]: time="2025-09-12T17:40:55.516739319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.543611665s"
Sep 12 17:40:55.517082 containerd[1597]: time="2025-09-12T17:40:55.517054012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 17:40:55.540011 containerd[1597]: time="2025-09-12T17:40:55.539917069Z" level=info msg="CreateContainer within sandbox \"921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:40:55.632091 containerd[1597]: time="2025-09-12T17:40:55.631900086Z" level=info msg="CreateContainer within sandbox \"921de3d255e35b4cfca7eac759065c3e16ac4d1c08193b94eb741d343a803f25\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"da793516deea5dc742dcde81bb0abe851a89b3f8b494ab7a738296882b0b34b2\""
Sep 12 17:40:55.634796 containerd[1597]: time="2025-09-12T17:40:55.633598263Z" level=info msg="StartContainer for \"da793516deea5dc742dcde81bb0abe851a89b3f8b494ab7a738296882b0b34b2\""
Sep 12 17:40:55.790534 containerd[1597]: time="2025-09-12T17:40:55.789465245Z" level=info msg="StartContainer for \"da793516deea5dc742dcde81bb0abe851a89b3f8b494ab7a738296882b0b34b2\" returns successfully"
Sep 12 17:40:56.463939 kubelet[2717]: I0912 17:40:56.451481 2717 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:40:56.466403 kubelet[2717]: I0912 17:40:56.466330 2717 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:40:57.264012 systemd[1]: Started sshd@15-159.223.204.96:22-147.75.109.163:56342.service - OpenSSH per-connection server daemon (147.75.109.163:56342).
Sep 12 17:40:57.430970 sshd[5949]: Accepted publickey for core from 147.75.109.163 port 56342 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:40:57.434449 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:40:57.442394 systemd-logind[1576]: New session 16 of user core.
Sep 12 17:40:57.448409 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:40:58.514672 sshd[5949]: pam_unix(sshd:session): session closed for user core
Sep 12 17:40:58.527629 systemd[1]: sshd@15-159.223.204.96:22-147.75.109.163:56342.service: Deactivated successfully.
Sep 12 17:40:58.536253 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:40:58.536475 systemd-logind[1576]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:40:58.546661 systemd-logind[1576]: Removed session 16.
Sep 12 17:40:59.688266 systemd-journald[1141]: Under memory pressure, flushing caches.
Sep 12 17:40:59.688046 systemd-resolved[1483]: Under memory pressure, flushing caches.
Sep 12 17:40:59.688114 systemd-resolved[1483]: Flushed all caches.
Sep 12 17:41:03.537416 systemd[1]: Started sshd@16-159.223.204.96:22-147.75.109.163:49672.service - OpenSSH per-connection server daemon (147.75.109.163:49672).
Sep 12 17:41:03.762955 sshd[5982]: Accepted publickey for core from 147.75.109.163 port 49672 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:03.768235 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:03.787975 systemd-logind[1576]: New session 17 of user core.
Sep 12 17:41:03.799327 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:41:04.708200 sshd[5982]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:04.715705 systemd[1]: sshd@16-159.223.204.96:22-147.75.109.163:49672.service: Deactivated successfully.
Sep 12 17:41:04.716359 systemd-logind[1576]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:41:04.723970 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:41:04.725995 systemd-logind[1576]: Removed session 17.
Sep 12 17:41:05.724828 systemd-journald[1141]: Under memory pressure, flushing caches.
Sep 12 17:41:05.721445 systemd-resolved[1483]: Under memory pressure, flushing caches.
Sep 12 17:41:05.721456 systemd-resolved[1483]: Flushed all caches.
Sep 12 17:41:09.720424 systemd[1]: Started sshd@17-159.223.204.96:22-147.75.109.163:49682.service - OpenSSH per-connection server daemon (147.75.109.163:49682).
Sep 12 17:41:09.799333 sshd[5996]: Accepted publickey for core from 147.75.109.163 port 49682 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:09.801613 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:09.810142 systemd-logind[1576]: New session 18 of user core.
Sep 12 17:41:09.819281 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:41:10.051059 sshd[5996]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:10.065516 systemd[1]: Started sshd@18-159.223.204.96:22-147.75.109.163:49502.service - OpenSSH per-connection server daemon (147.75.109.163:49502).
Sep 12 17:41:10.068659 systemd[1]: sshd@17-159.223.204.96:22-147.75.109.163:49682.service: Deactivated successfully.
Sep 12 17:41:10.077382 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:41:10.081412 systemd-logind[1576]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:41:10.084292 systemd-logind[1576]: Removed session 18.
Sep 12 17:41:10.148427 sshd[6007]: Accepted publickey for core from 147.75.109.163 port 49502 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:10.152276 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:10.163533 systemd-logind[1576]: New session 19 of user core.
Sep 12 17:41:10.170612 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:41:10.628466 sshd[6007]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:10.636417 systemd[1]: Started sshd@19-159.223.204.96:22-147.75.109.163:49514.service - OpenSSH per-connection server daemon (147.75.109.163:49514).
Sep 12 17:41:10.652241 systemd[1]: sshd@18-159.223.204.96:22-147.75.109.163:49502.service: Deactivated successfully.
Sep 12 17:41:10.665258 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:41:10.667530 systemd-logind[1576]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:41:10.673718 systemd-logind[1576]: Removed session 19.
Sep 12 17:41:10.745095 sshd[6018]: Accepted publickey for core from 147.75.109.163 port 49514 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:10.748216 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:10.786641 systemd-logind[1576]: New session 20 of user core.
Sep 12 17:41:10.791564 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:41:13.734081 systemd-journald[1141]: Under memory pressure, flushing caches.
Sep 12 17:41:13.732885 systemd-resolved[1483]: Under memory pressure, flushing caches.
Sep 12 17:41:13.732908 systemd-resolved[1483]: Flushed all caches.
Sep 12 17:41:14.088592 sshd[6018]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:14.107491 systemd[1]: Started sshd@20-159.223.204.96:22-147.75.109.163:49516.service - OpenSSH per-connection server daemon (147.75.109.163:49516).
Sep 12 17:41:14.131418 systemd[1]: sshd@19-159.223.204.96:22-147.75.109.163:49514.service: Deactivated successfully.
Sep 12 17:41:14.181894 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:41:14.181995 systemd-logind[1576]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:41:14.196971 systemd-logind[1576]: Removed session 20.
Sep 12 17:41:14.300724 sshd[6065]: Accepted publickey for core from 147.75.109.163 port 49516 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:14.307235 sshd[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:14.322885 systemd-logind[1576]: New session 21 of user core.
Sep 12 17:41:14.332277 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:41:14.961646 systemd[1]: run-containerd-runc-k8s.io-e911931d23155e13fe11c111cfb90ba60f5422f1d0559d3d0ad7bd77aed63d48-runc.PUheQk.mount: Deactivated successfully.
Sep 12 17:41:15.757376 systemd-journald[1141]: Under memory pressure, flushing caches.
Sep 12 17:41:15.750440 systemd-resolved[1483]: Under memory pressure, flushing caches.
Sep 12 17:41:15.750574 systemd-resolved[1483]: Flushed all caches.
Sep 12 17:41:15.937950 kubelet[2717]: I0912 17:41:15.912704 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n244d" podStartSLOduration=48.222184179 podStartE2EDuration="1m14.862487099s" podCreationTimestamp="2025-09-12 17:40:01 +0000 UTC" firstStartedPulling="2025-09-12 17:40:28.888413795 +0000 UTC m=+48.016309485" lastFinishedPulling="2025-09-12 17:40:55.528716717 +0000 UTC m=+74.656612405" observedRunningTime="2025-09-12 17:40:56.86747545 +0000 UTC m=+75.995371135" watchObservedRunningTime="2025-09-12 17:41:15.862487099 +0000 UTC m=+94.990382786"
Sep 12 17:41:16.154528 kubelet[2717]: E0912 17:41:16.152889 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:41:16.302607 sshd[6065]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:16.325042 systemd[1]: sshd@20-159.223.204.96:22-147.75.109.163:49516.service: Deactivated successfully.
Sep 12 17:41:16.337998 systemd-logind[1576]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:41:16.347996 systemd[1]: Started sshd@21-159.223.204.96:22-147.75.109.163:49532.service - OpenSSH per-connection server daemon (147.75.109.163:49532).
Sep 12 17:41:16.349526 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:41:16.352403 systemd-logind[1576]: Removed session 21.
Sep 12 17:41:16.456921 sshd[6134]: Accepted publickey for core from 147.75.109.163 port 49532 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:16.459053 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:16.466911 systemd-logind[1576]: New session 22 of user core.
Sep 12 17:41:16.474453 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:41:16.794410 sshd[6134]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:16.800605 systemd[1]: sshd@21-159.223.204.96:22-147.75.109.163:49532.service: Deactivated successfully.
Sep 12 17:41:16.808420 systemd-logind[1576]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:41:16.809488 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:41:16.813957 systemd-logind[1576]: Removed session 22.
Sep 12 17:41:17.127694 kubelet[2717]: E0912 17:41:17.127500 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:41:19.130894 kubelet[2717]: E0912 17:41:19.128655 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:41:20.127152 kubelet[2717]: E0912 17:41:20.127092 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:41:21.811894 systemd[1]: Started sshd@22-159.223.204.96:22-147.75.109.163:58402.service - OpenSSH per-connection server daemon (147.75.109.163:58402).
Sep 12 17:41:21.957482 sshd[6153]: Accepted publickey for core from 147.75.109.163 port 58402 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:21.961758 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:21.972518 systemd-logind[1576]: New session 23 of user core.
Sep 12 17:41:21.981547 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:41:22.140793 kubelet[2717]: E0912 17:41:22.140636 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 12 17:41:22.259453 sshd[6153]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:22.272593 systemd[1]: sshd@22-159.223.204.96:22-147.75.109.163:58402.service: Deactivated successfully.
Sep 12 17:41:22.276595 systemd-logind[1576]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:41:22.281634 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:41:22.283548 systemd-logind[1576]: Removed session 23.
Sep 12 17:41:25.672328 systemd-journald[1141]: Under memory pressure, flushing caches.
Sep 12 17:41:25.669137 systemd-resolved[1483]: Under memory pressure, flushing caches.
Sep 12 17:41:25.669147 systemd-resolved[1483]: Flushed all caches.
Sep 12 17:41:27.275400 systemd[1]: Started sshd@23-159.223.204.96:22-147.75.109.163:58412.service - OpenSSH per-connection server daemon (147.75.109.163:58412).
Sep 12 17:41:27.554209 sshd[6166]: Accepted publickey for core from 147.75.109.163 port 58412 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:27.557810 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:27.577293 systemd-logind[1576]: New session 24 of user core.
Sep 12 17:41:27.587341 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:41:28.018151 sshd[6166]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:28.026392 systemd[1]: sshd@23-159.223.204.96:22-147.75.109.163:58412.service: Deactivated successfully.
Sep 12 17:41:28.040336 systemd-logind[1576]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:41:28.042371 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:41:28.049639 systemd-logind[1576]: Removed session 24.
Sep 12 17:41:31.688308 systemd-journald[1141]: Under memory pressure, flushing caches.
Sep 12 17:41:31.687096 systemd-resolved[1483]: Under memory pressure, flushing caches.
Sep 12 17:41:31.687108 systemd-resolved[1483]: Flushed all caches.
Sep 12 17:41:33.046296 systemd[1]: Started sshd@24-159.223.204.96:22-147.75.109.163:57246.service - OpenSSH per-connection server daemon (147.75.109.163:57246).
Sep 12 17:41:33.232632 sshd[6201]: Accepted publickey for core from 147.75.109.163 port 57246 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:41:33.236609 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:41:33.255398 systemd-logind[1576]: New session 25 of user core.
Sep 12 17:41:33.263303 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:41:33.743127 systemd-journald[1141]: Under memory pressure, flushing caches.
Sep 12 17:41:33.741227 systemd-resolved[1483]: Under memory pressure, flushing caches.
Sep 12 17:41:33.741243 systemd-resolved[1483]: Flushed all caches.
Sep 12 17:41:34.037210 sshd[6201]: pam_unix(sshd:session): session closed for user core
Sep 12 17:41:34.046554 systemd-logind[1576]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:41:34.046876 systemd[1]: sshd@24-159.223.204.96:22-147.75.109.163:57246.service: Deactivated successfully.
Sep 12 17:41:34.058504 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:41:34.062281 systemd-logind[1576]: Removed session 25.