Sep 13 00:24:51.094695 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:24:51.094742 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:24:51.094762 kernel: BIOS-provided physical RAM map:
Sep 13 00:24:51.094777 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 13 00:24:51.094791 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 13 00:24:51.094806 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 13 00:24:51.094824 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 13 00:24:51.094843 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 13 00:24:51.094858 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Sep 13 00:24:51.094873 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Sep 13 00:24:51.094892 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Sep 13 00:24:51.094907 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Sep 13 00:24:51.094944 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 13 00:24:51.095035 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 13 00:24:51.095069 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 13 00:24:51.095087 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 13 00:24:51.095104 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 13 00:24:51.095120 kernel: NX (Execute Disable) protection: active
Sep 13 00:24:51.095136 kernel: APIC: Static calls initialized
Sep 13 00:24:51.095153 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:24:51.095170 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Sep 13 00:24:51.095187 kernel: SMBIOS 2.4 present.
Sep 13 00:24:51.095204 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 13 00:24:51.095221 kernel: Hypervisor detected: KVM
Sep 13 00:24:51.095241 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:24:51.095258 kernel: kvm-clock: using sched offset of 12451718288 cycles
Sep 13 00:24:51.095276 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:24:51.095293 kernel: tsc: Detected 2299.998 MHz processor
Sep 13 00:24:51.095310 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:24:51.095327 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:24:51.095344 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 13 00:24:51.095361 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 13 00:24:51.095377 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:24:51.095398 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 13 00:24:51.095414 kernel: Using GB pages for direct mapping
Sep 13 00:24:51.095431 kernel: Secure boot disabled
Sep 13 00:24:51.095447 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:24:51.095464 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 13 00:24:51.095481 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 13 00:24:51.095498 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 13 00:24:51.095521 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 13 00:24:51.095542 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 13 00:24:51.095560 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 13 00:24:51.095579 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 13 00:24:51.095596 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 13 00:24:51.095615 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 13 00:24:51.095633 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 13 00:24:51.095654 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 13 00:24:51.095672 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 13 00:24:51.095689 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 13 00:24:51.095707 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 13 00:24:51.095725 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 13 00:24:51.095742 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 13 00:24:51.095760 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 13 00:24:51.095778 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 13 00:24:51.095795 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 13 00:24:51.095816 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 13 00:24:51.095834 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:24:51.095852 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:24:51.095869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:24:51.095888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 13 00:24:51.095906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 13 00:24:51.095924 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Sep 13 00:24:51.095942 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Sep 13 00:24:51.096082 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Sep 13 00:24:51.096108 kernel: Zone ranges:
Sep 13 00:24:51.096127 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:24:51.096144 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:24:51.096161 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 13 00:24:51.096187 kernel: Movable zone start for each node
Sep 13 00:24:51.096205 kernel: Early memory node ranges
Sep 13 00:24:51.096223 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 13 00:24:51.096241 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 13 00:24:51.096258 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Sep 13 00:24:51.096280 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 13 00:24:51.096297 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 13 00:24:51.096314 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 13 00:24:51.096331 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:24:51.096349 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 13 00:24:51.096366 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 13 00:24:51.096390 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 13 00:24:51.096408 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 13 00:24:51.096430 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:24:51.096452 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:24:51.096469 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:24:51.096487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:24:51.096504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:24:51.096522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:24:51.096539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:24:51.096557 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:24:51.096574 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:24:51.096598 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 13 00:24:51.096619 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:24:51.096637 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:24:51.096655 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:24:51.096673 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:24:51.096691 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:24:51.096708 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:24:51.096725 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:24:51.096742 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:24:51.096760 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:24:51.096783 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:24:51.096800 kernel: random: crng init done
Sep 13 00:24:51.096824 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 13 00:24:51.096842 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:24:51.096859 kernel: Fallback order for Node 0: 0
Sep 13 00:24:51.096878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Sep 13 00:24:51.096895 kernel: Policy zone: Normal
Sep 13 00:24:51.096912 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:24:51.096933 kernel: software IO TLB: area num 2.
Sep 13 00:24:51.096950 kernel: Memory: 7513400K/7860584K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 346924K reserved, 0K cma-reserved)
Sep 13 00:24:51.096983 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:24:51.097000 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:24:51.097025 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:24:51.097043 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:24:51.097066 kernel: Dynamic Preempt: voluntary
Sep 13 00:24:51.097083 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:24:51.097103 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:24:51.097139 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:24:51.097157 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:24:51.097176 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:24:51.097198 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:24:51.097217 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:24:51.097243 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:24:51.097262 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:24:51.097281 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:24:51.097301 kernel: Console: colour dummy device 80x25
Sep 13 00:24:51.097323 kernel: printk: console [ttyS0] enabled
Sep 13 00:24:51.097342 kernel: ACPI: Core revision 20230628
Sep 13 00:24:51.097360 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:24:51.097379 kernel: x2apic enabled
Sep 13 00:24:51.097397 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:24:51.097436 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 13 00:24:51.097457 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 13 00:24:51.097476 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 13 00:24:51.097499 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 13 00:24:51.097518 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 13 00:24:51.097537 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:24:51.097556 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 00:24:51.097575 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 00:24:51.097594 kernel: Spectre V2 : Mitigation: IBRS
Sep 13 00:24:51.097613 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:24:51.097632 kernel: RETBleed: Mitigation: IBRS
Sep 13 00:24:51.097651 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:24:51.097673 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 13 00:24:51.097692 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:24:51.097711 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:24:51.097730 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:24:51.097749 kernel: active return thunk: its_return_thunk
Sep 13 00:24:51.097768 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:24:51.097787 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:24:51.097806 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:24:51.097825 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:24:51.097847 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:24:51.097866 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:24:51.097885 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:24:51.097904 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:24:51.097922 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:24:51.097947 kernel: landlock: Up and running.
Sep 13 00:24:51.097978 kernel: SELinux: Initializing.
Sep 13 00:24:51.097997 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:24:51.098016 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:24:51.098040 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 13 00:24:51.098064 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:24:51.098083 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:24:51.098103 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:24:51.098122 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 13 00:24:51.098141 kernel: signal: max sigframe size: 1776
Sep 13 00:24:51.098159 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:24:51.098179 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:24:51.098197 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:24:51.098221 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:24:51.098239 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:24:51.098258 kernel: .... node #0, CPUs: #1
Sep 13 00:24:51.098278 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:24:51.098298 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:24:51.098316 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:24:51.098335 kernel: smpboot: Max logical packages: 1
Sep 13 00:24:51.098353 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 13 00:24:51.098376 kernel: devtmpfs: initialized
Sep 13 00:24:51.098395 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:24:51.098414 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 13 00:24:51.098433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:24:51.098452 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:24:51.098471 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:24:51.098490 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:24:51.098509 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:24:51.098528 kernel: audit: type=2000 audit(1757723090.072:1): state=initialized audit_enabled=0 res=1
Sep 13 00:24:51.098550 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:24:51.098570 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:24:51.098589 kernel: cpuidle: using governor menu
Sep 13 00:24:51.098608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:24:51.098626 kernel: dca service started, version 1.12.1
Sep 13 00:24:51.098645 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:24:51.098664 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:24:51.098683 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:24:51.098702 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:24:51.098724 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:24:51.098744 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:24:51.098762 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:24:51.098782 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:24:51.098800 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:24:51.098820 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:24:51.098838 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:24:51.098857 kernel: ACPI: Interpreter enabled
Sep 13 00:24:51.098876 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:24:51.098898 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:24:51.098918 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:24:51.098981 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 13 00:24:51.099022 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 13 00:24:51.099045 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:24:51.099344 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:24:51.099562 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:24:51.099756 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:24:51.099781 kernel: PCI host bridge to bus 0000:00
Sep 13 00:24:51.100216 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:24:51.100414 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:24:51.100576 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:24:51.100733 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 13 00:24:51.100890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:24:51.102770 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:24:51.103261 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Sep 13 00:24:51.103458 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:24:51.103638 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:24:51.103826 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Sep 13 00:24:51.104026 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Sep 13 00:24:51.104300 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Sep 13 00:24:51.104547 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:24:51.104758 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Sep 13 00:24:51.104983 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Sep 13 00:24:51.105197 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:24:51.105416 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 13 00:24:51.105624 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Sep 13 00:24:51.105657 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:24:51.105678 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:24:51.105698 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:24:51.105717 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:24:51.105736 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:24:51.105755 kernel: iommu: Default domain type: Translated
Sep 13 00:24:51.105774 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:24:51.105793 kernel: efivars: Registered efivars operations
Sep 13 00:24:51.105812 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:24:51.105836 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:24:51.105855 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 13 00:24:51.105875 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 13 00:24:51.105893 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 13 00:24:51.105912 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 13 00:24:51.105931 kernel: vgaarb: loaded
Sep 13 00:24:51.105951 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:24:51.108052 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:24:51.108076 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:24:51.108101 kernel: pnp: PnP ACPI init
Sep 13 00:24:51.108118 kernel: pnp: PnP ACPI: found 7 devices
Sep 13 00:24:51.108134 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:24:51.108152 kernel: NET: Registered PF_INET protocol family
Sep 13 00:24:51.108170 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:24:51.108188 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 13 00:24:51.108206 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:24:51.108223 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:24:51.108240 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 13 00:24:51.108264 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 13 00:24:51.108282 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:24:51.108301 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:24:51.108332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:24:51.108352 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:24:51.108568 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:24:51.108754 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:24:51.108932 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:24:51.109141 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 13 00:24:51.109359 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:24:51.109386 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:24:51.109406 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 00:24:51.109425 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 13 00:24:51.109444 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:24:51.109462 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 13 00:24:51.109480 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:24:51.109504 kernel: Initialise system trusted keyrings
Sep 13 00:24:51.109523 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 13 00:24:51.109540 kernel: Key type asymmetric registered
Sep 13 00:24:51.109559 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:24:51.109576 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:24:51.109594 kernel: io scheduler mq-deadline registered
Sep 13 00:24:51.109613 kernel: io scheduler kyber registered
Sep 13 00:24:51.109631 kernel: io scheduler bfq registered
Sep 13 00:24:51.109650 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:24:51.109673 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:24:51.109893 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 13 00:24:51.109922 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 13 00:24:51.112185 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 13 00:24:51.112221 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:24:51.112506 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 13 00:24:51.112535 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:24:51.112555 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:24:51.112575 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 00:24:51.112600 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 13 00:24:51.112620 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 13 00:24:51.112823 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 13 00:24:51.112850 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:24:51.112868 kernel: i8042: Warning: Keylock active
Sep 13 00:24:51.112886 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:24:51.112904 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:24:51.113147 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:24:51.113343 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:24:51.113518 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:24:50 UTC (1757723090)
Sep 13 00:24:51.113691 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:24:51.113715 kernel: intel_pstate: CPU model not supported
Sep 13 00:24:51.113735 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:24:51.113754 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:24:51.113773 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:24:51.113796 kernel: Segment Routing with IPv6
Sep 13 00:24:51.113812 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:24:51.113831 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:24:51.113848 kernel: Key type dns_resolver registered
Sep 13 00:24:51.113866 kernel: IPI shorthand broadcast: enabled
Sep 13 00:24:51.113886 kernel: sched_clock: Marking stable (829004884, 129884129)->(973405787, -14516774)
Sep 13 00:24:51.113904 kernel: registered taskstats version 1
Sep 13 00:24:51.113923 kernel: Loading compiled-in X.509 certificates
Sep 13 00:24:51.113942 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:24:51.116000 kernel: Key type .fscrypt registered
Sep 13 00:24:51.116034 kernel: Key type fscrypt-provisioning registered
Sep 13 00:24:51.116054 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:24:51.116072 kernel: ima: No architecture policies found
Sep 13 00:24:51.116091 kernel: clk: Disabling unused clocks
Sep 13 00:24:51.116110 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:24:51.116128 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:24:51.116145 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:24:51.116162 kernel: Run /init as init process
Sep 13 00:24:51.116186 kernel: with arguments:
Sep 13 00:24:51.116205 kernel: /init
Sep 13 00:24:51.116223 kernel: with environment:
Sep 13 00:24:51.116241 kernel: HOME=/
Sep 13 00:24:51.116258 kernel: TERM=linux
Sep 13 00:24:51.116277 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:24:51.116295 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:24:51.116326 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:24:51.116354 systemd[1]: Detected virtualization google.
Sep 13 00:24:51.116375 systemd[1]: Detected architecture x86-64.
Sep 13 00:24:51.116395 systemd[1]: Running in initrd.
Sep 13 00:24:51.116411 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:24:51.116429 systemd[1]: Hostname set to .
Sep 13 00:24:51.116448 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:24:51.116465 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:24:51.116483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:24:51.116508 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:24:51.116529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:24:51.116549 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:24:51.116570 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:24:51.116591 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:24:51.116611 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:24:51.116634 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:24:51.116653 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:24:51.116672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:24:51.116713 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:24:51.116737 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:24:51.116757 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:24:51.116777 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:24:51.116800 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:24:51.116821 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:24:51.116841 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:24:51.116861 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:24:51.116881 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:24:51.116902 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:24:51.116921 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:24:51.116940 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:24:51.116991 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:24:51.117011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:24:51.117031 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:24:51.117051 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:24:51.117071 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:24:51.117091 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:24:51.117112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:24:51.117133 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:24:51.117191 systemd-journald[183]: Collecting audit messages is disabled.
Sep 13 00:24:51.117240 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:24:51.117263 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:24:51.117292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:24:51.117324 systemd-journald[183]: Journal started Sep 13 00:24:51.117362 systemd-journald[183]: Runtime Journal (/run/log/journal/d76a189a6acd435ea71e5da9dd4d3ad2) is 8.0M, max 148.7M, 140.7M free. Sep 13 00:24:51.099353 systemd-modules-load[184]: Inserted module 'overlay' Sep 13 00:24:51.122992 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:24:51.141183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:24:51.145440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:24:51.161129 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:24:51.161166 kernel: Bridge firewalling registered Sep 13 00:24:51.155229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:24:51.160063 systemd-modules-load[184]: Inserted module 'br_netfilter' Sep 13 00:24:51.170621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:24:51.183286 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:24:51.183644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:24:51.194181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:24:51.198177 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:24:51.212010 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:24:51.217072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:24:51.225626 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:24:51.239221 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 13 00:24:51.244795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:24:51.276131 dracut-cmdline[216]: dracut-dracut-053 Sep 13 00:24:51.280921 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:24:51.307215 systemd-resolved[217]: Positive Trust Anchors: Sep 13 00:24:51.307724 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:24:51.307792 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:24:51.314310 systemd-resolved[217]: Defaulting to hostname 'linux'. Sep 13 00:24:51.318007 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:24:51.331682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:24:51.382996 kernel: SCSI subsystem initialized Sep 13 00:24:51.394994 kernel: Loading iSCSI transport class v2.0-870. 
Sep 13 00:24:51.408000 kernel: iscsi: registered transport (tcp) Sep 13 00:24:51.432017 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:24:51.432098 kernel: QLogic iSCSI HBA Driver Sep 13 00:24:51.484575 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:24:51.491187 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:24:51.530081 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:24:51.530163 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:24:51.530192 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:24:51.575010 kernel: raid6: avx2x4 gen() 18310 MB/s Sep 13 00:24:51.592002 kernel: raid6: avx2x2 gen() 18506 MB/s Sep 13 00:24:51.609378 kernel: raid6: avx2x1 gen() 14271 MB/s Sep 13 00:24:51.609428 kernel: raid6: using algorithm avx2x2 gen() 18506 MB/s Sep 13 00:24:51.627370 kernel: raid6: .... xor() 17552 MB/s, rmw enabled Sep 13 00:24:51.627409 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:24:51.650004 kernel: xor: automatically using best checksumming function avx Sep 13 00:24:51.822003 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:24:51.834912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:24:51.841233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:24:51.875760 systemd-udevd[400]: Using default interface naming scheme 'v255'. Sep 13 00:24:51.883233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:24:51.891175 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:24:51.921337 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Sep 13 00:24:51.957549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 13 00:24:51.968176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:24:52.065002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:24:52.076144 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:24:52.111364 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:24:52.117113 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:24:52.128087 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:24:52.136112 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:24:52.145195 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:24:52.180841 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:24:52.228158 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:24:52.266770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:24:52.267780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:24:52.288654 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:24:52.288742 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:24:52.288768 kernel: AES CTR mode by8 optimization enabled Sep 13 00:24:52.295683 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:24:52.298687 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Sep 13 00:24:52.300643 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:24:52.301946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:24:52.306118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:24:52.317374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 13 00:24:52.353991 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Sep 13 00:24:52.354364 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Sep 13 00:24:52.357766 kernel: sd 0:0:1:0: [sda] Write Protect is off Sep 13 00:24:52.358091 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Sep 13 00:24:52.358333 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:24:52.358809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:24:52.365242 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:24:52.375105 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:24:52.375144 kernel: GPT:17805311 != 25165823 Sep 13 00:24:52.375169 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:24:52.375192 kernel: GPT:17805311 != 25165823 Sep 13 00:24:52.375222 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:24:52.375247 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:24:52.375273 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Sep 13 00:24:52.403666 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:24:52.432989 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (452) Sep 13 00:24:52.439980 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (469) Sep 13 00:24:52.446133 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Sep 13 00:24:52.466066 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Sep 13 00:24:52.478757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Sep 13 00:24:52.485437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. 
Sep 13 00:24:52.485648 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Sep 13 00:24:52.501357 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:24:52.527244 disk-uuid[549]: Primary Header is updated. Sep 13 00:24:52.527244 disk-uuid[549]: Secondary Entries is updated. Sep 13 00:24:52.527244 disk-uuid[549]: Secondary Header is updated. Sep 13 00:24:52.540060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:24:52.550007 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:24:53.557984 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:24:53.561065 disk-uuid[550]: The operation has completed successfully. Sep 13 00:24:53.634079 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:24:53.634229 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:24:53.657176 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:24:53.694062 sh[564]: Success Sep 13 00:24:53.718991 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:24:53.794124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:24:53.801103 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:24:53.828471 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 13 00:24:53.872107 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:24:53.872190 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:24:53.872216 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:24:53.888378 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:24:53.888464 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:24:53.918992 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:24:53.924191 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:24:53.925139 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:24:53.931150 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:24:53.989051 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:24:53.989151 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:24:53.989178 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:24:54.000219 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:24:54.026163 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:24:54.026210 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:24:54.044018 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:24:54.061600 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:24:54.089219 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:24:54.150765 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 13 00:24:54.168177 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:24:54.274998 ignition[692]: Ignition 2.19.0 Sep 13 00:24:54.275015 ignition[692]: Stage: fetch-offline Sep 13 00:24:54.275731 systemd-networkd[746]: lo: Link UP Sep 13 00:24:54.275069 ignition[692]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:24:54.275739 systemd-networkd[746]: lo: Gained carrier Sep 13 00:24:54.275095 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:24:54.278238 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:24:54.275250 ignition[692]: parsed url from cmdline: "" Sep 13 00:24:54.278614 systemd-networkd[746]: Enumeration completed Sep 13 00:24:54.275257 ignition[692]: no config URL provided Sep 13 00:24:54.279515 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:24:54.275267 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:24:54.279522 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:24:54.275282 ignition[692]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:24:54.281333 systemd-networkd[746]: eth0: Link UP Sep 13 00:24:54.275293 ignition[692]: failed to fetch config: resource requires networking Sep 13 00:24:54.281341 systemd-networkd[746]: eth0: Gained carrier Sep 13 00:24:54.275582 ignition[692]: Ignition finished successfully Sep 13 00:24:54.281354 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 13 00:24:54.370439 ignition[756]: Ignition 2.19.0 Sep 13 00:24:54.297026 systemd-networkd[746]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf.c.flatcar-212911.internal' to 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:24:54.370450 ignition[756]: Stage: fetch Sep 13 00:24:54.297041 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254 Sep 13 00:24:54.370670 ignition[756]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:24:54.299414 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:24:54.370687 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:24:54.316333 systemd[1]: Reached target network.target - Network. Sep 13 00:24:54.370793 ignition[756]: parsed url from cmdline: "" Sep 13 00:24:54.339384 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:24:54.370797 ignition[756]: no config URL provided Sep 13 00:24:54.378586 unknown[756]: fetched base config from "system" Sep 13 00:24:54.370803 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:24:54.378598 unknown[756]: fetched base config from "system" Sep 13 00:24:54.370813 ignition[756]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:24:54.378609 unknown[756]: fetched user config from "gcp" Sep 13 00:24:54.370833 ignition[756]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Sep 13 00:24:54.381775 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:24:54.374023 ignition[756]: GET result: OK Sep 13 00:24:54.416208 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 13 00:24:54.374119 ignition[756]: parsing config with SHA512: e0507c81b1156a7b5db59b2cdc2c1325ee223aaad3560ad813f5d332b8eed51175fd440d931f9f82f523eca916177fa97d61397ba5af104b3f3304862f5b1293 Sep 13 00:24:54.466383 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:24:54.379683 ignition[756]: fetch: fetch complete Sep 13 00:24:54.487284 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:24:54.379693 ignition[756]: fetch: fetch passed Sep 13 00:24:54.548001 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:24:54.379764 ignition[756]: Ignition finished successfully Sep 13 00:24:54.581856 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:24:54.463980 ignition[763]: Ignition 2.19.0 Sep 13 00:24:54.603125 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:24:54.463993 ignition[763]: Stage: kargs Sep 13 00:24:54.622097 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:24:54.464240 ignition[763]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:24:54.637103 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:24:54.464253 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:24:54.654094 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:24:54.465210 ignition[763]: kargs: kargs passed Sep 13 00:24:54.674176 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 13 00:24:54.465265 ignition[763]: Ignition finished successfully Sep 13 00:24:54.545505 ignition[769]: Ignition 2.19.0 Sep 13 00:24:54.545513 ignition[769]: Stage: disks Sep 13 00:24:54.545745 ignition[769]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:24:54.545758 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:24:54.546824 ignition[769]: disks: disks passed Sep 13 00:24:54.546877 ignition[769]: Ignition finished successfully Sep 13 00:24:54.720586 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 00:24:54.899105 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:24:54.905101 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:24:55.059328 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 00:24:55.060273 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:24:55.061140 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:24:55.080085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:24:55.107421 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:24:55.124551 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 13 00:24:55.195143 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (785) Sep 13 00:24:55.195191 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:24:55.195350 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:24:55.195379 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:24:55.195405 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:24:55.195430 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:24:55.124636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:24:55.124681 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:24:55.187206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:24:55.205599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:24:55.236192 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:24:55.355409 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:24:55.367121 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:24:55.378097 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:24:55.389076 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:24:55.506806 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:24:55.537116 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:24:55.564126 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:24:55.559205 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:24:55.582419 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 13 00:24:55.597915 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:24:55.616572 ignition[899]: INFO : Ignition 2.19.0 Sep 13 00:24:55.616572 ignition[899]: INFO : Stage: mount Sep 13 00:24:55.641512 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:24:55.641512 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:24:55.641512 ignition[899]: INFO : mount: mount passed Sep 13 00:24:55.641512 ignition[899]: INFO : Ignition finished successfully Sep 13 00:24:55.619094 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:24:55.637151 systemd-networkd[746]: eth0: Gained IPv6LL Sep 13 00:24:55.639109 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:24:56.070225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:24:56.103159 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (910) Sep 13 00:24:56.103200 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 00:24:56.103226 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:24:56.116687 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:24:56.133299 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:24:56.133363 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:24:56.136892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:24:56.170422 ignition[927]: INFO : Ignition 2.19.0 Sep 13 00:24:56.170422 ignition[927]: INFO : Stage: files Sep 13 00:24:56.185105 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:24:56.185105 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:24:56.185105 ignition[927]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:24:56.185105 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:24:56.185105 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:24:56.185105 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:24:56.185105 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:24:56.185105 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:24:56.185105 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:24:56.185105 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:24:56.180460 unknown[927]: wrote ssh authorized keys file for user: core Sep 13 00:24:56.342710 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:24:57.502384 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 
00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:24:57.902444 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:24:58.426599 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:24:58.426599 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:24:58.464128 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:24:58.464128 ignition[927]: INFO : files: files passed Sep 13 00:24:58.464128 ignition[927]: INFO : Ignition finished successfully Sep 13 00:24:58.430945 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:24:58.451287 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:24:58.494182 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 13 00:24:58.517450 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:24:58.682147 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:24:58.682147 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:24:58.517576 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:24:58.740168 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:24:58.566356 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:24:58.592282 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:24:58.621174 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:24:58.690708 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:24:58.690980 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:24:58.696440 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:24:58.730253 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:24:58.750323 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:24:58.757300 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:24:58.841146 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:24:58.864182 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:24:58.900486 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:24:58.915270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 13 00:24:58.937350 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:24:58.955259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:24:58.955471 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:24:59.002179 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:24:59.002562 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:24:59.019471 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:24:59.034449 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:24:59.052472 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:24:59.071472 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:24:59.089433 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:24:59.106462 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:24:59.127500 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:24:59.144455 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:24:59.161369 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:24:59.161583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:24:59.192390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:24:59.202430 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:24:59.220365 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:24:59.220545 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:24:59.239409 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:24:59.239602 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 13 00:24:59.278410 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:24:59.278629 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:24:59.286433 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:24:59.286602 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:24:59.354220 ignition[980]: INFO : Ignition 2.19.0
Sep 13 00:24:59.354220 ignition[980]: INFO : Stage: umount
Sep 13 00:24:59.354220 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:24:59.354220 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:24:59.354220 ignition[980]: INFO : umount: umount passed
Sep 13 00:24:59.354220 ignition[980]: INFO : Ignition finished successfully
Sep 13 00:24:59.312306 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:24:59.362111 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:24:59.362379 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:24:59.387450 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:24:59.422103 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:24:59.422363 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:24:59.462440 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:24:59.462617 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:24:59.495626 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:24:59.496746 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:24:59.496862 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:24:59.503741 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:24:59.503856 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:24:59.523580 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:24:59.523712 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:24:59.541281 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:24:59.541342 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:24:59.568198 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:24:59.568279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:24:59.586195 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:24:59.586278 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:24:59.604165 systemd[1]: Stopped target network.target - Network.
Sep 13 00:24:59.619077 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:24:59.619185 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:24:59.638283 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:24:59.654175 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:24:59.660031 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:24:59.662228 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:24:59.683266 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:24:59.716167 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:24:59.716249 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:24:59.734158 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:24:59.734239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:24:59.752149 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:24:59.752247 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:24:59.770208 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:24:59.770308 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:24:59.788154 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:24:59.788243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:24:59.806419 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:24:59.812038 systemd-networkd[746]: eth0: DHCPv6 lease lost
Sep 13 00:24:59.817458 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:24:59.845634 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:24:59.845768 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:24:59.855211 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:24:59.855453 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:24:59.873001 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:24:59.873094 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:24:59.899091 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:24:59.925045 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:24:59.925148 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:24:59.943189 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:24:59.943269 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:24:59.961146 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:24:59.961234 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:24:59.979190 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:25:00.351071 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:24:59.979287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:25:00.000380 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:25:00.013445 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:25:00.013678 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:25:00.026490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:25:00.026674 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:25:00.054328 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:25:00.054401 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:25:00.071309 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:25:00.071393 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:25:00.109254 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:25:00.109486 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:25:00.134310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:25:00.134392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:25:00.167168 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:25:00.179056 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:25:00.179153 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:25:00.190155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:25:00.190240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:25:00.201649 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:25:00.201771 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:25:00.221414 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:25:00.221524 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:25:00.240732 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:25:00.266182 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:25:00.302442 systemd[1]: Switching root.
Sep 13 00:25:00.605077 systemd-journald[183]: Journal stopped
Sep 13 00:24:51.094695 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:24:51.094742 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:24:51.094762 kernel: BIOS-provided physical RAM map:
Sep 13 00:24:51.094777 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Sep 13 00:24:51.094791 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Sep 13 00:24:51.094806 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Sep 13 00:24:51.094824 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Sep 13 00:24:51.094843 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Sep 13 00:24:51.094858 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Sep 13 00:24:51.094873 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Sep 13 00:24:51.094892 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Sep 13 00:24:51.094907 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Sep 13 00:24:51.094944 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Sep 13 00:24:51.095035 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Sep 13 00:24:51.095069 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Sep 13 00:24:51.095087 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Sep 13 00:24:51.095104 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Sep 13 00:24:51.095120 kernel: NX (Execute Disable) protection: active
Sep 13 00:24:51.095136 kernel: APIC: Static calls initialized
Sep 13 00:24:51.095153 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:24:51.095170 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Sep 13 00:24:51.095187 kernel: SMBIOS 2.4 present.
Sep 13 00:24:51.095204 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/14/2025
Sep 13 00:24:51.095221 kernel: Hypervisor detected: KVM
Sep 13 00:24:51.095241 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:24:51.095258 kernel: kvm-clock: using sched offset of 12451718288 cycles
Sep 13 00:24:51.095276 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:24:51.095293 kernel: tsc: Detected 2299.998 MHz processor
Sep 13 00:24:51.095310 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:24:51.095327 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:24:51.095344 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Sep 13 00:24:51.095361 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Sep 13 00:24:51.095377 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:24:51.095398 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Sep 13 00:24:51.095414 kernel: Using GB pages for direct mapping
Sep 13 00:24:51.095431 kernel: Secure boot disabled
Sep 13 00:24:51.095447 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:24:51.095464 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Sep 13 00:24:51.095481 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Sep 13 00:24:51.095498 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Sep 13 00:24:51.095521 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Sep 13 00:24:51.095542 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Sep 13 00:24:51.095560 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Sep 13 00:24:51.095579 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Sep 13 00:24:51.095596 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Sep 13 00:24:51.095615 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Sep 13 00:24:51.095633 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Sep 13 00:24:51.095654 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Sep 13 00:24:51.095672 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Sep 13 00:24:51.095689 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Sep 13 00:24:51.095707 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Sep 13 00:24:51.095725 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Sep 13 00:24:51.095742 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Sep 13 00:24:51.095760 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Sep 13 00:24:51.095778 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Sep 13 00:24:51.095795 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Sep 13 00:24:51.095816 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Sep 13 00:24:51.095834 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:24:51.095852 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:24:51.095869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:24:51.095888 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Sep 13 00:24:51.095906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Sep 13 00:24:51.095924 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Sep 13 00:24:51.095942 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Sep 13 00:24:51.096082 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Sep 13 00:24:51.096108 kernel: Zone ranges:
Sep 13 00:24:51.096127 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:24:51.096144 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:24:51.096161 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Sep 13 00:24:51.096187 kernel: Movable zone start for each node
Sep 13 00:24:51.096205 kernel: Early memory node ranges
Sep 13 00:24:51.096223 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Sep 13 00:24:51.096241 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Sep 13 00:24:51.096258 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Sep 13 00:24:51.096280 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Sep 13 00:24:51.096297 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Sep 13 00:24:51.096314 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Sep 13 00:24:51.096331 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:24:51.096349 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Sep 13 00:24:51.096366 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Sep 13 00:24:51.096390 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 13 00:24:51.096408 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Sep 13 00:24:51.096430 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 13 00:24:51.096452 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:24:51.096469 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:24:51.096487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:24:51.096504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:24:51.096522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:24:51.096539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:24:51.096557 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:24:51.096574 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:24:51.096598 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 13 00:24:51.096619 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:24:51.096637 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:24:51.096655 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:24:51.096673 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:24:51.096691 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:24:51.096708 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:24:51.096725 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:24:51.096742 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:24:51.096760 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:24:51.096783 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:24:51.096800 kernel: random: crng init done
Sep 13 00:24:51.096824 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 13 00:24:51.096842 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:24:51.096859 kernel: Fallback order for Node 0: 0
Sep 13 00:24:51.096878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Sep 13 00:24:51.096895 kernel: Policy zone: Normal
Sep 13 00:24:51.096912 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:24:51.096933 kernel: software IO TLB: area num 2.
Sep 13 00:24:51.096950 kernel: Memory: 7513400K/7860584K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 346924K reserved, 0K cma-reserved)
Sep 13 00:24:51.096983 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:24:51.097000 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:24:51.097025 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:24:51.097043 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:24:51.097066 kernel: Dynamic Preempt: voluntary
Sep 13 00:24:51.097083 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:24:51.097103 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:24:51.097139 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:24:51.097157 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:24:51.097176 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:24:51.097198 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:24:51.097217 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:24:51.097243 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:24:51.097262 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:24:51.097281 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:24:51.097301 kernel: Console: colour dummy device 80x25
Sep 13 00:24:51.097323 kernel: printk: console [ttyS0] enabled
Sep 13 00:24:51.097342 kernel: ACPI: Core revision 20230628
Sep 13 00:24:51.097360 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:24:51.097379 kernel: x2apic enabled
Sep 13 00:24:51.097397 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:24:51.097436 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Sep 13 00:24:51.097457 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 13 00:24:51.097476 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Sep 13 00:24:51.097499 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Sep 13 00:24:51.097518 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Sep 13 00:24:51.097537 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:24:51.097556 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 00:24:51.097575 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 00:24:51.097594 kernel: Spectre V2 : Mitigation: IBRS
Sep 13 00:24:51.097613 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:24:51.097632 kernel: RETBleed: Mitigation: IBRS
Sep 13 00:24:51.097651 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:24:51.097673 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Sep 13 00:24:51.097692 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:24:51.097711 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:24:51.097730 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:24:51.097749 kernel: active return thunk: its_return_thunk
Sep 13 00:24:51.097768 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:24:51.097787 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:24:51.097806 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:24:51.097825 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:24:51.097847 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:24:51.097866 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:24:51.097885 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:24:51.097904 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:24:51.097922 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:24:51.097947 kernel: landlock: Up and running.
Sep 13 00:24:51.097978 kernel: SELinux: Initializing.
Sep 13 00:24:51.097997 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:24:51.098016 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:24:51.098040 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Sep 13 00:24:51.098064 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:24:51.098083 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:24:51.098103 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:24:51.098122 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Sep 13 00:24:51.098141 kernel: signal: max sigframe size: 1776
Sep 13 00:24:51.098159 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:24:51.098179 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:24:51.098197 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:24:51.098221 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:24:51.098239 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:24:51.098258 kernel: .... node #0, CPUs: #1
Sep 13 00:24:51.098278 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 13 00:24:51.098298 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:24:51.098316 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:24:51.098335 kernel: smpboot: Max logical packages: 1
Sep 13 00:24:51.098353 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 13 00:24:51.098376 kernel: devtmpfs: initialized
Sep 13 00:24:51.098395 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:24:51.098414 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Sep 13 00:24:51.098433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:24:51.098452 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:24:51.098471 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:24:51.098490 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:24:51.098509 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:24:51.098528 kernel: audit: type=2000 audit(1757723090.072:1): state=initialized audit_enabled=0 res=1
Sep 13 00:24:51.098550 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:24:51.098570 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:24:51.098589 kernel: cpuidle: using governor menu
Sep 13 00:24:51.098608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:24:51.098626 kernel: dca service started, version 1.12.1
Sep 13 00:24:51.098645 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:24:51.098664 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:24:51.098683 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:24:51.098702 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:24:51.098724 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:24:51.098744 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:24:51.098762 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:24:51.098782 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:24:51.098800 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:24:51.098820 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 13 00:24:51.098838 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:24:51.098857 kernel: ACPI: Interpreter enabled
Sep 13 00:24:51.098876 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:24:51.098898 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:24:51.098918 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:24:51.098981 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 13 00:24:51.099022 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Sep 13 00:24:51.099045 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:24:51.099344 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:24:51.099562 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:24:51.099756 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:24:51.099781 kernel: PCI host bridge to bus 0000:00
Sep 13 00:24:51.100216 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:24:51.100414 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:24:51.100576 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:24:51.100733 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Sep 13 00:24:51.100890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:24:51.102770 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:24:51.103261 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Sep 13 00:24:51.103458 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:24:51.103638 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 13 00:24:51.103826 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Sep 13 00:24:51.104026 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Sep 13 00:24:51.104300 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Sep 13 00:24:51.104547 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:24:51.104758 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Sep 13 00:24:51.104983 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Sep 13 00:24:51.105197 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:24:51.105416 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Sep 13 00:24:51.105624 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Sep 13 00:24:51.105657 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:24:51.105678 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:24:51.105698 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:24:51.105717 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:24:51.105736 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:24:51.105755 kernel: iommu: Default domain type: Translated
Sep 13 00:24:51.105774 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:24:51.105793 kernel: efivars: Registered efivars operations
Sep 13 00:24:51.105812 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:24:51.105836 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:24:51.105855 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Sep 13 00:24:51.105875 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Sep 13 00:24:51.105893 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Sep 13 00:24:51.105912 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Sep 13 00:24:51.105931 kernel: vgaarb: loaded
Sep 13 00:24:51.105951 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:24:51.108052 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:24:51.108076 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:24:51.108101 kernel: pnp: PnP ACPI init
Sep 13 00:24:51.108118 kernel: pnp: PnP ACPI: found 7 devices
Sep 13 00:24:51.108134 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:24:51.108152 kernel: NET: Registered PF_INET protocol family
Sep 13 00:24:51.108170 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:24:51.108188 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 13 00:24:51.108206 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:24:51.108223 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:24:51.108240 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 13 00:24:51.108264 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 13 00:24:51.108282 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:24:51.108301 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 13 00:24:51.108332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:24:51.108352 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:24:51.108568 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:24:51.108754 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:24:51.108932 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:24:51.109141 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Sep 13 00:24:51.109359 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:24:51.109386 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:24:51.109406 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 00:24:51.109425 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Sep 13 00:24:51.109444 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:24:51.109462 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Sep 13 00:24:51.109480 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:24:51.109504 kernel: Initialise system trusted keyrings
Sep 13 00:24:51.109523 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 13 00:24:51.109540 kernel: Key type asymmetric registered
Sep 13 00:24:51.109559 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:24:51.109576 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:24:51.109594 kernel: io scheduler mq-deadline registered
Sep 13 00:24:51.109613 kernel: io scheduler kyber registered
Sep 13 00:24:51.109631 kernel: io scheduler bfq registered
Sep 13 00:24:51.109650 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:24:51.109673 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:24:51.109893 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Sep 13 00:24:51.109922 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Sep 13 00:24:51.112185 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Sep 13 00:24:51.112221 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:24:51.112506 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Sep 13 00:24:51.112535 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:24:51.112555 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:24:51.112575 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 00:24:51.112600 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Sep 13 00:24:51.112620 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Sep 13 00:24:51.112823 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Sep 13 00:24:51.112850 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:24:51.112868 kernel: i8042: Warning: Keylock active
Sep 13 00:24:51.112886 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:24:51.112904 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:24:51.113147 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 13 00:24:51.113343 kernel: rtc_cmos 00:00: registered as rtc0
Sep 13 00:24:51.113518 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:24:50 UTC (1757723090)
Sep 13 00:24:51.113691 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 13 00:24:51.113715 kernel: intel_pstate: CPU model not supported
Sep 13 00:24:51.113735 kernel: pstore: Using crash dump compression: deflate
Sep 13 00:24:51.113754 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 13 00:24:51.113773 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:24:51.113796 kernel: Segment Routing with IPv6
Sep 13 00:24:51.113812 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:24:51.113831 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:24:51.113848 kernel: Key type dns_resolver registered
Sep 13 00:24:51.113866 kernel: IPI shorthand broadcast: enabled
Sep 13 00:24:51.113886 kernel: sched_clock: Marking stable (829004884, 129884129)->(973405787, -14516774)
Sep 13 00:24:51.113904 kernel: registered taskstats version 1
Sep 13 00:24:51.113923 kernel: Loading compiled-in X.509 certificates
Sep 13 00:24:51.113942 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:24:51.116000 kernel: Key type .fscrypt registered
Sep 13 00:24:51.116034 kernel: Key type fscrypt-provisioning registered
Sep 13 00:24:51.116054 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:24:51.116072 kernel: ima: No architecture policies found
Sep 13 00:24:51.116091 kernel: clk: Disabling unused clocks
Sep 13 00:24:51.116110 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:24:51.116128 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:24:51.116145 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:24:51.116162 kernel: Run /init as init process
Sep 13 00:24:51.116186 kernel: with arguments:
Sep 13 00:24:51.116205 kernel: /init
Sep 13 00:24:51.116223 kernel: with environment:
Sep 13 00:24:51.116241 kernel: HOME=/
Sep 13 00:24:51.116258 kernel: TERM=linux
Sep 13 00:24:51.116277 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:24:51.116295 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:24:51.116326 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:24:51.116354 systemd[1]: Detected virtualization google.
Sep 13 00:24:51.116375 systemd[1]: Detected architecture x86-64.
Sep 13 00:24:51.116395 systemd[1]: Running in initrd.
Sep 13 00:24:51.116411 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:24:51.116429 systemd[1]: Hostname set to .
Sep 13 00:24:51.116448 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:24:51.116465 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:24:51.116483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:24:51.116508 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:24:51.116529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:24:51.116549 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:24:51.116570 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:24:51.116591 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:24:51.116611 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:24:51.116634 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:24:51.116653 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:24:51.116672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:24:51.116713 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:24:51.116737 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:24:51.116757 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:24:51.116777 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:24:51.116800 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:24:51.116821 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:24:51.116841 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:24:51.116861 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:24:51.116881 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:24:51.116902 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:24:51.116921 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:24:51.116940 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:24:51.116991 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:24:51.117011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:24:51.117031 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:24:51.117051 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:24:51.117071 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:24:51.117091 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:24:51.117112 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:24:51.117133 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:24:51.117191 systemd-journald[183]: Collecting audit messages is disabled.
Sep 13 00:24:51.117240 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:24:51.117263 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:24:51.117292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:24:51.117324 systemd-journald[183]: Journal started
Sep 13 00:24:51.117362 systemd-journald[183]: Runtime Journal (/run/log/journal/d76a189a6acd435ea71e5da9dd4d3ad2) is 8.0M, max 148.7M, 140.7M free.
Sep 13 00:24:51.099353 systemd-modules-load[184]: Inserted module 'overlay'
Sep 13 00:24:51.122992 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:24:51.141183 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:24:51.145440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:24:51.161129 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:24:51.161166 kernel: Bridge firewalling registered
Sep 13 00:24:51.155229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:24:51.160063 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 13 00:24:51.170621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:24:51.183286 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:24:51.183644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:24:51.194181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:24:51.198177 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:24:51.212010 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:24:51.217072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:24:51.225626 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:24:51.239221 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:24:51.244795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:24:51.276131 dracut-cmdline[216]: dracut-dracut-053
Sep 13 00:24:51.280921 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:24:51.307215 systemd-resolved[217]: Positive Trust Anchors:
Sep 13 00:24:51.307724 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:24:51.307792 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:24:51.314310 systemd-resolved[217]: Defaulting to hostname 'linux'.
Sep 13 00:24:51.318007 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:24:51.331682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:24:51.382996 kernel: SCSI subsystem initialized
Sep 13 00:24:51.394994 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:24:51.408000 kernel: iscsi: registered transport (tcp)
Sep 13 00:24:51.432017 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:24:51.432098 kernel: QLogic iSCSI HBA Driver
Sep 13 00:24:51.484575 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:24:51.491187 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:24:51.530081 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:24:51.530163 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:24:51.530192 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:24:51.575010 kernel: raid6: avx2x4 gen() 18310 MB/s
Sep 13 00:24:51.592002 kernel: raid6: avx2x2 gen() 18506 MB/s
Sep 13 00:24:51.609378 kernel: raid6: avx2x1 gen() 14271 MB/s
Sep 13 00:24:51.609428 kernel: raid6: using algorithm avx2x2 gen() 18506 MB/s
Sep 13 00:24:51.627370 kernel: raid6: .... xor() 17552 MB/s, rmw enabled
Sep 13 00:24:51.627409 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:24:51.650004 kernel: xor: automatically using best checksumming function avx
Sep 13 00:24:51.822003 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:24:51.834912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:24:51.841233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:24:51.875760 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Sep 13 00:24:51.883233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:24:51.891175 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:24:51.921337 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Sep 13 00:24:51.957549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:24:51.968176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:24:52.065002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:24:52.076144 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:24:52.111364 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:24:52.117113 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:24:52.128087 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:24:52.136112 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:24:52.145195 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:24:52.180841 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:24:52.228158 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:24:52.266770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:24:52.267780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:24:52.288654 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:24:52.288742 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:24:52.288768 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:24:52.295683 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:24:52.298687 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Sep 13 00:24:52.300643 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:24:52.301946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:24:52.306118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:24:52.317374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:24:52.353991 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Sep 13 00:24:52.354364 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Sep 13 00:24:52.357766 kernel: sd 0:0:1:0: [sda] Write Protect is off
Sep 13 00:24:52.358091 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Sep 13 00:24:52.358333 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 13 00:24:52.358809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:24:52.365242 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:24:52.375105 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:24:52.375144 kernel: GPT:17805311 != 25165823
Sep 13 00:24:52.375169 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:24:52.375192 kernel: GPT:17805311 != 25165823
Sep 13 00:24:52.375222 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:24:52.375247 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:24:52.375273 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Sep 13 00:24:52.403666 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:24:52.432989 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (452)
Sep 13 00:24:52.439980 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (469)
Sep 13 00:24:52.446133 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Sep 13 00:24:52.466066 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Sep 13 00:24:52.478757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 13 00:24:52.485437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Sep 13 00:24:52.485648 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Sep 13 00:24:52.501357 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:24:52.527244 disk-uuid[549]: Primary Header is updated.
Sep 13 00:24:52.527244 disk-uuid[549]: Secondary Entries is updated.
Sep 13 00:24:52.527244 disk-uuid[549]: Secondary Header is updated.
Sep 13 00:24:52.540060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:24:52.550007 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:24:53.557984 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:24:53.561065 disk-uuid[550]: The operation has completed successfully.
Sep 13 00:24:53.634079 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:24:53.634229 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:24:53.657176 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:24:53.694062 sh[564]: Success
Sep 13 00:24:53.718991 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:24:53.794124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:24:53.801103 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:24:53.828471 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:24:53.872107 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:24:53.872190 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:24:53.872216 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:24:53.888378 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:24:53.888464 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:24:53.918992 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:24:53.924191 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:24:53.925139 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:24:53.931150 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:24:53.989051 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:24:53.989151 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:24:53.989178 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:24:54.000219 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:24:54.026163 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:24:54.026210 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:24:54.044018 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:24:54.061600 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:24:54.089219 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:24:54.150765 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:24:54.168177 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:24:54.274998 ignition[692]: Ignition 2.19.0
Sep 13 00:24:54.275015 ignition[692]: Stage: fetch-offline
Sep 13 00:24:54.275731 systemd-networkd[746]: lo: Link UP
Sep 13 00:24:54.275069 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:24:54.275739 systemd-networkd[746]: lo: Gained carrier
Sep 13 00:24:54.275095 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:24:54.278238 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:24:54.275250 ignition[692]: parsed url from cmdline: ""
Sep 13 00:24:54.278614 systemd-networkd[746]: Enumeration completed
Sep 13 00:24:54.275257 ignition[692]: no config URL provided
Sep 13 00:24:54.279515 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:24:54.275267 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:24:54.279522 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:24:54.275282 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:24:54.281333 systemd-networkd[746]: eth0: Link UP
Sep 13 00:24:54.275293 ignition[692]: failed to fetch config: resource requires networking
Sep 13 00:24:54.281341 systemd-networkd[746]: eth0: Gained carrier
Sep 13 00:24:54.275582 ignition[692]: Ignition finished successfully
Sep 13 00:24:54.281354 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:24:54.370439 ignition[756]: Ignition 2.19.0
Sep 13 00:24:54.297026 systemd-networkd[746]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf.c.flatcar-212911.internal' to 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf'
Sep 13 00:24:54.370450 ignition[756]: Stage: fetch
Sep 13 00:24:54.297041 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 13 00:24:54.370670 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:24:54.299414 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:24:54.370687 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:24:54.316333 systemd[1]: Reached target network.target - Network.
Sep 13 00:24:54.370793 ignition[756]: parsed url from cmdline: ""
Sep 13 00:24:54.339384 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:24:54.370797 ignition[756]: no config URL provided
Sep 13 00:24:54.378586 unknown[756]: fetched base config from "system"
Sep 13 00:24:54.370803 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:24:54.378598 unknown[756]: fetched base config from "system"
Sep 13 00:24:54.370813 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:24:54.378609 unknown[756]: fetched user config from "gcp"
Sep 13 00:24:54.370833 ignition[756]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Sep 13 00:24:54.381775 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:24:54.374023 ignition[756]: GET result: OK
Sep 13 00:24:54.416208 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:24:54.374119 ignition[756]: parsing config with SHA512: e0507c81b1156a7b5db59b2cdc2c1325ee223aaad3560ad813f5d332b8eed51175fd440d931f9f82f523eca916177fa97d61397ba5af104b3f3304862f5b1293
Sep 13 00:24:54.466383 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:24:54.379683 ignition[756]: fetch: fetch complete
Sep 13 00:24:54.487284 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:24:54.379693 ignition[756]: fetch: fetch passed
Sep 13 00:24:54.548001 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:24:54.379764 ignition[756]: Ignition finished successfully
Sep 13 00:24:54.581856 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:24:54.463980 ignition[763]: Ignition 2.19.0
Sep 13 00:24:54.603125 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:24:54.463993 ignition[763]: Stage: kargs
Sep 13 00:24:54.622097 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:24:54.464240 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:24:54.637103 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:24:54.464253 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:24:54.654094 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:24:54.465210 ignition[763]: kargs: kargs passed
Sep 13 00:24:54.674176 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:24:54.465265 ignition[763]: Ignition finished successfully
Sep 13 00:24:54.545505 ignition[769]: Ignition 2.19.0
Sep 13 00:24:54.545513 ignition[769]: Stage: disks
Sep 13 00:24:54.545745 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:24:54.545758 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:24:54.546824 ignition[769]: disks: disks passed
Sep 13 00:24:54.546877 ignition[769]: Ignition finished successfully
Sep 13 00:24:54.720586 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 13 00:24:54.899105 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:24:54.905101 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:24:55.059328 kernel: EXT4-fs (sda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:24:55.060273 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:24:55.061140 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:24:55.080085 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:24:55.107421 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:24:55.124551 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:24:55.195143 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (785)
Sep 13 00:24:55.195191 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:24:55.195350 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:24:55.195379 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:24:55.195405 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:24:55.195430 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:24:55.124636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:24:55.124681 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:24:55.187206 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:24:55.205599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:24:55.236192 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:24:55.355409 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:24:55.367121 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:24:55.378097 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:24:55.389076 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:24:55.506806 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:24:55.537116 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:24:55.564126 kernel: BTRFS info (device sda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:24:55.559205 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:24:55.582419 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:24:55.597915 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:24:55.616572 ignition[899]: INFO : Ignition 2.19.0
Sep 13 00:24:55.616572 ignition[899]: INFO : Stage: mount
Sep 13 00:24:55.641512 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:24:55.641512 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Sep 13 00:24:55.641512 ignition[899]: INFO : mount: mount passed
Sep 13 00:24:55.641512 ignition[899]: INFO : Ignition finished successfully
Sep 13 00:24:55.619094 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:24:55.637151 systemd-networkd[746]: eth0: Gained IPv6LL
Sep 13 00:24:55.639109 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:24:56.070225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:24:56.103159 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (910)
Sep 13 00:24:56.103200 kernel: BTRFS info (device sda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:24:56.103226 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:24:56.116687 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:24:56.133299 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:24:56.133363 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:24:56.136892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:24:56.170422 ignition[927]: INFO : Ignition 2.19.0 Sep 13 00:24:56.170422 ignition[927]: INFO : Stage: files Sep 13 00:24:56.185105 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:24:56.185105 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:24:56.185105 ignition[927]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:24:56.185105 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:24:56.185105 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:24:56.185105 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:24:56.185105 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:24:56.185105 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:24:56.185105 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:24:56.185105 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 13 00:24:56.180460 unknown[927]: wrote ssh authorized keys file for user: core Sep 13 00:24:56.342710 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:24:57.502384 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 
00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:24:57.520146 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 13 00:24:57.902444 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:24:58.426599 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 13 00:24:58.426599 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:24:58.464128 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:24:58.464128 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:24:58.464128 ignition[927]: INFO : files: files passed Sep 13 00:24:58.464128 ignition[927]: INFO : Ignition finished successfully Sep 13 00:24:58.430945 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:24:58.451287 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:24:58.494182 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 13 00:24:58.517450 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:24:58.682147 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:24:58.682147 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:24:58.517576 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:24:58.740168 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:24:58.566356 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:24:58.592282 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:24:58.621174 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:24:58.690708 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:24:58.690980 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:24:58.696440 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:24:58.730253 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:24:58.750323 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:24:58.757300 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:24:58.841146 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:24:58.864182 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:24:58.900486 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:24:58.915270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 13 00:24:58.937350 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:24:58.955259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:24:58.955471 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:24:59.002179 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:24:59.002562 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:24:59.019471 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:24:59.034449 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:24:59.052472 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:24:59.071472 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:24:59.089433 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:24:59.106462 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:24:59.127500 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:24:59.144455 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:24:59.161369 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:24:59.161583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:24:59.192390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:24:59.202430 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:24:59.220365 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:24:59.220545 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:24:59.239409 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:24:59.239602 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 13 00:24:59.278410 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:24:59.278629 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:24:59.286433 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:24:59.286602 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:24:59.354220 ignition[980]: INFO : Ignition 2.19.0 Sep 13 00:24:59.354220 ignition[980]: INFO : Stage: umount Sep 13 00:24:59.354220 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:24:59.354220 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Sep 13 00:24:59.354220 ignition[980]: INFO : umount: umount passed Sep 13 00:24:59.354220 ignition[980]: INFO : Ignition finished successfully Sep 13 00:24:59.312306 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:24:59.362111 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:24:59.362379 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:24:59.387450 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:24:59.422103 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:24:59.422363 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:24:59.462440 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:24:59.462617 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:24:59.495626 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:24:59.496746 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:24:59.496862 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:24:59.503741 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 13 00:24:59.503856 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:24:59.523580 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:24:59.523712 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:24:59.541281 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:24:59.541342 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:24:59.568198 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:24:59.568279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:24:59.586195 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:24:59.586278 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:24:59.604165 systemd[1]: Stopped target network.target - Network. Sep 13 00:24:59.619077 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:24:59.619185 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:24:59.638283 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:24:59.654175 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:24:59.660031 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:24:59.662228 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:24:59.683266 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:24:59.716167 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:24:59.716249 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:24:59.734158 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:24:59.734239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:24:59.752149 systemd[1]: ignition-setup.service: Deactivated successfully. 
Sep 13 00:24:59.752247 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:24:59.770208 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:24:59.770308 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:24:59.788154 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:24:59.788243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:24:59.806419 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:24:59.812038 systemd-networkd[746]: eth0: DHCPv6 lease lost Sep 13 00:24:59.817458 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:24:59.845634 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:24:59.845768 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:24:59.855211 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:24:59.855453 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:24:59.873001 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:24:59.873094 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:24:59.899091 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:24:59.925045 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:24:59.925148 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:24:59.943189 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:24:59.943269 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:24:59.961146 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:24:59.961234 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 13 00:24:59.979190 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:25:00.351071 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Sep 13 00:24:59.979287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:25:00.000380 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:25:00.013445 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:25:00.013678 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:25:00.026490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:25:00.026674 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:25:00.054328 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:25:00.054401 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:25:00.071309 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:25:00.071393 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:25:00.109254 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:25:00.109486 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:25:00.134310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:25:00.134392 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:25:00.167168 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:25:00.179056 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:25:00.179153 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:25:00.190155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 13 00:25:00.190240 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:25:00.201649 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:25:00.201771 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:25:00.221414 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:25:00.221524 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:25:00.240732 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:25:00.266182 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:25:00.302442 systemd[1]: Switching root. Sep 13 00:25:00.605077 systemd-journald[183]: Journal stopped Sep 13 00:25:03.056592 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:25:03.056647 kernel: SELinux: policy capability open_perms=1 Sep 13 00:25:03.056670 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:25:03.056688 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:25:03.056706 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:25:03.056726 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:25:03.056748 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:25:03.056773 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:25:03.056793 kernel: audit: type=1403 audit(1757723100.960:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:25:03.056817 systemd[1]: Successfully loaded SELinux policy in 90.816ms. Sep 13 00:25:03.056840 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.918ms. 
Sep 13 00:25:03.056863 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:25:03.056884 systemd[1]: Detected virtualization google. Sep 13 00:25:03.056905 systemd[1]: Detected architecture x86-64. Sep 13 00:25:03.056931 systemd[1]: Detected first boot. Sep 13 00:25:03.056953 systemd[1]: Initializing machine ID from random generator. Sep 13 00:25:03.057009 zram_generator::config[1021]: No configuration found. Sep 13 00:25:03.057031 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:25:03.057052 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:25:03.057078 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:25:03.057099 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:25:03.057122 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:25:03.057142 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:25:03.057162 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:25:03.057185 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:25:03.057206 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:25:03.057236 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:25:03.057258 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:25:03.057279 systemd[1]: Created slice user.slice - User and Session Slice. 
Sep 13 00:25:03.057300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:25:03.057322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:25:03.057344 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:25:03.057374 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:25:03.057396 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:25:03.057423 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:25:03.057445 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 00:25:03.057468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:25:03.057490 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:25:03.057512 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:25:03.057535 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:25:03.057563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:25:03.057586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:25:03.057610 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:25:03.057636 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:25:03.057659 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:25:03.057682 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:25:03.057703 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:25:03.057728 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 13 00:25:03.057751 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:25:03.057774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:25:03.057803 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:25:03.057824 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:25:03.057846 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:25:03.057869 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:25:03.057892 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:25:03.057920 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:25:03.057944 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:25:03.057984 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:25:03.058009 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:25:03.058032 systemd[1]: Reached target machines.target - Containers. Sep 13 00:25:03.058055 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:25:03.058077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:25:03.058101 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:25:03.058129 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:25:03.058152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:25:03.058174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 13 00:25:03.058196 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:25:03.058219 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:25:03.058242 kernel: fuse: init (API version 7.39) Sep 13 00:25:03.058264 kernel: ACPI: bus type drm_connector registered Sep 13 00:25:03.058285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:25:03.058310 kernel: loop: module loaded Sep 13 00:25:03.058329 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:25:03.058350 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:25:03.058381 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:25:03.058402 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:25:03.058423 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:25:03.058446 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:25:03.058469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:25:03.058525 systemd-journald[1108]: Collecting audit messages is disabled. Sep 13 00:25:03.058576 systemd-journald[1108]: Journal started Sep 13 00:25:03.058621 systemd-journald[1108]: Runtime Journal (/run/log/journal/d131d82bf4e641fb87bad28dbd78a3f7) is 8.0M, max 148.7M, 140.7M free. Sep 13 00:25:03.071155 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:25:01.818742 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:25:01.847953 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 13 00:25:01.848544 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 13 00:25:03.097997 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:25:03.137660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:25:03.137731 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:25:03.137751 systemd[1]: Stopped verity-setup.service. Sep 13 00:25:03.167995 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:25:03.178014 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:25:03.189487 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:25:03.199302 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:25:03.209321 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:25:03.219296 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:25:03.229304 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:25:03.239285 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:25:03.250618 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:25:03.262600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:25:03.274601 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:25:03.275037 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:25:03.286546 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:25:03.286779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:25:03.298459 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:25:03.298696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 13 00:25:03.308432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:25:03.308671 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:25:03.320436 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:25:03.320676 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:25:03.331453 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:25:03.331688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:25:03.341431 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:25:03.351553 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:25:03.363459 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:25:03.375514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:25:03.400598 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:25:03.419122 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:25:03.430428 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:25:03.440120 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:25:03.440328 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:25:03.452051 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:25:03.469190 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:25:03.492263 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 13 00:25:03.503257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:25:03.512217 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:25:03.539321 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:25:03.551159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:25:03.561219 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:25:03.572160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:25:03.577314 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:25:03.584127 systemd-journald[1108]: Time spent on flushing to /var/log/journal/d131d82bf4e641fb87bad28dbd78a3f7 is 139.023ms for 927 entries. Sep 13 00:25:03.584127 systemd-journald[1108]: System Journal (/var/log/journal/d131d82bf4e641fb87bad28dbd78a3f7) is 8.0M, max 584.8M, 576.8M free. Sep 13 00:25:03.753032 systemd-journald[1108]: Received client request to flush runtime journal. Sep 13 00:25:03.753102 kernel: loop0: detected capacity change from 0 to 142488 Sep 13 00:25:03.602178 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:25:03.620201 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:25:03.642123 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:25:03.657471 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:25:03.669308 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Sep 13 00:25:03.686487 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:25:03.698570 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:25:03.710869 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:25:03.726532 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:25:03.746280 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:25:03.758804 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:25:03.781196 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:25:03.795990 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:25:03.810808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:25:03.821659 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:25:03.822754 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:25:03.840998 kernel: loop1: detected capacity change from 0 to 221472
Sep 13 00:25:03.843707 udevadm[1142]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:25:03.912747 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Sep 13 00:25:03.918026 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Sep 13 00:25:03.943144 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:25:03.973313 kernel: loop2: detected capacity change from 0 to 54824
Sep 13 00:25:04.052040 kernel: loop3: detected capacity change from 0 to 140768
Sep 13 00:25:04.156997 kernel: loop4: detected capacity change from 0 to 142488
Sep 13 00:25:04.210086 kernel: loop5: detected capacity change from 0 to 221472
Sep 13 00:25:04.250016 kernel: loop6: detected capacity change from 0 to 54824
Sep 13 00:25:04.288354 kernel: loop7: detected capacity change from 0 to 140768
Sep 13 00:25:04.333670 (sd-merge)[1164]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Sep 13 00:25:04.335630 (sd-merge)[1164]: Merged extensions into '/usr'.
Sep 13 00:25:04.345431 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:25:04.345452 systemd[1]: Reloading...
Sep 13 00:25:04.509010 zram_generator::config[1186]: No configuration found.
Sep 13 00:25:04.588441 ldconfig[1134]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:25:04.764233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:25:04.868506 systemd[1]: Reloading finished in 521 ms.
Sep 13 00:25:04.900055 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:25:04.910730 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:25:04.935206 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:25:04.951200 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:25:04.973094 systemd[1]: Reloading requested from client PID 1230 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:25:04.973126 systemd[1]: Reloading...
Sep 13 00:25:05.001843 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:25:05.003194 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:25:05.005729 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:25:05.006454 systemd-tmpfiles[1231]: ACLs are not supported, ignoring.
Sep 13 00:25:05.007238 systemd-tmpfiles[1231]: ACLs are not supported, ignoring.
Sep 13 00:25:05.016926 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:25:05.016949 systemd-tmpfiles[1231]: Skipping /boot
Sep 13 00:25:05.038713 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:25:05.038734 systemd-tmpfiles[1231]: Skipping /boot
Sep 13 00:25:05.107999 zram_generator::config[1257]: No configuration found.
Sep 13 00:25:05.243693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:25:05.308518 systemd[1]: Reloading finished in 334 ms.
Sep 13 00:25:05.333814 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:25:05.349573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:25:05.375239 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:25:05.393249 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:25:05.409491 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:25:05.432225 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:25:05.450170 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:25:05.455504 augenrules[1318]: No rules
Sep 13 00:25:05.469157 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:25:05.482971 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:25:05.493599 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:25:05.515454 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Sep 13 00:25:05.524243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:25:05.524662 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:25:05.534351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:25:05.550312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:25:05.568859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:25:05.579257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:25:05.587466 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:25:05.604353 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:25:05.614059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:25:05.621056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:25:05.634143 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:25:05.646814 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:25:05.658793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:25:05.659028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:25:05.670803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:25:05.671793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:25:05.684760 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:25:05.685045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:25:05.695838 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:25:05.710709 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:25:05.761301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:25:05.761630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:25:05.771320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:25:05.790399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:25:05.807354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:25:05.816024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:25:05.827326 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:25:05.837106 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:25:05.837314 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:25:05.841046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:25:05.841238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:25:05.853228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:25:05.854278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:25:05.866880 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:25:05.869279 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:25:05.881364 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 00:25:05.909531 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:25:05.910477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:25:05.917141 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1336)
Sep 13 00:25:05.916264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:25:05.937176 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:25:05.956229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:25:05.976207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:25:05.991197 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 13 00:25:05.999231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:25:05.999355 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:25:06.009134 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:25:06.009175 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:25:06.010226 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:25:06.019758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:25:06.020020 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:25:06.030954 systemd-resolved[1316]: Positive Trust Anchors:
Sep 13 00:25:06.031568 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:25:06.031779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:25:06.032359 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:25:06.032426 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:25:06.041680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:25:06.041929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:25:06.051662 systemd-resolved[1316]: Defaulting to hostname 'linux'.
Sep 13 00:25:06.053644 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:25:06.054813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:25:06.055813 systemd-networkd[1371]: lo: Link UP
Sep 13 00:25:06.055820 systemd-networkd[1371]: lo: Gained carrier
Sep 13 00:25:06.058636 systemd-networkd[1371]: Enumeration completed
Sep 13 00:25:06.062596 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:25:06.063127 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:25:06.065288 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:25:06.066891 systemd-networkd[1371]: eth0: Link UP
Sep 13 00:25:06.067041 systemd-networkd[1371]: eth0: Gained carrier
Sep 13 00:25:06.067149 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:25:06.075331 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:25:06.079081 systemd-networkd[1371]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf.c.flatcar-212911.internal' to 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf'
Sep 13 00:25:06.079119 systemd-networkd[1371]: eth0: DHCPv4 address 10.128.0.49/32, gateway 10.128.0.1 acquired from 169.254.169.254
Sep 13 00:25:06.115592 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 00:25:06.132985 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:25:06.134480 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 13 00:25:06.161268 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 13 00:25:06.161663 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep 13 00:25:06.161240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Sep 13 00:25:06.176180 kernel: ACPI: button: Sleep Button [SLPF]
Sep 13 00:25:06.198390 systemd[1]: Reached target network.target - Network.
Sep 13 00:25:06.214998 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Sep 13 00:25:06.218555 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:25:06.238234 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Sep 13 00:25:06.254048 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:25:06.264487 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:25:06.268366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:25:06.286182 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:25:06.297110 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:25:06.297217 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:25:06.323215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:25:06.338320 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:25:06.350664 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:25:06.351231 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Sep 13 00:25:06.362193 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:25:06.380210 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:25:06.416139 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:25:06.416591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:25:06.422224 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:25:06.434626 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:25:06.456290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:25:06.467342 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:25:06.477262 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:25:06.488138 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:25:06.499290 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:25:06.509201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:25:06.520088 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:25:06.531080 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:25:06.531149 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:25:06.539056 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:25:06.549022 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:25:06.560735 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:25:06.574396 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:25:06.585026 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:25:06.596301 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:25:06.606891 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:25:06.617089 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:25:06.626154 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:25:06.626203 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:25:06.635094 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:25:06.657197 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:25:06.684502 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:25:06.701174 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:25:06.719211 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:25:06.729096 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:25:06.733737 jq[1429]: false
Sep 13 00:25:06.735190 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:25:06.745862 coreos-metadata[1425]: Sep 13 00:25:06.745 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Sep 13 00:25:06.747701 coreos-metadata[1425]: Sep 13 00:25:06.747 INFO Fetch successful
Sep 13 00:25:06.747701 coreos-metadata[1425]: Sep 13 00:25:06.747 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Sep 13 00:25:06.748146 coreos-metadata[1425]: Sep 13 00:25:06.748 INFO Fetch successful
Sep 13 00:25:06.750067 coreos-metadata[1425]: Sep 13 00:25:06.749 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Sep 13 00:25:06.750067 coreos-metadata[1425]: Sep 13 00:25:06.749 INFO Fetch successful
Sep 13 00:25:06.750067 coreos-metadata[1425]: Sep 13 00:25:06.749 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Sep 13 00:25:06.750067 coreos-metadata[1425]: Sep 13 00:25:06.749 INFO Fetch successful
Sep 13 00:25:06.752241 systemd[1]: Started ntpd.service - Network Time Service.
Sep 13 00:25:06.769169 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:25:06.787195 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found loop4
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found loop5
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found loop6
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found loop7
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found sda
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found sda1
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found sda2
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found sda3
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found usr
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found sda4
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found sda6
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found sda7
Sep 13 00:25:06.806722 extend-filesystems[1430]: Found sda9
Sep 13 00:25:06.806722 extend-filesystems[1430]: Checking size of /dev/sda9
Sep 13 00:25:06.984147 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Sep 13 00:25:06.984198 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Sep 13 00:25:06.984233 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1336)
Sep 13 00:25:06.806221 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:25:06.984428 extend-filesystems[1430]: Resized partition /dev/sda9
Sep 13 00:25:06.818016 dbus-daemon[1426]: [system] SELinux support is enabled
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: ----------------------------------------------------
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: ntp-4 is maintained by Network Time Foundation,
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: corporation. Support and training for ntp-4 are
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: available at https://www.nwtime.org/support
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: ----------------------------------------------------
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: proto: precision = 0.082 usec (-23)
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: basedate set to 2025-08-31
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: gps base set to 2025-08-31 (week 2382)
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: Listen normally on 3 eth0 10.128.0.49:123
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: Listen normally on 4 lo [::1]:123
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:31%2#123
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:31%2
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: Listening on routing socket on fd #21 for interface updates
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:25:06.993505 ntpd[1433]: 13 Sep 00:25:06 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:25:06.829301 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:25:06.994864 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:25:06.994864 extend-filesystems[1450]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Sep 13 00:25:06.994864 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 2
Sep 13 00:25:06.994864 extend-filesystems[1450]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Sep 13 00:25:06.827199 dbus-daemon[1426]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1371 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 13 00:25:06.869714 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Sep 13 00:25:07.080722 extend-filesystems[1430]: Resized filesystem in /dev/sda9
Sep 13 00:25:06.833365 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 21:58:26 UTC 2025 (1): Starting
Sep 13 00:25:06.871986 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:25:06.833397 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 13 00:25:06.877784 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:25:07.094148 update_engine[1453]: I20250913 00:25:06.993147 1453 main.cc:92] Flatcar Update Engine starting
Sep 13 00:25:07.094148 update_engine[1453]: I20250913 00:25:06.999212 1453 update_check_scheduler.cc:74] Next update check in 7m50s
Sep 13 00:25:06.833413 ntpd[1433]: ----------------------------------------------------
Sep 13 00:25:06.901086 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:25:07.094736 jq[1457]: true
Sep 13 00:25:06.833429 ntpd[1433]: ntp-4 is maintained by Network Time Foundation,
Sep 13 00:25:06.924798 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:25:06.833443 ntpd[1433]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 13 00:25:06.966563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:25:06.833458 ntpd[1433]: corporation. Support and training for ntp-4 are
Sep 13 00:25:06.966847 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:25:06.833476 ntpd[1433]: available at https://www.nwtime.org/support
Sep 13 00:25:06.967493 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:25:06.833492 ntpd[1433]: ----------------------------------------------------
Sep 13 00:25:06.967758 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:25:06.838196 ntpd[1433]: proto: precision = 0.082 usec (-23)
Sep 13 00:25:06.998887 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:25:06.839660 ntpd[1433]: basedate set to 2025-08-31
Sep 13 00:25:06.999165 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:25:06.839684 ntpd[1433]: gps base set to 2025-08-31 (week 2382)
Sep 13 00:25:07.004339 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:25:06.844238 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123
Sep 13 00:25:07.005079 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:25:06.844299 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 13 00:25:07.040212 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:25:06.844543 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123
Sep 13 00:25:07.040242 systemd-logind[1446]: Watching system buttons on /dev/input/event2 (Sleep Button)
Sep 13 00:25:06.844604 ntpd[1433]: Listen normally on 3 eth0 10.128.0.49:123
Sep 13 00:25:07.040272 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:25:06.844666 ntpd[1433]: Listen normally on 4 lo [::1]:123
Sep 13 00:25:07.040592 systemd-logind[1446]: New seat seat0.
Sep 13 00:25:06.844729 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address
Sep 13 00:25:07.045459 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:25:06.844757 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:31%2#123
Sep 13 00:25:07.102066 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:25:06.844778 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:31%2
Sep 13 00:25:06.844818 ntpd[1433]: Listening on routing socket on fd #21 for interface updates
Sep 13 00:25:06.848708 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:25:06.848745 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 13 00:25:07.111362 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 00:25:07.116686 jq[1463]: true
Sep 13 00:25:07.118075 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 13 00:25:07.185610 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:25:07.189994 tar[1462]: linux-amd64/helm
Sep 13 00:25:07.204235 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 13 00:25:07.216317 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:25:07.216580 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:25:07.216803 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:25:07.242266 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 13 00:25:07.252186 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:25:07.252441 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:25:07.272631 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:25:07.277086 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:25:07.289042 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:25:07.315351 systemd[1]: Starting sshkeys.service...
Sep 13 00:25:07.374659 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 13 00:25:07.399194 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 13 00:25:07.541617 coreos-metadata[1499]: Sep 13 00:25:07.541 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Sep 13 00:25:07.543682 coreos-metadata[1499]: Sep 13 00:25:07.542 INFO Fetch failed with 404: resource not found
Sep 13 00:25:07.543682 coreos-metadata[1499]: Sep 13 00:25:07.542 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Sep 13 00:25:07.544727 coreos-metadata[1499]: Sep 13 00:25:07.544 INFO Fetch successful
Sep 13 00:25:07.544727 coreos-metadata[1499]: Sep 13 00:25:07.544 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Sep 13 00:25:07.560091 coreos-metadata[1499]: Sep 13 00:25:07.553 INFO Fetch failed with 404: resource not found
Sep 13 00:25:07.560091 coreos-metadata[1499]: Sep 13 00:25:07.553 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Sep 13 00:25:07.560551 coreos-metadata[1499]: Sep 13 00:25:07.560 INFO Fetch failed with 404: resource not found
Sep 13 00:25:07.560551 coreos-metadata[1499]: Sep 13 00:25:07.560 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Sep 13 00:25:07.561685 coreos-metadata[1499]: Sep 13 00:25:07.561 INFO Fetch successful
Sep 13 00:25:07.568052 unknown[1499]: wrote ssh authorized keys file for user: core
Sep 13 00:25:07.635068 update-ssh-keys[1506]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:25:07.638067 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 13 00:25:07.644192 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 13 00:25:07.644738 dbus-daemon[1426]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1495 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 13 00:25:07.650096 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 13 00:25:07.660933 systemd[1]: Finished sshkeys.service.
Sep 13 00:25:07.685796 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 13 00:25:07.752228 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:25:07.781134 polkitd[1513]: Started polkitd version 121
Sep 13 00:25:07.785786 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:25:07.793845 polkitd[1513]: Loading rules from directory /etc/polkit-1/rules.d
Sep 13 00:25:07.795550 polkitd[1513]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 13 00:25:07.798557 polkitd[1513]: Finished loading, compiling and executing 2 rules
Sep 13 00:25:07.799549 systemd[1]: Started polkit.service - Authorization Manager.
Sep 13 00:25:07.799212 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 13 00:25:07.800395 polkitd[1513]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 13 00:25:07.835761 ntpd[1433]: bind(24) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address
Sep 13 00:25:07.837396 ntpd[1433]: 13 Sep 00:25:07 ntpd[1433]: bind(24) AF_INET6 fe80::4001:aff:fe80:31%2#123 flags 0x11 failed: Cannot assign requested address
Sep 13 00:25:07.837396 ntpd[1433]: 13 Sep 00:25:07 ntpd[1433]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:31%2#123
Sep 13 00:25:07.837396 ntpd[1433]: 13 Sep 00:25:07 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:31%2
Sep 13 00:25:07.835826 ntpd[1433]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:31%2#123
Sep 13 00:25:07.835849 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:31%2
Sep 13 00:25:07.847630 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:25:07.866156 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:25:07.869383 systemd-hostnamed[1495]: Hostname set to (transient)
Sep 13 00:25:07.870152 systemd-resolved[1316]: System hostname changed to 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf'.
Sep 13 00:25:07.881371 systemd[1]: Started sshd@0-10.128.0.49:22-147.75.109.163:54712.service - OpenSSH per-connection server daemon (147.75.109.163:54712).
Sep 13 00:25:07.913807 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:25:07.914151 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:25:07.915268 containerd[1464]: time="2025-09-13T00:25:07.915157246Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:25:07.940359 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:25:08.007070 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:25:08.022675 containerd[1464]: time="2025-09-13T00:25:08.022571974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:25:08.025361 containerd[1464]: time="2025-09-13T00:25:08.025315357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:25:08.025527 containerd[1464]: time="2025-09-13T00:25:08.025490157Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:25:08.025648 containerd[1464]: time="2025-09-13T00:25:08.025624814Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:25:08.025989 containerd[1464]: time="2025-09-13T00:25:08.025941963Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:25:08.026143 containerd[1464]: time="2025-09-13T00:25:08.026116528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:25:08.026348 containerd[1464]: time="2025-09-13T00:25:08.026317267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:25:08.026461 containerd[1464]: time="2025-09-13T00:25:08.026429861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:25:08.026852 containerd[1464]: time="2025-09-13T00:25:08.026816955Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:25:08.026978 containerd[1464]: time="2025-09-13T00:25:08.026942256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:25:08.027130 containerd[1464]: time="2025-09-13T00:25:08.027101430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:25:08.027725 containerd[1464]: time="2025-09-13T00:25:08.027198342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:25:08.027725 containerd[1464]: time="2025-09-13T00:25:08.027340781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:25:08.027725 containerd[1464]: time="2025-09-13T00:25:08.027669635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:25:08.028452 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:25:08.029035 containerd[1464]: time="2025-09-13T00:25:08.028889352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:25:08.029035 containerd[1464]: time="2025-09-13T00:25:08.028925396Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:25:08.029813 containerd[1464]: time="2025-09-13T00:25:08.029524059Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:25:08.029813 containerd[1464]: time="2025-09-13T00:25:08.029615144Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:25:08.034885 containerd[1464]: time="2025-09-13T00:25:08.034720561Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:25:08.034885 containerd[1464]: time="2025-09-13T00:25:08.034806508Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:25:08.035362 containerd[1464]: time="2025-09-13T00:25:08.034833251Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:25:08.035362 containerd[1464]: time="2025-09-13T00:25:08.035021387Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:25:08.035362 containerd[1464]: time="2025-09-13T00:25:08.035048010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:25:08.035362 containerd[1464]: time="2025-09-13T00:25:08.035216806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036165167Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036361688Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036390695Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036414257Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036452122Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036476399Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036498873Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036522442Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036546411Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036571031Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036593037Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036615001Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036646253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.036982 containerd[1464]: time="2025-09-13T00:25:08.036671107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036691746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036715289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036744932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036780004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036803027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036826793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036850515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036876228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036900255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036920748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.037590 containerd[1464]: time="2025-09-13T00:25:08.036942170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.043039 containerd[1464]: time="2025-09-13T00:25:08.042010348Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 13 00:25:08.043039 containerd[1464]: time="2025-09-13T00:25:08.042064956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.043039 containerd[1464]: time="2025-09-13T00:25:08.042089690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.043039 containerd[1464]: time="2025-09-13T00:25:08.042109768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:25:08.045465 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 00:25:08.050995 containerd[1464]: time="2025-09-13T00:25:08.049460795Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:25:08.050995 containerd[1464]: time="2025-09-13T00:25:08.049522816Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 13 00:25:08.050995 containerd[1464]: time="2025-09-13T00:25:08.049553344Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:25:08.050995 containerd[1464]: time="2025-09-13T00:25:08.049585012Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 13 00:25:08.050995 containerd[1464]: time="2025-09-13T00:25:08.049611495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.050995 containerd[1464]: time="2025-09-13T00:25:08.049636803Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 13 00:25:08.050995 containerd[1464]: time="2025-09-13T00:25:08.049668727Z" level=info msg="NRI interface is disabled by configuration."
Sep 13 00:25:08.050995 containerd[1464]: time="2025-09-13T00:25:08.049693763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:25:08.053945 containerd[1464]: time="2025-09-13T00:25:08.053839433Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:25:08.053945 containerd[1464]: time="2025-09-13T00:25:08.053943237Z" level=info msg="Connect containerd service"
Sep 13 00:25:08.055066 systemd-networkd[1371]: eth0: Gained IPv6LL
Sep 13 00:25:08.056800 containerd[1464]: time="2025-09-13T00:25:08.055060468Z" level=info msg="using legacy CRI server"
Sep 13 00:25:08.056800 containerd[1464]: time="2025-09-13T00:25:08.055084463Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 00:25:08.056800 containerd[1464]: time="2025-09-13T00:25:08.055223469Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:25:08.056366 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:25:08.058632 containerd[1464]: time="2025-09-13T00:25:08.058599432Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:25:08.059197 containerd[1464]: time="2025-09-13T00:25:08.058856658Z" level=info msg="Start subscribing containerd event"
Sep 13 00:25:08.059197 containerd[1464]: time="2025-09-13T00:25:08.058918101Z" level=info msg="Start recovering state"
Sep 13 00:25:08.059197 containerd[1464]: time="2025-09-13T00:25:08.059024706Z" level=info msg="Start event monitor"
Sep 13 00:25:08.059197 containerd[1464]: time="2025-09-13T00:25:08.059043738Z" level=info msg="Start snapshots syncer"
Sep 13 00:25:08.059197 containerd[1464]: time="2025-09-13T00:25:08.059059117Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:25:08.059197 containerd[1464]: time="2025-09-13T00:25:08.059070093Z" level=info msg="Start streaming server"
Sep 13 00:25:08.060349 containerd[1464]: time="2025-09-13T00:25:08.060323426Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:25:08.061430 containerd[1464]: time="2025-09-13T00:25:08.060556289Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:25:08.061430 containerd[1464]: time="2025-09-13T00:25:08.060636775Z" level=info msg="containerd successfully booted in 0.146866s"
Sep 13 00:25:08.065720 systemd[1]: Started containerd.service - containerd container runtime.
Sep 13 00:25:08.076709 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:25:08.090367 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:25:08.108798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:25:08.124306 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:25:08.143127 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Sep 13 00:25:08.169605 init.sh[1548]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Sep 13 00:25:08.172214 init.sh[1548]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Sep 13 00:25:08.172214 init.sh[1548]: + /usr/bin/google_instance_setup
Sep 13 00:25:08.189529 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 00:25:08.365568 sshd[1533]: Accepted publickey for core from 147.75.109.163 port 54712 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:25:08.365468 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:25:08.392824 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 13 00:25:08.411453 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 13 00:25:08.432099 systemd-logind[1446]: New session 1 of user core.
Sep 13 00:25:08.463548 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 13 00:25:08.497366 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 00:25:08.536875 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:25:08.544457 tar[1462]: linux-amd64/LICENSE
Sep 13 00:25:08.544457 tar[1462]: linux-amd64/README.md
Sep 13 00:25:08.566000 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 13 00:25:08.785700 systemd[1560]: Queued start job for default target default.target.
Sep 13 00:25:08.792598 systemd[1560]: Created slice app.slice - User Application Slice.
Sep 13 00:25:08.792657 systemd[1560]: Reached target paths.target - Paths.
Sep 13 00:25:08.792682 systemd[1560]: Reached target timers.target - Timers.
Sep 13 00:25:08.796590 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 13 00:25:08.825348 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 13 00:25:08.825540 systemd[1560]: Reached target sockets.target - Sockets.
Sep 13 00:25:08.825570 systemd[1560]: Reached target basic.target - Basic System.
Sep 13 00:25:08.825638 systemd[1560]: Reached target default.target - Main User Target.
Sep 13 00:25:08.825727 systemd[1560]: Startup finished in 264ms.
Sep 13 00:25:08.826030 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 13 00:25:08.843219 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 13 00:25:08.975287 instance-setup[1554]: INFO Running google_set_multiqueue.
Sep 13 00:25:08.994536 instance-setup[1554]: INFO Set channels for eth0 to 2.
Sep 13 00:25:08.999939 instance-setup[1554]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1.
Sep 13 00:25:09.001930 instance-setup[1554]: INFO /proc/irq/27/smp_affinity_list: real affinity 0
Sep 13 00:25:09.002014 instance-setup[1554]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1.
Sep 13 00:25:09.004759 instance-setup[1554]: INFO /proc/irq/28/smp_affinity_list: real affinity 0
Sep 13 00:25:09.004833 instance-setup[1554]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1.
Sep 13 00:25:09.007471 instance-setup[1554]: INFO /proc/irq/29/smp_affinity_list: real affinity 1
Sep 13 00:25:09.007526 instance-setup[1554]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1.
Sep 13 00:25:09.009319 instance-setup[1554]: INFO /proc/irq/30/smp_affinity_list: real affinity 1
Sep 13 00:25:09.021674 instance-setup[1554]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 13 00:25:09.029001 instance-setup[1554]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Sep 13 00:25:09.031101 instance-setup[1554]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Sep 13 00:25:09.031157 instance-setup[1554]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Sep 13 00:25:09.059809 init.sh[1548]: + /usr/bin/google_metadata_script_runner --script-type startup
Sep 13 00:25:09.160345 systemd[1]: Started sshd@1-10.128.0.49:22-147.75.109.163:54714.service - OpenSSH per-connection server daemon (147.75.109.163:54714).
Sep 13 00:25:09.298109 startup-script[1603]: INFO Starting startup scripts.
Sep 13 00:25:09.303555 startup-script[1603]: INFO No startup scripts found in metadata.
Sep 13 00:25:09.303680 startup-script[1603]: INFO Finished running startup scripts.
Sep 13 00:25:09.327482 init.sh[1548]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Sep 13 00:25:09.327482 init.sh[1548]: + daemon_pids=()
Sep 13 00:25:09.327482 init.sh[1548]: + for d in accounts clock_skew network
Sep 13 00:25:09.327482 init.sh[1548]: + daemon_pids+=($!)
Sep 13 00:25:09.327482 init.sh[1548]: + for d in accounts clock_skew network
Sep 13 00:25:09.327482 init.sh[1548]: + daemon_pids+=($!)
Sep 13 00:25:09.327482 init.sh[1548]: + for d in accounts clock_skew network
Sep 13 00:25:09.327482 init.sh[1548]: + daemon_pids+=($!)
Sep 13 00:25:09.327482 init.sh[1548]: + NOTIFY_SOCKET=/run/systemd/notify
Sep 13 00:25:09.327482 init.sh[1548]: + /usr/bin/systemd-notify --ready
Sep 13 00:25:09.328610 init.sh[1610]: + /usr/bin/google_clock_skew_daemon
Sep 13 00:25:09.329361 init.sh[1611]: + /usr/bin/google_network_daemon
Sep 13 00:25:09.329855 init.sh[1609]: + /usr/bin/google_accounts_daemon
Sep 13 00:25:09.356663 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Sep 13 00:25:09.370257 init.sh[1548]: + wait -n 1609 1610 1611
Sep 13 00:25:09.592662 sshd[1605]: Accepted publickey for core from 147.75.109.163 port 54714 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:25:09.596991 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:25:09.612867 systemd-logind[1446]: New session 2 of user core.
Sep 13 00:25:09.616177 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 13 00:25:09.714287 google-networking[1611]: INFO Starting Google Networking daemon.
Sep 13 00:25:09.752570 google-clock-skew[1610]: INFO Starting Google Clock Skew daemon.
Sep 13 00:25:09.759498 google-clock-skew[1610]: INFO Clock drift token has changed: 0.
Sep 13 00:25:09.798011 groupadd[1622]: group added to /etc/group: name=google-sudoers, GID=1000
Sep 13 00:25:09.802948 groupadd[1622]: group added to /etc/gshadow: name=google-sudoers
Sep 13 00:25:09.862076 groupadd[1622]: new group: name=google-sudoers, GID=1000
Sep 13 00:25:09.870348 sshd[1605]: pam_unix(sshd:session): session closed for user core
Sep 13 00:25:09.879676 systemd[1]: sshd@1-10.128.0.49:22-147.75.109.163:54714.service: Deactivated successfully.
Sep 13 00:25:09.884242 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:25:09.887251 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:25:09.889715 systemd-logind[1446]: Removed session 2.
Sep 13 00:25:09.897009 google-accounts[1609]: INFO Starting Google Accounts daemon.
Sep 13 00:25:09.909862 google-accounts[1609]: WARNING OS Login not installed.
Sep 13 00:25:09.911100 google-accounts[1609]: INFO Creating a new user account for 0.
Sep 13 00:25:09.916111 init.sh[1633]: useradd: invalid user name '0': use --badname to ignore
Sep 13 00:25:09.916465 google-accounts[1609]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Sep 13 00:25:09.937083 systemd[1]: Started sshd@2-10.128.0.49:22-147.75.109.163:53618.service - OpenSSH per-connection server daemon (147.75.109.163:53618).
Sep 13 00:25:10.212264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:25:10.224138 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 00:25:10.232495 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:25:10.234825 systemd[1]: Startup finished in 1.001s (kernel) + 10.187s (initrd) + 9.353s (userspace) = 20.542s.
Sep 13 00:25:10.331852 sshd[1636]: Accepted publickey for core from 147.75.109.163 port 53618 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:25:10.333904 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:25:10.340749 systemd-logind[1446]: New session 3 of user core.
Sep 13 00:25:10.345161 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 13 00:25:10.608284 sshd[1636]: pam_unix(sshd:session): session closed for user core
Sep 13 00:25:10.614773 systemd[1]: sshd@2-10.128.0.49:22-147.75.109.163:53618.service: Deactivated successfully.
Sep 13 00:25:10.617637 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:25:10.620191 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:25:10.622510 systemd-logind[1446]: Removed session 3.
Sep 13 00:25:11.000957 google-clock-skew[1610]: INFO Synced system time with hardware clock.
Sep 13 00:25:11.002045 systemd-resolved[1316]: Clock change detected. Flushing caches.
Sep 13 00:25:11.072771 ntpd[1433]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:31%2]:123
Sep 13 00:25:11.073248 ntpd[1433]: 13 Sep 00:25:11 ntpd[1433]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:31%2]:123
Sep 13 00:25:11.363474 kubelet[1643]: E0913 00:25:11.363389 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:25:11.366773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:25:11.367031 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:25:11.367518 systemd[1]: kubelet.service: Consumed 1.320s CPU time.
Sep 13 00:25:20.918124 systemd[1]: Started sshd@3-10.128.0.49:22-147.75.109.163:37772.service - OpenSSH per-connection server daemon (147.75.109.163:37772).
Sep 13 00:25:21.287659 sshd[1659]: Accepted publickey for core from 147.75.109.163 port 37772 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:25:21.289531 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:25:21.296201 systemd-logind[1446]: New session 4 of user core.
Sep 13 00:25:21.302971 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 13 00:25:21.504852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:25:21.512327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:25:21.564048 sshd[1659]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:21.571046 systemd[1]: sshd@3-10.128.0.49:22-147.75.109.163:37772.service: Deactivated successfully. Sep 13 00:25:21.573338 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:25:21.574366 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:25:21.575962 systemd-logind[1446]: Removed session 4. Sep 13 00:25:21.637533 systemd[1]: Started sshd@4-10.128.0.49:22-147.75.109.163:37776.service - OpenSSH per-connection server daemon (147.75.109.163:37776). Sep 13 00:25:21.842977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:25:21.847098 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:25:21.905199 kubelet[1676]: E0913 00:25:21.905136 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:25:21.909771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:25:21.910048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:25:22.015299 sshd[1669]: Accepted publickey for core from 147.75.109.163 port 37776 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:25:22.017129 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:22.022854 systemd-logind[1446]: New session 5 of user core. Sep 13 00:25:22.025965 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 13 00:25:22.286703 sshd[1669]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:22.291619 systemd[1]: sshd@4-10.128.0.49:22-147.75.109.163:37776.service: Deactivated successfully. Sep 13 00:25:22.293953 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:25:22.296097 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:25:22.297701 systemd-logind[1446]: Removed session 5. Sep 13 00:25:22.360117 systemd[1]: Started sshd@5-10.128.0.49:22-147.75.109.163:37792.service - OpenSSH per-connection server daemon (147.75.109.163:37792). Sep 13 00:25:22.741040 sshd[1688]: Accepted publickey for core from 147.75.109.163 port 37792 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:25:22.743147 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:22.749664 systemd-logind[1446]: New session 6 of user core. Sep 13 00:25:22.757991 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:25:23.015906 sshd[1688]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:23.020404 systemd[1]: sshd@5-10.128.0.49:22-147.75.109.163:37792.service: Deactivated successfully. Sep 13 00:25:23.022727 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:25:23.024485 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:25:23.026182 systemd-logind[1446]: Removed session 6. Sep 13 00:25:23.087120 systemd[1]: Started sshd@6-10.128.0.49:22-147.75.109.163:37800.service - OpenSSH per-connection server daemon (147.75.109.163:37800). Sep 13 00:25:23.459558 sshd[1695]: Accepted publickey for core from 147.75.109.163 port 37800 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:25:23.461429 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:23.467909 systemd-logind[1446]: New session 7 of user core. 
Sep 13 00:25:23.481948 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:25:23.696508 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:25:23.697071 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:25:23.717527 sudo[1698]: pam_unix(sudo:session): session closed for user root Sep 13 00:25:23.775711 sshd[1695]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:23.781401 systemd[1]: sshd@6-10.128.0.49:22-147.75.109.163:37800.service: Deactivated successfully. Sep 13 00:25:23.783673 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:25:23.784602 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:25:23.786283 systemd-logind[1446]: Removed session 7. Sep 13 00:25:23.847130 systemd[1]: Started sshd@7-10.128.0.49:22-147.75.109.163:37806.service - OpenSSH per-connection server daemon (147.75.109.163:37806). Sep 13 00:25:24.222811 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 37806 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:25:24.224642 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:24.231181 systemd-logind[1446]: New session 8 of user core. Sep 13 00:25:24.240953 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 13 00:25:24.448038 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:25:24.448542 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:25:24.453446 sudo[1707]: pam_unix(sudo:session): session closed for user root Sep 13 00:25:24.466762 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:25:24.467250 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:25:24.484135 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:25:24.487693 auditctl[1710]: No rules Sep 13 00:25:24.488225 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:25:24.488502 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:25:24.495223 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:25:24.528419 augenrules[1728]: No rules Sep 13 00:25:24.529259 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:25:24.530656 sudo[1706]: pam_unix(sudo:session): session closed for user root Sep 13 00:25:24.589551 sshd[1703]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:24.594723 systemd[1]: sshd@7-10.128.0.49:22-147.75.109.163:37806.service: Deactivated successfully. Sep 13 00:25:24.596976 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:25:24.597896 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:25:24.599435 systemd-logind[1446]: Removed session 8. Sep 13 00:25:24.663111 systemd[1]: Started sshd@8-10.128.0.49:22-147.75.109.163:37816.service - OpenSSH per-connection server daemon (147.75.109.163:37816). 
Sep 13 00:25:25.035243 sshd[1736]: Accepted publickey for core from 147.75.109.163 port 37816 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:25:25.037227 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:25.044297 systemd-logind[1446]: New session 9 of user core. Sep 13 00:25:25.049995 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:25:25.258921 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:25:25.259435 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:25:25.691180 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:25:25.695318 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:25:26.131576 dockerd[1754]: time="2025-09-13T00:25:26.131410945Z" level=info msg="Starting up" Sep 13 00:25:26.275182 systemd[1]: var-lib-docker-metacopy\x2dcheck1507560304-merged.mount: Deactivated successfully. Sep 13 00:25:26.294221 dockerd[1754]: time="2025-09-13T00:25:26.294142838Z" level=info msg="Loading containers: start." Sep 13 00:25:26.438770 kernel: Initializing XFRM netlink socket Sep 13 00:25:26.544769 systemd-networkd[1371]: docker0: Link UP Sep 13 00:25:26.562900 dockerd[1754]: time="2025-09-13T00:25:26.562842255Z" level=info msg="Loading containers: done." 
Sep 13 00:25:26.590957 dockerd[1754]: time="2025-09-13T00:25:26.590890216Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:25:26.591169 dockerd[1754]: time="2025-09-13T00:25:26.591020093Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:25:26.591240 dockerd[1754]: time="2025-09-13T00:25:26.591174187Z" level=info msg="Daemon has completed initialization" Sep 13 00:25:26.629408 dockerd[1754]: time="2025-09-13T00:25:26.629308813Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:25:26.629974 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:25:27.509555 containerd[1464]: time="2025-09-13T00:25:27.509203134Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:25:28.094702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534877565.mount: Deactivated successfully. 
Sep 13 00:25:29.602106 containerd[1464]: time="2025-09-13T00:25:29.602031785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:29.603762 containerd[1464]: time="2025-09-13T00:25:29.603692164Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28124707" Sep 13 00:25:29.605404 containerd[1464]: time="2025-09-13T00:25:29.604805315Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:29.608499 containerd[1464]: time="2025-09-13T00:25:29.608455052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:29.610253 containerd[1464]: time="2025-09-13T00:25:29.610208400Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.100948451s" Sep 13 00:25:29.610401 containerd[1464]: time="2025-09-13T00:25:29.610374684Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:25:29.611542 containerd[1464]: time="2025-09-13T00:25:29.611508821Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:25:31.103267 containerd[1464]: time="2025-09-13T00:25:31.103200873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:31.104923 containerd[1464]: time="2025-09-13T00:25:31.104852941Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24718566" Sep 13 00:25:31.106541 containerd[1464]: time="2025-09-13T00:25:31.106018654Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:31.111882 containerd[1464]: time="2025-09-13T00:25:31.111841314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:31.115895 containerd[1464]: time="2025-09-13T00:25:31.115236252Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.503682281s" Sep 13 00:25:31.116005 containerd[1464]: time="2025-09-13T00:25:31.115903780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:25:31.117240 containerd[1464]: time="2025-09-13T00:25:31.117198491Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:25:32.160477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:25:32.167505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:25:32.476392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:25:32.486333 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:25:32.527779 containerd[1464]: time="2025-09-13T00:25:32.526922907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:32.529649 containerd[1464]: time="2025-09-13T00:25:32.529583236Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18789614" Sep 13 00:25:32.531294 containerd[1464]: time="2025-09-13T00:25:32.531250486Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:32.538828 containerd[1464]: time="2025-09-13T00:25:32.538786487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:32.544378 containerd[1464]: time="2025-09-13T00:25:32.544329294Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.427091719s" Sep 13 00:25:32.545035 containerd[1464]: time="2025-09-13T00:25:32.544381760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:25:32.546061 containerd[1464]: time="2025-09-13T00:25:32.545215975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:25:32.546166 
kubelet[1966]: E0913 00:25:32.546003 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:25:32.549338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:25:32.549591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:25:33.884045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3546436857.mount: Deactivated successfully. Sep 13 00:25:34.536822 containerd[1464]: time="2025-09-13T00:25:34.536732947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:34.538240 containerd[1464]: time="2025-09-13T00:25:34.538003355Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30412147" Sep 13 00:25:34.539903 containerd[1464]: time="2025-09-13T00:25:34.539362083Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:34.542526 containerd[1464]: time="2025-09-13T00:25:34.542477294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:34.543379 containerd[1464]: time="2025-09-13T00:25:34.543332346Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.998073895s" Sep 13 00:25:34.543498 containerd[1464]: time="2025-09-13T00:25:34.543385185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:25:34.544037 containerd[1464]: time="2025-09-13T00:25:34.544006241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:25:34.932369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685370264.mount: Deactivated successfully. Sep 13 00:25:36.131366 containerd[1464]: time="2025-09-13T00:25:36.131301743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:36.133081 containerd[1464]: time="2025-09-13T00:25:36.133020106Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Sep 13 00:25:36.134161 containerd[1464]: time="2025-09-13T00:25:36.134090368Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:36.139779 containerd[1464]: time="2025-09-13T00:25:36.139483150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:36.141160 containerd[1464]: time="2025-09-13T00:25:36.141113225Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.596955039s" Sep 13 00:25:36.141262 containerd[1464]: time="2025-09-13T00:25:36.141164351Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:25:36.141876 containerd[1464]: time="2025-09-13T00:25:36.141843531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:25:36.512811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665715479.mount: Deactivated successfully. Sep 13 00:25:36.519290 containerd[1464]: time="2025-09-13T00:25:36.519235963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:36.520434 containerd[1464]: time="2025-09-13T00:25:36.520366979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Sep 13 00:25:36.521112 containerd[1464]: time="2025-09-13T00:25:36.521051432Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:36.523948 containerd[1464]: time="2025-09-13T00:25:36.523884803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:36.525246 containerd[1464]: time="2025-09-13T00:25:36.524950093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 383.059879ms" Sep 13 
00:25:36.525246 containerd[1464]: time="2025-09-13T00:25:36.524995075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:25:36.526267 containerd[1464]: time="2025-09-13T00:25:36.525928974Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:25:36.917940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541619710.mount: Deactivated successfully. Sep 13 00:25:38.131673 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 00:25:39.455278 containerd[1464]: time="2025-09-13T00:25:39.455210461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:39.456997 containerd[1464]: time="2025-09-13T00:25:39.456932786Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56918218" Sep 13 00:25:39.457769 containerd[1464]: time="2025-09-13T00:25:39.457708250Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:39.461564 containerd[1464]: time="2025-09-13T00:25:39.461502217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:39.463375 containerd[1464]: time="2025-09-13T00:25:39.463209081Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.937235603s" Sep 13 00:25:39.463375 containerd[1464]: 
time="2025-09-13T00:25:39.463255500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:25:42.604017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:25:42.615276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:25:42.971008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:25:42.975172 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:25:43.044303 kubelet[2123]: E0913 00:25:43.044245 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:25:43.048811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:25:43.049071 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:25:43.543817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:25:43.550117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:25:43.605416 systemd[1]: Reloading requested from client PID 2137 ('systemctl') (unit session-9.scope)... Sep 13 00:25:43.605442 systemd[1]: Reloading... Sep 13 00:25:43.783846 zram_generator::config[2177]: No configuration found. Sep 13 00:25:43.919865 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:25:44.021949 systemd[1]: Reloading finished in 415 ms. 
Sep 13 00:25:44.094047 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:25:44.094183 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:25:44.094510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:25:44.108253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:25:44.569970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:25:44.570995 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:25:44.627467 kubelet[2229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:25:44.627467 kubelet[2229]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:25:44.627467 kubelet[2229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:25:44.628061 kubelet[2229]: I0913 00:25:44.627546 2229 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:25:45.126912 kubelet[2229]: I0913 00:25:45.126860 2229 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:25:45.126912 kubelet[2229]: I0913 00:25:45.126894 2229 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:25:45.127278 kubelet[2229]: I0913 00:25:45.127241 2229 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:25:45.164854 kubelet[2229]: E0913 00:25:45.164804 2229 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:25:45.168354 kubelet[2229]: I0913 00:25:45.168133 2229 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:25:45.180768 kubelet[2229]: E0913 00:25:45.179726 2229 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:25:45.180768 kubelet[2229]: I0913 00:25:45.179796 2229 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:25:45.188020 kubelet[2229]: I0913 00:25:45.187970 2229 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Sep 13 00:25:45.189224 kubelet[2229]: I0913 00:25:45.189174 2229 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:25:45.189495 kubelet[2229]: I0913 00:25:45.189442 2229 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:25:45.189760 kubelet[2229]: I0913 00:25:45.189484 2229 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:25:45.189760 kubelet[2229]: I0913 00:25:45.189753 2229 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:25:45.190018 kubelet[2229]: I0913 00:25:45.189773 2229 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:25:45.190018 kubelet[2229]: I0913 00:25:45.189917 2229 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:25:45.195130 kubelet[2229]: I0913 00:25:45.195087 2229 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:25:45.195130 kubelet[2229]: I0913 00:25:45.195125 2229 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:25:45.195692 kubelet[2229]: I0913 00:25:45.195172 2229 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:25:45.195692 kubelet[2229]: I0913 00:25:45.195198 2229 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:25:45.200314 kubelet[2229]: W0913 00:25:45.200256 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 13 00:25:45.200478 kubelet[2229]: E0913 00:25:45.200451 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:45.202330 kubelet[2229]: W0913 00:25:45.202276 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 13 00:25:45.202483 kubelet[2229]: E0913 00:25:45.202458 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:45.203461 kubelet[2229]: I0913 00:25:45.202637 2229 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:25:45.203461 kubelet[2229]: I0913 00:25:45.203283 2229 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:25:45.204244 kubelet[2229]: W0913 00:25:45.204202 2229 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:25:45.207883 kubelet[2229]: I0913 00:25:45.207852 2229 server.go:1274] "Started kubelet"
Sep 13 00:25:45.209762 kubelet[2229]: I0913 00:25:45.209084 2229 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:25:45.211083 kubelet[2229]: I0913 00:25:45.211037 2229 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:25:45.215855 kubelet[2229]: I0913 00:25:45.215729 2229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:25:45.216844 kubelet[2229]: I0913 00:25:45.216495 2229 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:25:45.217977 kubelet[2229]: I0913 00:25:45.217056 2229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:25:45.219326 kubelet[2229]: E0913 00:25:45.216763 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf.1864afe3439838d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,UID:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,},FirstTimestamp:2025-09-13 00:25:45.207822547 +0000 UTC m=+0.629422777,LastTimestamp:2025-09-13 00:25:45.207822547 +0000 UTC m=+0.629422777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,}"
Sep 13 00:25:45.222769 kubelet[2229]: I0913 00:25:45.222661 2229 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:25:45.228419 kubelet[2229]: I0913 00:25:45.227718 2229 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:25:45.228419 kubelet[2229]: E0913 00:25:45.227977 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" not found"
Sep 13 00:25:45.228646 kubelet[2229]: E0913 00:25:45.228585 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="200ms"
Sep 13 00:25:45.228924 kubelet[2229]: I0913 00:25:45.228896 2229 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:25:45.229050 kubelet[2229]: I0913 00:25:45.229023 2229 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:25:45.231563 kubelet[2229]: E0913 00:25:45.231533 2229 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:25:45.231849 kubelet[2229]: I0913 00:25:45.231823 2229 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:25:45.232836 kubelet[2229]: I0913 00:25:45.232816 2229 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:25:45.232991 kubelet[2229]: I0913 00:25:45.232978 2229 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:25:45.239879 kubelet[2229]: W0913 00:25:45.239810 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 13 00:25:45.240287 kubelet[2229]: E0913 00:25:45.240256 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:45.252851 kubelet[2229]: I0913 00:25:45.252654 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:25:45.254592 kubelet[2229]: I0913 00:25:45.254171 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:25:45.254592 kubelet[2229]: I0913 00:25:45.254196 2229 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:25:45.254592 kubelet[2229]: I0913 00:25:45.254215 2229 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:25:45.254592 kubelet[2229]: E0913 00:25:45.254265 2229 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:25:45.266211 kubelet[2229]: W0913 00:25:45.266164 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 13 00:25:45.266368 kubelet[2229]: E0913 00:25:45.266339 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:45.277594 kubelet[2229]: I0913 00:25:45.277551 2229 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:25:45.277594 kubelet[2229]: I0913 00:25:45.277572 2229 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:25:45.277594 kubelet[2229]: I0913 00:25:45.277596 2229 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:25:45.280559 kubelet[2229]: I0913 00:25:45.280457 2229 policy_none.go:49] "None policy: Start"
Sep 13 00:25:45.281530 kubelet[2229]: I0913 00:25:45.281174 2229 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:25:45.281530 kubelet[2229]: I0913 00:25:45.281266 2229 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:25:45.288480 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 00:25:45.299876 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 00:25:45.311234 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 00:25:45.313477 kubelet[2229]: I0913 00:25:45.313450 2229 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:25:45.313971 kubelet[2229]: I0913 00:25:45.313888 2229 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:25:45.314131 kubelet[2229]: I0913 00:25:45.314085 2229 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:25:45.314460 kubelet[2229]: I0913 00:25:45.314439 2229 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:25:45.317159 kubelet[2229]: E0913 00:25:45.317117 2229 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" not found"
Sep 13 00:25:45.379049 systemd[1]: Created slice kubepods-burstable-podc6e39997a6872d80a555d833062bd884.slice - libcontainer container kubepods-burstable-podc6e39997a6872d80a555d833062bd884.slice.
Sep 13 00:25:45.393093 systemd[1]: Created slice kubepods-burstable-pod7136c7a3b0878f84720150c4b08f959e.slice - libcontainer container kubepods-burstable-pod7136c7a3b0878f84720150c4b08f959e.slice.
Sep 13 00:25:45.409625 systemd[1]: Created slice kubepods-burstable-pod98e501346867cb09d09a9ffe119a85c9.slice - libcontainer container kubepods-burstable-pod98e501346867cb09d09a9ffe119a85c9.slice.
Sep 13 00:25:45.419835 kubelet[2229]: I0913 00:25:45.419777 2229 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.420253 kubelet[2229]: E0913 00:25:45.420209 2229 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.429707 kubelet[2229]: E0913 00:25:45.429661 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="400ms"
Sep 13 00:25:45.536158 kubelet[2229]: I0913 00:25:45.536082 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6e39997a6872d80a555d833062bd884-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"c6e39997a6872d80a555d833062bd884\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.536158 kubelet[2229]: I0913 00:25:45.536149 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.536417 kubelet[2229]: I0913 00:25:45.536180 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.536417 kubelet[2229]: I0913 00:25:45.536220 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6e39997a6872d80a555d833062bd884-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"c6e39997a6872d80a555d833062bd884\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.536417 kubelet[2229]: I0913 00:25:45.536246 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.536417 kubelet[2229]: I0913 00:25:45.536272 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.536612 kubelet[2229]: I0913 00:25:45.536318 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.536612 kubelet[2229]: I0913 00:25:45.536346 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98e501346867cb09d09a9ffe119a85c9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"98e501346867cb09d09a9ffe119a85c9\") " pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.536612 kubelet[2229]: I0913 00:25:45.536393 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6e39997a6872d80a555d833062bd884-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"c6e39997a6872d80a555d833062bd884\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.626214 kubelet[2229]: I0913 00:25:45.626176 2229 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.626651 kubelet[2229]: E0913 00:25:45.626586 2229 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:45.691665 containerd[1464]: time="2025-09-13T00:25:45.691515387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,Uid:c6e39997a6872d80a555d833062bd884,Namespace:kube-system,Attempt:0,}"
Sep 13 00:25:45.707537 containerd[1464]: time="2025-09-13T00:25:45.707468753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,Uid:7136c7a3b0878f84720150c4b08f959e,Namespace:kube-system,Attempt:0,}"
Sep 13 00:25:45.713914 containerd[1464]: time="2025-09-13T00:25:45.713869782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,Uid:98e501346867cb09d09a9ffe119a85c9,Namespace:kube-system,Attempt:0,}"
Sep 13 00:25:45.830547 kubelet[2229]: E0913 00:25:45.830488 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="800ms"
Sep 13 00:25:46.037569 kubelet[2229]: I0913 00:25:46.037011 2229 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:46.037569 kubelet[2229]: E0913 00:25:46.037436 2229 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.49:6443/api/v1/nodes\": dial tcp 10.128.0.49:6443: connect: connection refused" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:46.053145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933387097.mount: Deactivated successfully.
Sep 13 00:25:46.062212 containerd[1464]: time="2025-09-13T00:25:46.062134126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:25:46.063637 containerd[1464]: time="2025-09-13T00:25:46.063562030Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:25:46.064879 containerd[1464]: time="2025-09-13T00:25:46.064782362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:25:46.065334 containerd[1464]: time="2025-09-13T00:25:46.065270698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:25:46.066135 containerd[1464]: time="2025-09-13T00:25:46.066094557Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:25:46.067768 containerd[1464]: time="2025-09-13T00:25:46.067690140Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:25:46.068323 containerd[1464]: time="2025-09-13T00:25:46.068240467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Sep 13 00:25:46.071761 containerd[1464]: time="2025-09-13T00:25:46.070286459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:25:46.074150 containerd[1464]: time="2025-09-13T00:25:46.074104963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 366.542147ms"
Sep 13 00:25:46.076970 containerd[1464]: time="2025-09-13T00:25:46.076920588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 385.264981ms"
Sep 13 00:25:46.079305 containerd[1464]: time="2025-09-13T00:25:46.079253162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 365.298124ms"
Sep 13 00:25:46.144812 kubelet[2229]: W0913 00:25:46.143184 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 13 00:25:46.144812 kubelet[2229]: E0913 00:25:46.143272 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:46.283542 containerd[1464]: time="2025-09-13T00:25:46.282624284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:25:46.286484 containerd[1464]: time="2025-09-13T00:25:46.285473682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:25:46.286484 containerd[1464]: time="2025-09-13T00:25:46.285548502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:25:46.286484 containerd[1464]: time="2025-09-13T00:25:46.285640979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:25:46.286484 containerd[1464]: time="2025-09-13T00:25:46.285789553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:25:46.286484 containerd[1464]: time="2025-09-13T00:25:46.282734143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:25:46.286484 containerd[1464]: time="2025-09-13T00:25:46.283891809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:25:46.286484 containerd[1464]: time="2025-09-13T00:25:46.286285521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:25:46.300677 containerd[1464]: time="2025-09-13T00:25:46.294068386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:25:46.300677 containerd[1464]: time="2025-09-13T00:25:46.298979829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:25:46.300677 containerd[1464]: time="2025-09-13T00:25:46.299058746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:25:46.301068 containerd[1464]: time="2025-09-13T00:25:46.299263383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:25:46.318434 kubelet[2229]: W0913 00:25:46.318385 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 13 00:25:46.318564 kubelet[2229]: E0913 00:25:46.318454 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:46.333097 systemd[1]: Started cri-containerd-f352b8e1cb971db07b846def03f12f61075b5cef82f275c28f2f1f28386ae095.scope - libcontainer container f352b8e1cb971db07b846def03f12f61075b5cef82f275c28f2f1f28386ae095.
Sep 13 00:25:46.344043 systemd[1]: Started cri-containerd-c7e0063d9c684f430f35199788b245f591f26d8e2f783a0d98df8e97ae92892d.scope - libcontainer container c7e0063d9c684f430f35199788b245f591f26d8e2f783a0d98df8e97ae92892d.
Sep 13 00:25:46.350308 kubelet[2229]: W0913 00:25:46.350264 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 13 00:25:46.350425 kubelet[2229]: E0913 00:25:46.350330 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:46.351052 systemd[1]: Started cri-containerd-6dadc060d3aad5e39ac436d2d81670b07ddc00e34fa9fe4b9d0d625766037e6c.scope - libcontainer container 6dadc060d3aad5e39ac436d2d81670b07ddc00e34fa9fe4b9d0d625766037e6c.
Sep 13 00:25:46.367104 kubelet[2229]: W0913 00:25:46.367039 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf&limit=500&resourceVersion=0": dial tcp 10.128.0.49:6443: connect: connection refused
Sep 13 00:25:46.367258 kubelet[2229]: E0913 00:25:46.367128 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf&limit=500&resourceVersion=0\": dial tcp 10.128.0.49:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:25:46.443452 containerd[1464]: time="2025-09-13T00:25:46.443397411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,Uid:c6e39997a6872d80a555d833062bd884,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dadc060d3aad5e39ac436d2d81670b07ddc00e34fa9fe4b9d0d625766037e6c\""
Sep 13 00:25:46.452839 kubelet[2229]: E0913 00:25:46.452528 2229 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf830"
Sep 13 00:25:46.455945 containerd[1464]: time="2025-09-13T00:25:46.455658304Z" level=info msg="CreateContainer within sandbox \"6dadc060d3aad5e39ac436d2d81670b07ddc00e34fa9fe4b9d0d625766037e6c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:25:46.468370 containerd[1464]: time="2025-09-13T00:25:46.467643358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,Uid:7136c7a3b0878f84720150c4b08f959e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f352b8e1cb971db07b846def03f12f61075b5cef82f275c28f2f1f28386ae095\""
Sep 13 00:25:46.473625 kubelet[2229]: E0913 00:25:46.472757 2229 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1"
Sep 13 00:25:46.476818 containerd[1464]: time="2025-09-13T00:25:46.476765648Z" level=info msg="CreateContainer within sandbox \"f352b8e1cb971db07b846def03f12f61075b5cef82f275c28f2f1f28386ae095\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:25:46.485610 containerd[1464]: time="2025-09-13T00:25:46.485535155Z" level=info msg="CreateContainer within sandbox \"6dadc060d3aad5e39ac436d2d81670b07ddc00e34fa9fe4b9d0d625766037e6c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f231034f0023f1d600f5f0cbdf1e32c5da9762f2f958c54f1fcdd9f72eaadb5a\""
Sep 13 00:25:46.487595 containerd[1464]: time="2025-09-13T00:25:46.487561417Z" level=info msg="StartContainer for \"f231034f0023f1d600f5f0cbdf1e32c5da9762f2f958c54f1fcdd9f72eaadb5a\""
Sep 13 00:25:46.493400 containerd[1464]: time="2025-09-13T00:25:46.493352099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,Uid:98e501346867cb09d09a9ffe119a85c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e0063d9c684f430f35199788b245f591f26d8e2f783a0d98df8e97ae92892d\""
Sep 13 00:25:46.495969 kubelet[2229]: E0913 00:25:46.495917 2229 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf830"
Sep 13 00:25:46.498361 containerd[1464]: time="2025-09-13T00:25:46.498320301Z" level=info msg="CreateContainer within sandbox \"c7e0063d9c684f430f35199788b245f591f26d8e2f783a0d98df8e97ae92892d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:25:46.503396 containerd[1464]: time="2025-09-13T00:25:46.503351828Z" level=info msg="CreateContainer within sandbox \"f352b8e1cb971db07b846def03f12f61075b5cef82f275c28f2f1f28386ae095\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8bd4b8cae9ec9e00f120e5b9601abc23ade09c49c170d689982f6dde2f1e7360\""
Sep 13 00:25:46.504670 containerd[1464]: time="2025-09-13T00:25:46.504608970Z" level=info msg="StartContainer for \"8bd4b8cae9ec9e00f120e5b9601abc23ade09c49c170d689982f6dde2f1e7360\""
Sep 13 00:25:46.524038 containerd[1464]: time="2025-09-13T00:25:46.523945426Z" level=info msg="CreateContainer within sandbox \"c7e0063d9c684f430f35199788b245f591f26d8e2f783a0d98df8e97ae92892d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8c31e9f3c9750de3566d97296799728e05d0966be3f3ef0bd1c72f663a0bf78\""
Sep 13 00:25:46.526271 containerd[1464]: time="2025-09-13T00:25:46.524770824Z" level=info msg="StartContainer for \"c8c31e9f3c9750de3566d97296799728e05d0966be3f3ef0bd1c72f663a0bf78\""
Sep 13 00:25:46.536981 systemd[1]: Started cri-containerd-f231034f0023f1d600f5f0cbdf1e32c5da9762f2f958c54f1fcdd9f72eaadb5a.scope - libcontainer container f231034f0023f1d600f5f0cbdf1e32c5da9762f2f958c54f1fcdd9f72eaadb5a.
Sep 13 00:25:46.567976 systemd[1]: Started cri-containerd-8bd4b8cae9ec9e00f120e5b9601abc23ade09c49c170d689982f6dde2f1e7360.scope - libcontainer container 8bd4b8cae9ec9e00f120e5b9601abc23ade09c49c170d689982f6dde2f1e7360.
Sep 13 00:25:46.604964 systemd[1]: Started cri-containerd-c8c31e9f3c9750de3566d97296799728e05d0966be3f3ef0bd1c72f663a0bf78.scope - libcontainer container c8c31e9f3c9750de3566d97296799728e05d0966be3f3ef0bd1c72f663a0bf78.
Sep 13 00:25:46.631090 kubelet[2229]: E0913 00:25:46.631036 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf?timeout=10s\": dial tcp 10.128.0.49:6443: connect: connection refused" interval="1.6s"
Sep 13 00:25:46.654935 containerd[1464]: time="2025-09-13T00:25:46.654238822Z" level=info msg="StartContainer for \"f231034f0023f1d600f5f0cbdf1e32c5da9762f2f958c54f1fcdd9f72eaadb5a\" returns successfully"
Sep 13 00:25:46.720322 containerd[1464]: time="2025-09-13T00:25:46.720268745Z" level=info msg="StartContainer for \"8bd4b8cae9ec9e00f120e5b9601abc23ade09c49c170d689982f6dde2f1e7360\" returns successfully"
Sep 13 00:25:46.751061 containerd[1464]: time="2025-09-13T00:25:46.751008703Z" level=info msg="StartContainer for \"c8c31e9f3c9750de3566d97296799728e05d0966be3f3ef0bd1c72f663a0bf78\" returns successfully"
Sep 13 00:25:46.846914 kubelet[2229]: I0913 00:25:46.846157 2229 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:49.588535 kubelet[2229]: E0913 00:25:49.588450 2229 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" not found" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:49.624679 kubelet[2229]: E0913 00:25:49.624367 2229 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf.1864afe3439838d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,UID:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,},FirstTimestamp:2025-09-13 00:25:45.207822547 +0000 UTC m=+0.629422777,LastTimestamp:2025-09-13 00:25:45.207822547 +0000 UTC m=+0.629422777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf,}"
Sep 13 00:25:49.687772 kubelet[2229]: I0913 00:25:49.686230 2229 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf"
Sep 13 00:25:49.687772 kubelet[2229]: E0913 00:25:49.686286 2229 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\": node \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" not found"
Sep 13 00:25:50.205402 kubelet[2229]: I0913 00:25:50.205349 2229 apiserver.go:52] "Watching apiserver"
Sep 13 00:25:50.233901 kubelet[2229]: I0913 00:25:50.233854 2229 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:25:51.553285 systemd[1]: Reloading requested from client PID 2504 ('systemctl') (unit session-9.scope)...
Sep 13 00:25:51.553307 systemd[1]: Reloading...
Sep 13 00:25:51.689061 kubelet[2229]: W0913 00:25:51.689020 2229 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Sep 13 00:25:51.743777 zram_generator::config[2551]: No configuration found.
Sep 13 00:25:51.869122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:25:52.000868 systemd[1]: Reloading finished in 446 ms.
Sep 13 00:25:52.053363 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:25:52.064160 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:25:52.064458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:25:52.064528 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 130.6M memory peak, 0B memory swap peak.
Sep 13 00:25:52.073132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:25:52.442343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:25:52.457335 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:25:52.530784 kubelet[2592]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:25:52.530784 kubelet[2592]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:25:52.530784 kubelet[2592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:25:52.530784 kubelet[2592]: I0913 00:25:52.530291 2592 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:25:52.542354 kubelet[2592]: I0913 00:25:52.542307 2592 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:25:52.542565 kubelet[2592]: I0913 00:25:52.542546 2592 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:25:52.543764 kubelet[2592]: I0913 00:25:52.543035 2592 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:25:52.545513 kubelet[2592]: I0913 00:25:52.545481 2592 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:25:52.549996 kubelet[2592]: I0913 00:25:52.549279 2592 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:25:52.554639 kubelet[2592]: E0913 00:25:52.554606 2592 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:25:52.554639 kubelet[2592]: I0913 00:25:52.554639 2592 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 00:25:52.558674 kubelet[2592]: I0913 00:25:52.558624 2592 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:25:52.559008 kubelet[2592]: I0913 00:25:52.558985 2592 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:25:52.560601 kubelet[2592]: I0913 00:25:52.559249 2592 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:25:52.561246 kubelet[2592]: I0913 00:25:52.559308 2592 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManag
erPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:25:52.561246 kubelet[2592]: I0913 00:25:52.561241 2592 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:25:52.561474 kubelet[2592]: I0913 00:25:52.561259 2592 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:25:52.561474 kubelet[2592]: I0913 00:25:52.561301 2592 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:25:52.561474 kubelet[2592]: I0913 00:25:52.561452 2592 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:25:52.561474 kubelet[2592]: I0913 00:25:52.561471 2592 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:25:52.561672 kubelet[2592]: I0913 00:25:52.561517 2592 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:25:52.561672 kubelet[2592]: I0913 00:25:52.561533 2592 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:25:52.568928 kubelet[2592]: I0913 00:25:52.568902 2592 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:25:52.569668 kubelet[2592]: I0913 00:25:52.569642 2592 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:25:52.572879 kubelet[2592]: I0913 00:25:52.572855 2592 server.go:1274] "Started kubelet" Sep 13 00:25:52.579033 kubelet[2592]: I0913 00:25:52.579010 2592 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:25:52.581477 kubelet[2592]: I0913 00:25:52.581039 2592 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:25:52.582812 kubelet[2592]: I0913 00:25:52.582788 2592 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:25:52.585682 kubelet[2592]: I0913 
00:25:52.584332 2592 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:25:52.585682 kubelet[2592]: I0913 00:25:52.584616 2592 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:25:52.585682 kubelet[2592]: I0913 00:25:52.584925 2592 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:25:52.587918 kubelet[2592]: I0913 00:25:52.587398 2592 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:25:52.587918 kubelet[2592]: E0913 00:25:52.587691 2592 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" not found" Sep 13 00:25:52.588646 kubelet[2592]: I0913 00:25:52.588566 2592 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:25:52.588767 kubelet[2592]: I0913 00:25:52.588750 2592 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:25:52.597148 kubelet[2592]: I0913 00:25:52.597119 2592 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:25:52.597449 kubelet[2592]: I0913 00:25:52.597410 2592 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:25:52.609458 kubelet[2592]: I0913 00:25:52.609434 2592 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:25:52.627934 kubelet[2592]: E0913 00:25:52.627901 2592 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:25:52.656604 kubelet[2592]: I0913 00:25:52.656547 2592 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:25:52.663397 kubelet[2592]: I0913 00:25:52.663327 2592 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:25:52.663669 kubelet[2592]: I0913 00:25:52.663643 2592 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:25:52.663793 kubelet[2592]: I0913 00:25:52.663679 2592 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:25:52.663793 kubelet[2592]: E0913 00:25:52.663771 2592 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:25:52.725380 kubelet[2592]: I0913 00:25:52.724895 2592 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:25:52.725380 kubelet[2592]: I0913 00:25:52.724918 2592 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:25:52.725380 kubelet[2592]: I0913 00:25:52.724945 2592 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:25:52.725380 kubelet[2592]: I0913 00:25:52.725140 2592 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:25:52.725380 kubelet[2592]: I0913 00:25:52.725152 2592 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:25:52.725380 kubelet[2592]: I0913 00:25:52.725172 2592 policy_none.go:49] "None policy: Start" Sep 13 00:25:52.728167 kubelet[2592]: I0913 00:25:52.727651 2592 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:25:52.728167 kubelet[2592]: I0913 00:25:52.727735 2592 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:25:52.728167 kubelet[2592]: I0913 00:25:52.728015 2592 state_mem.go:75] "Updated machine memory state" Sep 13 00:25:52.737775 kubelet[2592]: I0913 00:25:52.737552 2592 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:25:52.738196 kubelet[2592]: I0913 00:25:52.738169 2592 eviction_manager.go:189] 
"Eviction manager: starting control loop" Sep 13 00:25:52.739328 kubelet[2592]: I0913 00:25:52.738475 2592 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:25:52.739328 kubelet[2592]: I0913 00:25:52.738802 2592 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:25:52.775948 kubelet[2592]: W0913 00:25:52.775866 2592 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 13 00:25:52.776621 kubelet[2592]: E0913 00:25:52.776518 2592 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.777005 kubelet[2592]: W0913 00:25:52.776916 2592 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 13 00:25:52.778414 kubelet[2592]: W0913 00:25:52.778378 2592 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 13 00:25:52.792885 kubelet[2592]: I0913 00:25:52.792567 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.792885 kubelet[2592]: I0913 00:25:52.792649 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.792885 kubelet[2592]: I0913 00:25:52.792683 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6e39997a6872d80a555d833062bd884-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"c6e39997a6872d80a555d833062bd884\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.792885 kubelet[2592]: I0913 00:25:52.792715 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6e39997a6872d80a555d833062bd884-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"c6e39997a6872d80a555d833062bd884\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.793225 kubelet[2592]: I0913 00:25:52.792767 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.793225 kubelet[2592]: I0913 00:25:52.792797 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.793225 kubelet[2592]: I0913 00:25:52.792828 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7136c7a3b0878f84720150c4b08f959e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"7136c7a3b0878f84720150c4b08f959e\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.793225 kubelet[2592]: I0913 00:25:52.792856 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98e501346867cb09d09a9ffe119a85c9-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"98e501346867cb09d09a9ffe119a85c9\") " pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.793459 kubelet[2592]: I0913 00:25:52.792894 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6e39997a6872d80a555d833062bd884-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" (UID: \"c6e39997a6872d80a555d833062bd884\") " pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.863825 kubelet[2592]: I0913 00:25:52.861784 2592 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.872588 kubelet[2592]: I0913 
00:25:52.872533 2592 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.872801 kubelet[2592]: I0913 00:25:52.872627 2592 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:52.932360 update_engine[1453]: I20250913 00:25:52.931759 1453 update_attempter.cc:509] Updating boot flags... Sep 13 00:25:53.010769 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2639) Sep 13 00:25:53.169301 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2635) Sep 13 00:25:53.357870 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2635) Sep 13 00:25:53.566432 kubelet[2592]: I0913 00:25:53.566294 2592 apiserver.go:52] "Watching apiserver" Sep 13 00:25:53.589771 kubelet[2592]: I0913 00:25:53.589703 2592 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:25:53.711184 kubelet[2592]: W0913 00:25:53.711085 2592 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Sep 13 00:25:53.711184 kubelet[2592]: E0913 00:25:53.711167 2592 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:25:53.767277 kubelet[2592]: I0913 00:25:53.767038 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" podStartSLOduration=1.7670140810000001 podStartE2EDuration="1.767014081s" podCreationTimestamp="2025-09-13 00:25:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:53.755173965 +0000 UTC m=+1.291450647" watchObservedRunningTime="2025-09-13 00:25:53.767014081 +0000 UTC m=+1.303290770" Sep 13 00:25:53.782281 kubelet[2592]: I0913 00:25:53.781873 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" podStartSLOduration=2.781852858 podStartE2EDuration="2.781852858s" podCreationTimestamp="2025-09-13 00:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:53.767855213 +0000 UTC m=+1.304131913" watchObservedRunningTime="2025-09-13 00:25:53.781852858 +0000 UTC m=+1.318129544" Sep 13 00:25:53.800777 kubelet[2592]: I0913 00:25:53.800503 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" podStartSLOduration=1.800481337 podStartE2EDuration="1.800481337s" podCreationTimestamp="2025-09-13 00:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:53.783263284 +0000 UTC m=+1.319539972" watchObservedRunningTime="2025-09-13 00:25:53.800481337 +0000 UTC m=+1.336758010" Sep 13 00:25:58.199378 kubelet[2592]: I0913 00:25:58.199173 2592 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:25:58.200081 containerd[1464]: time="2025-09-13T00:25:58.200001299Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:25:58.200890 kubelet[2592]: I0913 00:25:58.200820 2592 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:25:59.210231 systemd[1]: Created slice kubepods-besteffort-pod6d2afca2_c512_4fd2_93f9_9c7f43279699.slice - libcontainer container kubepods-besteffort-pod6d2afca2_c512_4fd2_93f9_9c7f43279699.slice. Sep 13 00:25:59.233781 kubelet[2592]: I0913 00:25:59.233213 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d2afca2-c512-4fd2-93f9-9c7f43279699-kube-proxy\") pod \"kube-proxy-pqg9v\" (UID: \"6d2afca2-c512-4fd2-93f9-9c7f43279699\") " pod="kube-system/kube-proxy-pqg9v" Sep 13 00:25:59.233781 kubelet[2592]: I0913 00:25:59.233261 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d2afca2-c512-4fd2-93f9-9c7f43279699-xtables-lock\") pod \"kube-proxy-pqg9v\" (UID: \"6d2afca2-c512-4fd2-93f9-9c7f43279699\") " pod="kube-system/kube-proxy-pqg9v" Sep 13 00:25:59.233781 kubelet[2592]: I0913 00:25:59.233297 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d2afca2-c512-4fd2-93f9-9c7f43279699-lib-modules\") pod \"kube-proxy-pqg9v\" (UID: \"6d2afca2-c512-4fd2-93f9-9c7f43279699\") " pod="kube-system/kube-proxy-pqg9v" Sep 13 00:25:59.233781 kubelet[2592]: I0913 00:25:59.233324 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blr49\" (UniqueName: \"kubernetes.io/projected/6d2afca2-c512-4fd2-93f9-9c7f43279699-kube-api-access-blr49\") pod \"kube-proxy-pqg9v\" (UID: \"6d2afca2-c512-4fd2-93f9-9c7f43279699\") " pod="kube-system/kube-proxy-pqg9v" Sep 13 00:25:59.347793 systemd[1]: Created slice 
kubepods-besteffort-pod5a10cc79_8fb6_4499_8ff0_21de284332f7.slice - libcontainer container kubepods-besteffort-pod5a10cc79_8fb6_4499_8ff0_21de284332f7.slice. Sep 13 00:25:59.434413 kubelet[2592]: I0913 00:25:59.434351 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ht2b\" (UniqueName: \"kubernetes.io/projected/5a10cc79-8fb6-4499-8ff0-21de284332f7-kube-api-access-2ht2b\") pod \"tigera-operator-58fc44c59b-5jhtv\" (UID: \"5a10cc79-8fb6-4499-8ff0-21de284332f7\") " pod="tigera-operator/tigera-operator-58fc44c59b-5jhtv" Sep 13 00:25:59.434633 kubelet[2592]: I0913 00:25:59.434463 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a10cc79-8fb6-4499-8ff0-21de284332f7-var-lib-calico\") pod \"tigera-operator-58fc44c59b-5jhtv\" (UID: \"5a10cc79-8fb6-4499-8ff0-21de284332f7\") " pod="tigera-operator/tigera-operator-58fc44c59b-5jhtv" Sep 13 00:25:59.522782 containerd[1464]: time="2025-09-13T00:25:59.522131056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqg9v,Uid:6d2afca2-c512-4fd2-93f9-9c7f43279699,Namespace:kube-system,Attempt:0,}" Sep 13 00:25:59.563130 containerd[1464]: time="2025-09-13T00:25:59.562900016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:25:59.565202 containerd[1464]: time="2025-09-13T00:25:59.565138018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:25:59.565414 containerd[1464]: time="2025-09-13T00:25:59.565371658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:25:59.565714 containerd[1464]: time="2025-09-13T00:25:59.565666410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:25:59.601955 systemd[1]: Started cri-containerd-ad1a5acebf54f888aa1001cca59942e78f072aefb1493d090b90747bce845dc0.scope - libcontainer container ad1a5acebf54f888aa1001cca59942e78f072aefb1493d090b90747bce845dc0. Sep 13 00:25:59.637096 containerd[1464]: time="2025-09-13T00:25:59.636848821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pqg9v,Uid:6d2afca2-c512-4fd2-93f9-9c7f43279699,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad1a5acebf54f888aa1001cca59942e78f072aefb1493d090b90747bce845dc0\"" Sep 13 00:25:59.643779 containerd[1464]: time="2025-09-13T00:25:59.643703642Z" level=info msg="CreateContainer within sandbox \"ad1a5acebf54f888aa1001cca59942e78f072aefb1493d090b90747bce845dc0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:25:59.654412 containerd[1464]: time="2025-09-13T00:25:59.654358558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5jhtv,Uid:5a10cc79-8fb6-4499-8ff0-21de284332f7,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:25:59.664274 containerd[1464]: time="2025-09-13T00:25:59.664228177Z" level=info msg="CreateContainer within sandbox \"ad1a5acebf54f888aa1001cca59942e78f072aefb1493d090b90747bce845dc0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cffdd6140100c42b643f28ff15bf48b2d9c5f207c3be2ae82638aa7311ef0613\"" Sep 13 00:25:59.665206 containerd[1464]: time="2025-09-13T00:25:59.665172787Z" level=info msg="StartContainer for \"cffdd6140100c42b643f28ff15bf48b2d9c5f207c3be2ae82638aa7311ef0613\"" Sep 13 00:25:59.701831 containerd[1464]: time="2025-09-13T00:25:59.701108834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:25:59.702799 containerd[1464]: time="2025-09-13T00:25:59.702477600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:25:59.702799 containerd[1464]: time="2025-09-13T00:25:59.702623580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:25:59.703123 containerd[1464]: time="2025-09-13T00:25:59.703065335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:25:59.719986 systemd[1]: Started cri-containerd-cffdd6140100c42b643f28ff15bf48b2d9c5f207c3be2ae82638aa7311ef0613.scope - libcontainer container cffdd6140100c42b643f28ff15bf48b2d9c5f207c3be2ae82638aa7311ef0613. Sep 13 00:25:59.750868 systemd[1]: Started cri-containerd-d2e655ef40b4453318dc13ae72bbf25dd837d8dd9ec153cf916bb86f52f85ad8.scope - libcontainer container d2e655ef40b4453318dc13ae72bbf25dd837d8dd9ec153cf916bb86f52f85ad8. 
Sep 13 00:25:59.795092 containerd[1464]: time="2025-09-13T00:25:59.794956564Z" level=info msg="StartContainer for \"cffdd6140100c42b643f28ff15bf48b2d9c5f207c3be2ae82638aa7311ef0613\" returns successfully" Sep 13 00:25:59.834957 containerd[1464]: time="2025-09-13T00:25:59.834783574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-5jhtv,Uid:5a10cc79-8fb6-4499-8ff0-21de284332f7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d2e655ef40b4453318dc13ae72bbf25dd837d8dd9ec153cf916bb86f52f85ad8\"" Sep 13 00:25:59.838826 containerd[1464]: time="2025-09-13T00:25:59.838065191Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:26:00.741701 kubelet[2592]: I0913 00:26:00.741404 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pqg9v" podStartSLOduration=1.741380744 podStartE2EDuration="1.741380744s" podCreationTimestamp="2025-09-13 00:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:26:00.741193889 +0000 UTC m=+8.277470575" watchObservedRunningTime="2025-09-13 00:26:00.741380744 +0000 UTC m=+8.277657429" Sep 13 00:26:00.867177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968758437.mount: Deactivated successfully. 
Sep 13 00:26:02.233373 containerd[1464]: time="2025-09-13T00:26:02.233300073Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:02.234677 containerd[1464]: time="2025-09-13T00:26:02.234570957Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:26:02.236781 containerd[1464]: time="2025-09-13T00:26:02.235713142Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:02.238782 containerd[1464]: time="2025-09-13T00:26:02.238407751Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:02.239780 containerd[1464]: time="2025-09-13T00:26:02.239433664Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.401317975s" Sep 13 00:26:02.239780 containerd[1464]: time="2025-09-13T00:26:02.239480353Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:26:02.243066 containerd[1464]: time="2025-09-13T00:26:02.243028634Z" level=info msg="CreateContainer within sandbox \"d2e655ef40b4453318dc13ae72bbf25dd837d8dd9ec153cf916bb86f52f85ad8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:26:02.268790 containerd[1464]: time="2025-09-13T00:26:02.268665915Z" level=info msg="CreateContainer within sandbox 
\"d2e655ef40b4453318dc13ae72bbf25dd837d8dd9ec153cf916bb86f52f85ad8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6377e660a37bf4c5f027df9ee0ccb5d437a9dbabc838128c326b583ad522046f\"" Sep 13 00:26:02.269594 containerd[1464]: time="2025-09-13T00:26:02.269499997Z" level=info msg="StartContainer for \"6377e660a37bf4c5f027df9ee0ccb5d437a9dbabc838128c326b583ad522046f\"" Sep 13 00:26:02.316012 systemd[1]: Started cri-containerd-6377e660a37bf4c5f027df9ee0ccb5d437a9dbabc838128c326b583ad522046f.scope - libcontainer container 6377e660a37bf4c5f027df9ee0ccb5d437a9dbabc838128c326b583ad522046f. Sep 13 00:26:02.349182 containerd[1464]: time="2025-09-13T00:26:02.349137339Z" level=info msg="StartContainer for \"6377e660a37bf4c5f027df9ee0ccb5d437a9dbabc838128c326b583ad522046f\" returns successfully" Sep 13 00:26:05.642442 kubelet[2592]: I0913 00:26:05.641985 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-5jhtv" podStartSLOduration=4.237913197 podStartE2EDuration="6.641958799s" podCreationTimestamp="2025-09-13 00:25:59 +0000 UTC" firstStartedPulling="2025-09-13 00:25:59.836931049 +0000 UTC m=+7.373207712" lastFinishedPulling="2025-09-13 00:26:02.240976646 +0000 UTC m=+9.777253314" observedRunningTime="2025-09-13 00:26:02.756119236 +0000 UTC m=+10.292395921" watchObservedRunningTime="2025-09-13 00:26:05.641958799 +0000 UTC m=+13.178235478" Sep 13 00:26:07.407388 sudo[1739]: pam_unix(sudo:session): session closed for user root Sep 13 00:26:07.467784 sshd[1736]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:07.477605 systemd[1]: sshd@8-10.128.0.49:22-147.75.109.163:37816.service: Deactivated successfully. Sep 13 00:26:07.483696 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:26:07.485147 systemd[1]: session-9.scope: Consumed 6.832s CPU time, 158.3M memory peak, 0B memory swap peak. 
Sep 13 00:26:07.489173 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:26:07.493590 systemd-logind[1446]: Removed session 9. Sep 13 00:26:13.530086 kubelet[2592]: I0913 00:26:13.530038 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2x89\" (UniqueName: \"kubernetes.io/projected/9a1bab69-80bd-4cf6-bf28-f89efea8a669-kube-api-access-v2x89\") pod \"calico-typha-6754cc844f-45p8k\" (UID: \"9a1bab69-80bd-4cf6-bf28-f89efea8a669\") " pod="calico-system/calico-typha-6754cc844f-45p8k" Sep 13 00:26:13.530086 kubelet[2592]: I0913 00:26:13.530099 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1bab69-80bd-4cf6-bf28-f89efea8a669-tigera-ca-bundle\") pod \"calico-typha-6754cc844f-45p8k\" (UID: \"9a1bab69-80bd-4cf6-bf28-f89efea8a669\") " pod="calico-system/calico-typha-6754cc844f-45p8k" Sep 13 00:26:13.530696 kubelet[2592]: I0913 00:26:13.530130 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9a1bab69-80bd-4cf6-bf28-f89efea8a669-typha-certs\") pod \"calico-typha-6754cc844f-45p8k\" (UID: \"9a1bab69-80bd-4cf6-bf28-f89efea8a669\") " pod="calico-system/calico-typha-6754cc844f-45p8k" Sep 13 00:26:13.537543 systemd[1]: Created slice kubepods-besteffort-pod9a1bab69_80bd_4cf6_bf28_f89efea8a669.slice - libcontainer container kubepods-besteffort-pod9a1bab69_80bd_4cf6_bf28_f89efea8a669.slice. Sep 13 00:26:13.792041 systemd[1]: Created slice kubepods-besteffort-pod8ea6c40f_eb1e_4738_910f_6e5fe4db9c76.slice - libcontainer container kubepods-besteffort-pod8ea6c40f_eb1e_4738_910f_6e5fe4db9c76.slice. 
Sep 13 00:26:13.833077 kubelet[2592]: I0913 00:26:13.832565 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-xtables-lock\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833077 kubelet[2592]: I0913 00:26:13.832627 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-cni-log-dir\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833077 kubelet[2592]: I0913 00:26:13.832656 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-cni-bin-dir\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833077 kubelet[2592]: I0913 00:26:13.832682 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-flexvol-driver-host\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833077 kubelet[2592]: I0913 00:26:13.832725 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-cni-net-dir\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833577 kubelet[2592]: I0913 00:26:13.832777 2592 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-policysync\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833577 kubelet[2592]: I0913 00:26:13.832806 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z89hx\" (UniqueName: \"kubernetes.io/projected/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-kube-api-access-z89hx\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833577 kubelet[2592]: I0913 00:26:13.832835 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-node-certs\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833577 kubelet[2592]: I0913 00:26:13.832865 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-var-lib-calico\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833577 kubelet[2592]: I0913 00:26:13.832895 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-var-run-calico\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833905 kubelet[2592]: I0913 00:26:13.832927 2592 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-lib-modules\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.833905 kubelet[2592]: I0913 00:26:13.832953 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ea6c40f-eb1e-4738-910f-6e5fe4db9c76-tigera-ca-bundle\") pod \"calico-node-9tthx\" (UID: \"8ea6c40f-eb1e-4738-910f-6e5fe4db9c76\") " pod="calico-system/calico-node-9tthx" Sep 13 00:26:13.850573 containerd[1464]: time="2025-09-13T00:26:13.850519027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6754cc844f-45p8k,Uid:9a1bab69-80bd-4cf6-bf28-f89efea8a669,Namespace:calico-system,Attempt:0,}" Sep 13 00:26:13.899806 containerd[1464]: time="2025-09-13T00:26:13.899488246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:13.899806 containerd[1464]: time="2025-09-13T00:26:13.899556981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:13.899806 containerd[1464]: time="2025-09-13T00:26:13.899586218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:13.900148 containerd[1464]: time="2025-09-13T00:26:13.899696012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:13.939856 kubelet[2592]: E0913 00:26:13.939286 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.939856 kubelet[2592]: W0913 00:26:13.939315 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.939856 kubelet[2592]: E0913 00:26:13.939351 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.941764 kubelet[2592]: E0913 00:26:13.940573 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.941764 kubelet[2592]: W0913 00:26:13.940600 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.941764 kubelet[2592]: E0913 00:26:13.941307 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.941764 kubelet[2592]: W0913 00:26:13.941357 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.941764 kubelet[2592]: E0913 00:26:13.941384 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:13.941764 kubelet[2592]: E0913 00:26:13.940790 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.943918 kubelet[2592]: E0913 00:26:13.942912 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.943918 kubelet[2592]: W0913 00:26:13.942975 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.943918 kubelet[2592]: E0913 00:26:13.943001 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.943918 kubelet[2592]: E0913 00:26:13.943571 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.943918 kubelet[2592]: W0913 00:26:13.943590 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.943918 kubelet[2592]: E0913 00:26:13.943611 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.948105 systemd[1]: Started cri-containerd-33a13599b74e23f7c62e35dacf02ce7388e0427d99d0b3fa690da1f7486fc92f.scope - libcontainer container 33a13599b74e23f7c62e35dacf02ce7388e0427d99d0b3fa690da1f7486fc92f. 
Sep 13 00:26:13.950134 kubelet[2592]: E0913 00:26:13.949937 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.950134 kubelet[2592]: W0913 00:26:13.949982 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.953810 kubelet[2592]: E0913 00:26:13.951350 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.953810 kubelet[2592]: W0913 00:26:13.951380 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.953810 kubelet[2592]: E0913 00:26:13.953658 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.953810 kubelet[2592]: W0913 00:26:13.953680 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.953810 kubelet[2592]: E0913 00:26:13.953750 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.955422 kubelet[2592]: E0913 00:26:13.954241 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.955422 kubelet[2592]: E0913 00:26:13.954281 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:13.955422 kubelet[2592]: E0913 00:26:13.955373 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.955422 kubelet[2592]: W0913 00:26:13.955390 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.955422 kubelet[2592]: E0913 00:26:13.955418 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.958628 kubelet[2592]: E0913 00:26:13.956049 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.958628 kubelet[2592]: W0913 00:26:13.956089 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.958628 kubelet[2592]: E0913 00:26:13.956129 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:13.958628 kubelet[2592]: E0913 00:26:13.956629 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.958628 kubelet[2592]: W0913 00:26:13.956643 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.958628 kubelet[2592]: E0913 00:26:13.956705 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.958628 kubelet[2592]: E0913 00:26:13.957305 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.958628 kubelet[2592]: W0913 00:26:13.957321 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.958628 kubelet[2592]: E0913 00:26:13.957354 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:13.958628 kubelet[2592]: E0913 00:26:13.957940 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.961201 kubelet[2592]: W0913 00:26:13.957993 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.961201 kubelet[2592]: E0913 00:26:13.958014 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:13.961201 kubelet[2592]: E0913 00:26:13.958637 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.961201 kubelet[2592]: W0913 00:26:13.958652 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.961201 kubelet[2592]: E0913 00:26:13.958682 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:13.980865 kubelet[2592]: E0913 00:26:13.980726 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:13.980865 kubelet[2592]: W0913 00:26:13.980791 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:13.980865 kubelet[2592]: E0913 00:26:13.980818 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:14.043591 kubelet[2592]: E0913 00:26:14.042529 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnfjg" podUID="7f1027a4-b75f-4b5f-b382-64bdc48ceda4" Sep 13 00:26:14.101201 containerd[1464]: time="2025-09-13T00:26:14.100662806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9tthx,Uid:8ea6c40f-eb1e-4738-910f-6e5fe4db9c76,Namespace:calico-system,Attempt:0,}" Sep 13 00:26:14.133285 kubelet[2592]: E0913 00:26:14.133096 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.133285 kubelet[2592]: W0913 00:26:14.133126 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.133285 kubelet[2592]: E0913 00:26:14.133157 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:14.135775 kubelet[2592]: E0913 00:26:14.135206 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.135775 kubelet[2592]: W0913 00:26:14.135231 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.135775 kubelet[2592]: E0913 00:26:14.135287 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:14.137434 kubelet[2592]: E0913 00:26:14.137153 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.137434 kubelet[2592]: W0913 00:26:14.137182 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.137434 kubelet[2592]: E0913 00:26:14.137236 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:14.138921 kubelet[2592]: E0913 00:26:14.137659 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.138921 kubelet[2592]: W0913 00:26:14.137677 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.138921 kubelet[2592]: E0913 00:26:14.137690 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:14.138921 kubelet[2592]: E0913 00:26:14.138159 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.138921 kubelet[2592]: W0913 00:26:14.138173 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.138921 kubelet[2592]: E0913 00:26:14.138186 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:14.139534 kubelet[2592]: E0913 00:26:14.139331 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.139534 kubelet[2592]: W0913 00:26:14.139349 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.139534 kubelet[2592]: E0913 00:26:14.139367 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:14.140022 kubelet[2592]: E0913 00:26:14.139851 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.140022 kubelet[2592]: W0913 00:26:14.139877 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.140022 kubelet[2592]: E0913 00:26:14.139894 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:14.141796 kubelet[2592]: E0913 00:26:14.140698 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.141796 kubelet[2592]: W0913 00:26:14.140717 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.141796 kubelet[2592]: E0913 00:26:14.140816 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:14.142288 kubelet[2592]: E0913 00:26:14.142248 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.142288 kubelet[2592]: W0913 00:26:14.142268 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.142423 kubelet[2592]: E0913 00:26:14.142319 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:14.142713 kubelet[2592]: E0913 00:26:14.142675 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.142713 kubelet[2592]: W0913 00:26:14.142693 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.142911 kubelet[2592]: E0913 00:26:14.142716 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:14.143243 kubelet[2592]: E0913 00:26:14.143220 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.143243 kubelet[2592]: W0913 00:26:14.143241 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.143407 kubelet[2592]: E0913 00:26:14.143334 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:14.144099 kubelet[2592]: E0913 00:26:14.144056 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.144099 kubelet[2592]: W0913 00:26:14.144077 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.144099 kubelet[2592]: E0913 00:26:14.144095 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:14.144363 kubelet[2592]: I0913 00:26:14.144131 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1027a4-b75f-4b5f-b382-64bdc48ceda4-kubelet-dir\") pod \"csi-node-driver-mnfjg\" (UID: \"7f1027a4-b75f-4b5f-b382-64bdc48ceda4\") " pod="calico-system/csi-node-driver-mnfjg" Sep 13 00:26:14.145290 kubelet[2592]: E0913 00:26:14.145264 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:14.145290 kubelet[2592]: W0913 00:26:14.145290 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:14.145467 kubelet[2592]: E0913 00:26:14.145314 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:26:14.145467 kubelet[2592]: I0913 00:26:14.145345 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f1027a4-b75f-4b5f-b382-64bdc48ceda4-registration-dir\") pod \"csi-node-driver-mnfjg\" (UID: \"7f1027a4-b75f-4b5f-b382-64bdc48ceda4\") " pod="calico-system/csi-node-driver-mnfjg"
Sep 13 00:26:14.145777 kubelet[2592]: E0913 00:26:14.145722 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:26:14.145777 kubelet[2592]: W0913 00:26:14.145755 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:26:14.146507 kubelet[2592]: E0913 00:26:14.146468 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:26:14.156704 containerd[1464]: time="2025-09-13T00:26:14.156561525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6754cc844f-45p8k,Uid:9a1bab69-80bd-4cf6-bf28-f89efea8a669,Namespace:calico-system,Attempt:0,} returns sandbox id \"33a13599b74e23f7c62e35dacf02ce7388e0427d99d0b3fa690da1f7486fc92f\""
Sep 13 00:26:14.159619 containerd[1464]: time="2025-09-13T00:26:14.159579145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:26:14.177188 containerd[1464]: time="2025-09-13T00:26:14.177087183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:26:14.177565 containerd[1464]: time="2025-09-13T00:26:14.177398046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:26:14.177565 containerd[1464]: time="2025-09-13T00:26:14.177509599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:26:14.179773 containerd[1464]: time="2025-09-13T00:26:14.178652623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:26:14.218048 systemd[1]: Started cri-containerd-31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3.scope - libcontainer container 31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3.
Sep 13 00:26:14.249701 kubelet[2592]: I0913 00:26:14.249371 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lrjt\" (UniqueName: \"kubernetes.io/projected/7f1027a4-b75f-4b5f-b382-64bdc48ceda4-kube-api-access-6lrjt\") pod \"csi-node-driver-mnfjg\" (UID: \"7f1027a4-b75f-4b5f-b382-64bdc48ceda4\") " pod="calico-system/csi-node-driver-mnfjg"
Sep 13 00:26:14.254292 kubelet[2592]: I0913 00:26:14.251826 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f1027a4-b75f-4b5f-b382-64bdc48ceda4-socket-dir\") pod \"csi-node-driver-mnfjg\" (UID: \"7f1027a4-b75f-4b5f-b382-64bdc48ceda4\") " pod="calico-system/csi-node-driver-mnfjg"
Sep 13 00:26:14.255494 kubelet[2592]: I0913 00:26:14.254388 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7f1027a4-b75f-4b5f-b382-64bdc48ceda4-varrun\") pod \"csi-node-driver-mnfjg\" (UID: \"7f1027a4-b75f-4b5f-b382-64bdc48ceda4\") " pod="calico-system/csi-node-driver-mnfjg"
Sep 13 00:26:14.312354 containerd[1464]: time="2025-09-13T00:26:14.312004248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9tthx,Uid:8ea6c40f-eb1e-4738-910f-6e5fe4db9c76,Namespace:calico-system,Attempt:0,} returns sandbox id \"31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3\""
Sep 13 00:26:15.170493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700727533.mount: Deactivated successfully.
Sep 13 00:26:15.665609 kubelet[2592]: E0913 00:26:15.664455 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnfjg" podUID="7f1027a4-b75f-4b5f-b382-64bdc48ceda4"
Sep 13 00:26:16.475420 containerd[1464]: time="2025-09-13T00:26:16.475359831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:16.476906 containerd[1464]: time="2025-09-13T00:26:16.476710265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:26:16.477878 containerd[1464]: time="2025-09-13T00:26:16.477836789Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:16.481694 containerd[1464]: time="2025-09-13T00:26:16.480676935Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:16.481694 containerd[1464]: time="2025-09-13T00:26:16.481526824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.321896266s" Sep 13 00:26:16.481694 containerd[1464]: time="2025-09-13T00:26:16.481569030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:26:16.483302 containerd[1464]: time="2025-09-13T00:26:16.483119026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:26:16.499140 containerd[1464]: time="2025-09-13T00:26:16.499098766Z" level=info msg="CreateContainer within sandbox \"33a13599b74e23f7c62e35dacf02ce7388e0427d99d0b3fa690da1f7486fc92f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:26:16.524010 containerd[1464]: time="2025-09-13T00:26:16.523958460Z" level=info msg="CreateContainer within sandbox \"33a13599b74e23f7c62e35dacf02ce7388e0427d99d0b3fa690da1f7486fc92f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"05f9dbd1d984d6e14d9920138cf51f20b923ad806584ebd715d8adad2ab682f7\"" Sep 13 00:26:16.526033 containerd[1464]: time="2025-09-13T00:26:16.525993630Z" level=info msg="StartContainer for \"05f9dbd1d984d6e14d9920138cf51f20b923ad806584ebd715d8adad2ab682f7\"" Sep 13 00:26:16.570976 systemd[1]: Started cri-containerd-05f9dbd1d984d6e14d9920138cf51f20b923ad806584ebd715d8adad2ab682f7.scope - libcontainer container 
05f9dbd1d984d6e14d9920138cf51f20b923ad806584ebd715d8adad2ab682f7. Sep 13 00:26:16.640692 containerd[1464]: time="2025-09-13T00:26:16.640637250Z" level=info msg="StartContainer for \"05f9dbd1d984d6e14d9920138cf51f20b923ad806584ebd715d8adad2ab682f7\" returns successfully" Sep 13 00:26:16.875291 kubelet[2592]: E0913 00:26:16.875243 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.875916 kubelet[2592]: W0913 00:26:16.875299 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.875916 kubelet[2592]: E0913 00:26:16.875332 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.875916 kubelet[2592]: E0913 00:26:16.875877 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.875916 kubelet[2592]: W0913 00:26:16.875894 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.876170 kubelet[2592]: E0913 00:26:16.875919 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.877780 kubelet[2592]: E0913 00:26:16.876304 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.877780 kubelet[2592]: W0913 00:26:16.876329 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.877780 kubelet[2592]: E0913 00:26:16.876348 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.877780 kubelet[2592]: E0913 00:26:16.876869 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.877780 kubelet[2592]: W0913 00:26:16.876902 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.877780 kubelet[2592]: E0913 00:26:16.876920 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.877780 kubelet[2592]: E0913 00:26:16.877296 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.877780 kubelet[2592]: W0913 00:26:16.877327 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.877780 kubelet[2592]: E0913 00:26:16.877344 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.877780 kubelet[2592]: E0913 00:26:16.877676 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.878477 kubelet[2592]: W0913 00:26:16.877690 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.878477 kubelet[2592]: E0913 00:26:16.877707 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.878477 kubelet[2592]: E0913 00:26:16.878058 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.878477 kubelet[2592]: W0913 00:26:16.878090 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.878477 kubelet[2592]: E0913 00:26:16.878108 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.878724 kubelet[2592]: E0913 00:26:16.878616 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.878724 kubelet[2592]: W0913 00:26:16.878630 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.878724 kubelet[2592]: E0913 00:26:16.878648 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.880767 kubelet[2592]: E0913 00:26:16.879083 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.880767 kubelet[2592]: W0913 00:26:16.879102 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.880767 kubelet[2592]: E0913 00:26:16.879120 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.880767 kubelet[2592]: E0913 00:26:16.879872 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.880767 kubelet[2592]: W0913 00:26:16.879890 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.880767 kubelet[2592]: E0913 00:26:16.880112 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.880767 kubelet[2592]: E0913 00:26:16.880450 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.880767 kubelet[2592]: W0913 00:26:16.880464 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.880767 kubelet[2592]: E0913 00:26:16.880481 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.881300 kubelet[2592]: E0913 00:26:16.880805 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.881300 kubelet[2592]: W0913 00:26:16.880819 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.881300 kubelet[2592]: E0913 00:26:16.880835 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.881300 kubelet[2592]: E0913 00:26:16.881222 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.881300 kubelet[2592]: W0913 00:26:16.881236 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.881300 kubelet[2592]: E0913 00:26:16.881252 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.881605 kubelet[2592]: E0913 00:26:16.881550 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.881605 kubelet[2592]: W0913 00:26:16.881563 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.881605 kubelet[2592]: E0913 00:26:16.881577 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.883775 kubelet[2592]: E0913 00:26:16.881914 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.883775 kubelet[2592]: W0913 00:26:16.881931 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.883775 kubelet[2592]: E0913 00:26:16.881947 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.883775 kubelet[2592]: E0913 00:26:16.882361 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.883775 kubelet[2592]: W0913 00:26:16.882375 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.883775 kubelet[2592]: E0913 00:26:16.882391 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.883775 kubelet[2592]: E0913 00:26:16.882780 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.883775 kubelet[2592]: W0913 00:26:16.882795 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.883775 kubelet[2592]: E0913 00:26:16.882828 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.883775 kubelet[2592]: E0913 00:26:16.883194 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.884390 kubelet[2592]: W0913 00:26:16.883207 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.884390 kubelet[2592]: E0913 00:26:16.883248 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.884390 kubelet[2592]: E0913 00:26:16.883573 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.884390 kubelet[2592]: W0913 00:26:16.883587 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.884390 kubelet[2592]: E0913 00:26:16.883603 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.884390 kubelet[2592]: E0913 00:26:16.883942 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.884390 kubelet[2592]: W0913 00:26:16.883957 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.884390 kubelet[2592]: E0913 00:26:16.883973 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.884390 kubelet[2592]: E0913 00:26:16.884306 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.884390 kubelet[2592]: W0913 00:26:16.884320 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.884965 kubelet[2592]: E0913 00:26:16.884351 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.884965 kubelet[2592]: E0913 00:26:16.884775 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.884965 kubelet[2592]: W0913 00:26:16.884791 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.884965 kubelet[2592]: E0913 00:26:16.884825 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.885220 kubelet[2592]: E0913 00:26:16.885209 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.885289 kubelet[2592]: W0913 00:26:16.885223 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.885289 kubelet[2592]: E0913 00:26:16.885240 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.887761 kubelet[2592]: E0913 00:26:16.886341 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.887761 kubelet[2592]: W0913 00:26:16.886361 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.887761 kubelet[2592]: E0913 00:26:16.886379 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.887761 kubelet[2592]: E0913 00:26:16.886715 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.887761 kubelet[2592]: W0913 00:26:16.886731 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.887761 kubelet[2592]: E0913 00:26:16.886849 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.887761 kubelet[2592]: E0913 00:26:16.887116 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.887761 kubelet[2592]: W0913 00:26:16.887129 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.887761 kubelet[2592]: E0913 00:26:16.887243 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.887761 kubelet[2592]: E0913 00:26:16.887481 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.888318 kubelet[2592]: W0913 00:26:16.887493 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.888318 kubelet[2592]: E0913 00:26:16.887629 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.888318 kubelet[2592]: E0913 00:26:16.887912 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.888318 kubelet[2592]: W0913 00:26:16.887926 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.888318 kubelet[2592]: E0913 00:26:16.887948 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.888575 kubelet[2592]: E0913 00:26:16.888481 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.888575 kubelet[2592]: W0913 00:26:16.888496 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.888575 kubelet[2592]: E0913 00:26:16.888528 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.888959 kubelet[2592]: E0913 00:26:16.888925 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.888959 kubelet[2592]: W0913 00:26:16.888957 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.889116 kubelet[2592]: E0913 00:26:16.888974 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.889317 kubelet[2592]: E0913 00:26:16.889295 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.889317 kubelet[2592]: W0913 00:26:16.889317 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.889455 kubelet[2592]: E0913 00:26:16.889334 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:16.890788 kubelet[2592]: E0913 00:26:16.890017 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.890788 kubelet[2592]: W0913 00:26:16.890037 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.890788 kubelet[2592]: E0913 00:26:16.890055 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:26:16.910468 kubelet[2592]: E0913 00:26:16.910431 2592 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:26:16.910468 kubelet[2592]: W0913 00:26:16.910464 2592 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:26:16.910716 kubelet[2592]: E0913 00:26:16.910488 2592 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:26:17.471593 containerd[1464]: time="2025-09-13T00:26:17.471524804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:17.472795 containerd[1464]: time="2025-09-13T00:26:17.472720063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:26:17.474900 containerd[1464]: time="2025-09-13T00:26:17.474677727Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:17.482099 containerd[1464]: time="2025-09-13T00:26:17.482061055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:17.485915 containerd[1464]: time="2025-09-13T00:26:17.484810023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.001537187s" Sep 13 00:26:17.485915 containerd[1464]: time="2025-09-13T00:26:17.484895602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:26:17.491537 containerd[1464]: time="2025-09-13T00:26:17.491495393Z" level=info msg="CreateContainer within sandbox \"31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:26:17.519521 containerd[1464]: time="2025-09-13T00:26:17.519402001Z" level=info msg="CreateContainer within sandbox \"31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd\"" Sep 13 00:26:17.520892 containerd[1464]: time="2025-09-13T00:26:17.520829857Z" level=info msg="StartContainer for \"e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd\"" Sep 13 00:26:17.600959 systemd[1]: Started cri-containerd-e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd.scope - libcontainer container e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd. 
Sep 13 00:26:17.664785 kubelet[2592]: E0913 00:26:17.664090 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnfjg" podUID="7f1027a4-b75f-4b5f-b382-64bdc48ceda4"
Sep 13 00:26:17.687460 containerd[1464]: time="2025-09-13T00:26:17.687322443Z" level=info msg="StartContainer for \"e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd\" returns successfully"
Sep 13 00:26:17.722549 systemd[1]: cri-containerd-e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd.scope: Deactivated successfully.
Sep 13 00:26:17.792871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd-rootfs.mount: Deactivated successfully.
Sep 13 00:26:17.813256 kubelet[2592]: I0913 00:26:17.793384 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:26:17.848415 kubelet[2592]: I0913 00:26:17.847540 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6754cc844f-45p8k" podStartSLOduration=2.523662102 podStartE2EDuration="4.84751817s" podCreationTimestamp="2025-09-13 00:26:13 +0000 UTC" firstStartedPulling="2025-09-13 00:26:14.159032524 +0000 UTC m=+21.695309200" lastFinishedPulling="2025-09-13 00:26:16.482888591 +0000 UTC m=+24.019165268" observedRunningTime="2025-09-13 00:26:16.810443593 +0000 UTC m=+24.346720282" watchObservedRunningTime="2025-09-13 00:26:17.84751817 +0000 UTC m=+25.383794854"
Sep 13 00:26:18.470235 containerd[1464]: time="2025-09-13T00:26:18.470133185Z" level=info msg="shim disconnected" id=e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd namespace=k8s.io
Sep 13 00:26:18.470235 containerd[1464]: time="2025-09-13T00:26:18.470207070Z" level=warning msg="cleaning up after shim disconnected" id=e53d79c7bbf0e291da6a33158acc6e75e7647f667e5883a54ad5069c276badbd namespace=k8s.io
Sep 13 00:26:18.470235 containerd[1464]: time="2025-09-13T00:26:18.470223118Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:26:18.806498 containerd[1464]: time="2025-09-13T00:26:18.805652557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:26:19.664972 kubelet[2592]: E0913 00:26:19.664881 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnfjg" podUID="7f1027a4-b75f-4b5f-b382-64bdc48ceda4"
Sep 13 00:26:21.664435 kubelet[2592]: E0913 00:26:21.664360 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnfjg" podUID="7f1027a4-b75f-4b5f-b382-64bdc48ceda4"
Sep 13 00:26:22.220650 containerd[1464]: time="2025-09-13T00:26:22.220579178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:22.222183 containerd[1464]: time="2025-09-13T00:26:22.221924632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 13 00:26:22.223684 containerd[1464]: time="2025-09-13T00:26:22.223240333Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:22.226820 containerd[1464]: time="2025-09-13T00:26:22.226786710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:22.227498 containerd[1464]: time="2025-09-13T00:26:22.227456438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.421740325s"
Sep 13 00:26:22.227593 containerd[1464]: time="2025-09-13T00:26:22.227503956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 00:26:22.231236 containerd[1464]: time="2025-09-13T00:26:22.231201880Z" level=info msg="CreateContainer within sandbox \"31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:26:22.253675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700988394.mount: Deactivated successfully.
Sep 13 00:26:22.255975 containerd[1464]: time="2025-09-13T00:26:22.255919705Z" level=info msg="CreateContainer within sandbox \"31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148\""
Sep 13 00:26:22.257122 containerd[1464]: time="2025-09-13T00:26:22.257083153Z" level=info msg="StartContainer for \"c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148\""
Sep 13 00:26:22.303029 systemd[1]: run-containerd-runc-k8s.io-c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148-runc.ssiSpM.mount: Deactivated successfully.
Sep 13 00:26:22.312936 systemd[1]: Started cri-containerd-c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148.scope - libcontainer container c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148.
Sep 13 00:26:22.355304 containerd[1464]: time="2025-09-13T00:26:22.355255184Z" level=info msg="StartContainer for \"c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148\" returns successfully"
Sep 13 00:26:23.339395 containerd[1464]: time="2025-09-13T00:26:23.339335469Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:26:23.342866 systemd[1]: cri-containerd-c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148.scope: Deactivated successfully.
Sep 13 00:26:23.378980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148-rootfs.mount: Deactivated successfully.
Sep 13 00:26:23.411013 kubelet[2592]: I0913 00:26:23.410912 2592 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:26:23.465923 systemd[1]: Created slice kubepods-burstable-pod1340fb7a_e0db_4806_8c10_89545a7ba6fe.slice - libcontainer container kubepods-burstable-pod1340fb7a_e0db_4806_8c10_89545a7ba6fe.slice.
Sep 13 00:26:23.493244 systemd[1]: Created slice kubepods-burstable-podc497ee4f_d0c8_467d_9216_5d2e88dee8c7.slice - libcontainer container kubepods-burstable-podc497ee4f_d0c8_467d_9216_5d2e88dee8c7.slice.
Sep 13 00:26:23.514449 systemd[1]: Created slice kubepods-besteffort-podd1fcfa74_5cf5_4886_8c7c_add2cf297c71.slice - libcontainer container kubepods-besteffort-podd1fcfa74_5cf5_4886_8c7c_add2cf297c71.slice.
Sep 13 00:26:23.526915 systemd[1]: Created slice kubepods-besteffort-pod305f5d4c_806f_42ee_82af_c8e6eacc2f74.slice - libcontainer container kubepods-besteffort-pod305f5d4c_806f_42ee_82af_c8e6eacc2f74.slice.
Sep 13 00:26:23.530774 kubelet[2592]: W0913 00:26:23.530409 2592 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' and this object
Sep 13 00:26:23.530774 kubelet[2592]: E0913 00:26:23.530469 2592 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' and this object" logger="UnhandledError"
Sep 13 00:26:23.536025 kubelet[2592]: W0913 00:26:23.533066 2592 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' and this object
Sep 13 00:26:23.536025 kubelet[2592]: E0913 00:26:23.533122 2592 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' and this object" logger="UnhandledError"
Sep 13 00:26:23.541214 kubelet[2592]: W0913 00:26:23.541140 2592 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' and this object
Sep 13 00:26:23.541214 kubelet[2592]: E0913 00:26:23.541206 2592 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' and this object" logger="UnhandledError"
Sep 13 00:26:23.560389 systemd[1]: Created slice kubepods-besteffort-pod5ebdab1b_1f4e_4fa2_ba06_30bf94a930b9.slice - libcontainer container kubepods-besteffort-pod5ebdab1b_1f4e_4fa2_ba06_30bf94a930b9.slice.
Sep 13 00:26:23.576268 systemd[1]: Created slice kubepods-besteffort-pod797236c0_8833_48f0_9caa_e94747de2be8.slice - libcontainer container kubepods-besteffort-pod797236c0_8833_48f0_9caa_e94747de2be8.slice.
Sep 13 00:26:23.596259 systemd[1]: Created slice kubepods-besteffort-pode0ab9580_fead_4e9f_abaa_e8f12c4c312a.slice - libcontainer container kubepods-besteffort-pode0ab9580_fead_4e9f_abaa_e8f12c4c312a.slice.
Sep 13 00:26:23.645517 kubelet[2592]: I0913 00:26:23.645468 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrdx\" (UniqueName: \"kubernetes.io/projected/d1fcfa74-5cf5-4886-8c7c-add2cf297c71-kube-api-access-rbrdx\") pod \"goldmane-7988f88666-h77dv\" (UID: \"d1fcfa74-5cf5-4886-8c7c-add2cf297c71\") " pod="calico-system/goldmane-7988f88666-h77dv"
Sep 13 00:26:23.646753 kubelet[2592]: I0913 00:26:23.645957 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dnhx\" (UniqueName: \"kubernetes.io/projected/797236c0-8833-48f0-9caa-e94747de2be8-kube-api-access-8dnhx\") pod \"whisker-c6f9fbd87-ffq8d\" (UID: \"797236c0-8833-48f0-9caa-e94747de2be8\") " pod="calico-system/whisker-c6f9fbd87-ffq8d"
Sep 13 00:26:23.646753 kubelet[2592]: I0913 00:26:23.646045 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9nb\" (UniqueName: \"kubernetes.io/projected/1340fb7a-e0db-4806-8c10-89545a7ba6fe-kube-api-access-2p9nb\") pod \"coredns-7c65d6cfc9-94fww\" (UID: \"1340fb7a-e0db-4806-8c10-89545a7ba6fe\") " pod="kube-system/coredns-7c65d6cfc9-94fww"
Sep 13 00:26:23.646753 kubelet[2592]: I0913 00:26:23.646104 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d1fcfa74-5cf5-4886-8c7c-add2cf297c71-goldmane-key-pair\") pod \"goldmane-7988f88666-h77dv\" (UID: \"d1fcfa74-5cf5-4886-8c7c-add2cf297c71\") " pod="calico-system/goldmane-7988f88666-h77dv"
Sep 13 00:26:23.646753 kubelet[2592]: I0913 00:26:23.646136 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h75g\" (UniqueName: \"kubernetes.io/projected/305f5d4c-806f-42ee-82af-c8e6eacc2f74-kube-api-access-9h75g\") pod \"calico-apiserver-54854fc87c-vb86r\" (UID: \"305f5d4c-806f-42ee-82af-c8e6eacc2f74\") " pod="calico-apiserver/calico-apiserver-54854fc87c-vb86r"
Sep 13 00:26:23.646753 kubelet[2592]: I0913 00:26:23.646168 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th9wh\" (UniqueName: \"kubernetes.io/projected/5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9-kube-api-access-th9wh\") pod \"calico-kube-controllers-7f6bc978d5-r28xt\" (UID: \"5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9\") " pod="calico-system/calico-kube-controllers-7f6bc978d5-r28xt"
Sep 13 00:26:23.647092 kubelet[2592]: I0913 00:26:23.646202 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npndz\" (UniqueName: \"kubernetes.io/projected/c497ee4f-d0c8-467d-9216-5d2e88dee8c7-kube-api-access-npndz\") pod \"coredns-7c65d6cfc9-hj2qq\" (UID: \"c497ee4f-d0c8-467d-9216-5d2e88dee8c7\") " pod="kube-system/coredns-7c65d6cfc9-hj2qq"
Sep 13 00:26:23.647092 kubelet[2592]: I0913 00:26:23.646246 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcgkc\" (UniqueName: \"kubernetes.io/projected/e0ab9580-fead-4e9f-abaa-e8f12c4c312a-kube-api-access-rcgkc\") pod \"calico-apiserver-54854fc87c-64p78\" (UID: \"e0ab9580-fead-4e9f-abaa-e8f12c4c312a\") " pod="calico-apiserver/calico-apiserver-54854fc87c-64p78"
Sep 13 00:26:23.647092 kubelet[2592]: I0913 00:26:23.646278 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1fcfa74-5cf5-4886-8c7c-add2cf297c71-config\") pod \"goldmane-7988f88666-h77dv\" (UID: \"d1fcfa74-5cf5-4886-8c7c-add2cf297c71\") " pod="calico-system/goldmane-7988f88666-h77dv"
Sep 13 00:26:23.647092 kubelet[2592]: I0913 00:26:23.646306 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/305f5d4c-806f-42ee-82af-c8e6eacc2f74-calico-apiserver-certs\") pod \"calico-apiserver-54854fc87c-vb86r\" (UID: \"305f5d4c-806f-42ee-82af-c8e6eacc2f74\") " pod="calico-apiserver/calico-apiserver-54854fc87c-vb86r"
Sep 13 00:26:23.647092 kubelet[2592]: I0913 00:26:23.646335 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1fcfa74-5cf5-4886-8c7c-add2cf297c71-goldmane-ca-bundle\") pod \"goldmane-7988f88666-h77dv\" (UID: \"d1fcfa74-5cf5-4886-8c7c-add2cf297c71\") " pod="calico-system/goldmane-7988f88666-h77dv"
Sep 13 00:26:23.648289 kubelet[2592]: I0913 00:26:23.646420 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0ab9580-fead-4e9f-abaa-e8f12c4c312a-calico-apiserver-certs\") pod \"calico-apiserver-54854fc87c-64p78\" (UID: \"e0ab9580-fead-4e9f-abaa-e8f12c4c312a\") " pod="calico-apiserver/calico-apiserver-54854fc87c-64p78"
Sep 13 00:26:23.648289 kubelet[2592]: I0913 00:26:23.646457 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1340fb7a-e0db-4806-8c10-89545a7ba6fe-config-volume\") pod \"coredns-7c65d6cfc9-94fww\" (UID: \"1340fb7a-e0db-4806-8c10-89545a7ba6fe\") " pod="kube-system/coredns-7c65d6cfc9-94fww"
Sep 13 00:26:23.648289 kubelet[2592]: I0913 00:26:23.646496 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/797236c0-8833-48f0-9caa-e94747de2be8-whisker-backend-key-pair\") pod \"whisker-c6f9fbd87-ffq8d\" (UID: \"797236c0-8833-48f0-9caa-e94747de2be8\") " pod="calico-system/whisker-c6f9fbd87-ffq8d"
Sep 13 00:26:23.648289 kubelet[2592]: I0913 00:26:23.646526 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/797236c0-8833-48f0-9caa-e94747de2be8-whisker-ca-bundle\") pod \"whisker-c6f9fbd87-ffq8d\" (UID: \"797236c0-8833-48f0-9caa-e94747de2be8\") " pod="calico-system/whisker-c6f9fbd87-ffq8d"
Sep 13 00:26:23.648289 kubelet[2592]: I0913 00:26:23.646558 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9-tigera-ca-bundle\") pod \"calico-kube-controllers-7f6bc978d5-r28xt\" (UID: \"5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9\") " pod="calico-system/calico-kube-controllers-7f6bc978d5-r28xt"
Sep 13 00:26:23.648455 kubelet[2592]: I0913 00:26:23.646588 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c497ee4f-d0c8-467d-9216-5d2e88dee8c7-config-volume\") pod \"coredns-7c65d6cfc9-hj2qq\" (UID: \"c497ee4f-d0c8-467d-9216-5d2e88dee8c7\") " pod="kube-system/coredns-7c65d6cfc9-hj2qq"
Sep 13 00:26:23.676365 systemd[1]: Created slice kubepods-besteffort-pod7f1027a4_b75f_4b5f_b382_64bdc48ceda4.slice - libcontainer container kubepods-besteffort-pod7f1027a4_b75f_4b5f_b382_64bdc48ceda4.slice.
Sep 13 00:26:23.680425 containerd[1464]: time="2025-09-13T00:26:23.679946935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnfjg,Uid:7f1027a4-b75f-4b5f-b382-64bdc48ceda4,Namespace:calico-system,Attempt:0,}"
Sep 13 00:26:23.852090 containerd[1464]: time="2025-09-13T00:26:23.851951808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54854fc87c-vb86r,Uid:305f5d4c-806f-42ee-82af-c8e6eacc2f74,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:26:23.870455 containerd[1464]: time="2025-09-13T00:26:23.870406010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6bc978d5-r28xt,Uid:5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9,Namespace:calico-system,Attempt:0,}"
Sep 13 00:26:23.889063 containerd[1464]: time="2025-09-13T00:26:23.888937754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c6f9fbd87-ffq8d,Uid:797236c0-8833-48f0-9caa-e94747de2be8,Namespace:calico-system,Attempt:0,}"
Sep 13 00:26:23.904843 containerd[1464]: time="2025-09-13T00:26:23.904782014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54854fc87c-64p78,Uid:e0ab9580-fead-4e9f-abaa-e8f12c4c312a,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:26:23.939833 containerd[1464]: time="2025-09-13T00:26:23.939520139Z" level=info msg="shim disconnected" id=c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148 namespace=k8s.io
Sep 13 00:26:23.939833 containerd[1464]: time="2025-09-13T00:26:23.939621935Z" level=warning msg="cleaning up after shim disconnected" id=c6748cd8e5f7998921437baeb0c9eefd99028a43971cfe34d6fa5aa83de4c148 namespace=k8s.io
Sep 13 00:26:23.939833 containerd[1464]: time="2025-09-13T00:26:23.939638440Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:26:24.084798 containerd[1464]: time="2025-09-13T00:26:24.084705293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-94fww,Uid:1340fb7a-e0db-4806-8c10-89545a7ba6fe,Namespace:kube-system,Attempt:0,}"
Sep 13 00:26:24.102218 containerd[1464]: time="2025-09-13T00:26:24.101855360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hj2qq,Uid:c497ee4f-d0c8-467d-9216-5d2e88dee8c7,Namespace:kube-system,Attempt:0,}"
Sep 13 00:26:24.265722 containerd[1464]: time="2025-09-13T00:26:24.265638083Z" level=error msg="Failed to destroy network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.266429 containerd[1464]: time="2025-09-13T00:26:24.266244170Z" level=error msg="encountered an error cleaning up failed sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.266791 containerd[1464]: time="2025-09-13T00:26:24.266468415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54854fc87c-vb86r,Uid:305f5d4c-806f-42ee-82af-c8e6eacc2f74,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.267093 kubelet[2592]: E0913 00:26:24.266936 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.267713 kubelet[2592]: E0913 00:26:24.267115 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54854fc87c-vb86r"
Sep 13 00:26:24.267713 kubelet[2592]: E0913 00:26:24.267241 2592 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54854fc87c-vb86r"
Sep 13 00:26:24.267713 kubelet[2592]: E0913 00:26:24.267340 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54854fc87c-vb86r_calico-apiserver(305f5d4c-806f-42ee-82af-c8e6eacc2f74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54854fc87c-vb86r_calico-apiserver(305f5d4c-806f-42ee-82af-c8e6eacc2f74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54854fc87c-vb86r" podUID="305f5d4c-806f-42ee-82af-c8e6eacc2f74"
Sep 13 00:26:24.321860 containerd[1464]: time="2025-09-13T00:26:24.321717904Z" level=error msg="Failed to destroy network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.322411 containerd[1464]: time="2025-09-13T00:26:24.322092101Z" level=error msg="Failed to destroy network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.322527 containerd[1464]: time="2025-09-13T00:26:24.322359997Z" level=error msg="encountered an error cleaning up failed sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.322629 containerd[1464]: time="2025-09-13T00:26:24.322566428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hj2qq,Uid:c497ee4f-d0c8-467d-9216-5d2e88dee8c7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.323803 containerd[1464]: time="2025-09-13T00:26:24.322863152Z" level=error msg="encountered an error cleaning up failed sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.323803 containerd[1464]: time="2025-09-13T00:26:24.322927261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6bc978d5-r28xt,Uid:5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.324035 kubelet[2592]: E0913 00:26:24.323176 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.324035 kubelet[2592]: E0913 00:26:24.323256 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f6bc978d5-r28xt"
Sep 13 00:26:24.324035 kubelet[2592]: E0913 00:26:24.323287 2592 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f6bc978d5-r28xt"
Sep 13 00:26:24.324226 kubelet[2592]: E0913 00:26:24.323343 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f6bc978d5-r28xt_calico-system(5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f6bc978d5-r28xt_calico-system(5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f6bc978d5-r28xt" podUID="5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9"
Sep 13 00:26:24.324226 kubelet[2592]: E0913 00:26:24.324107 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.324226 kubelet[2592]: E0913 00:26:24.324166 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hj2qq"
Sep 13 00:26:24.324442 kubelet[2592]: E0913 00:26:24.324200 2592 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hj2qq"
Sep 13 00:26:24.324442 kubelet[2592]: E0913 00:26:24.324251 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hj2qq_kube-system(c497ee4f-d0c8-467d-9216-5d2e88dee8c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hj2qq_kube-system(c497ee4f-d0c8-467d-9216-5d2e88dee8c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hj2qq" podUID="c497ee4f-d0c8-467d-9216-5d2e88dee8c7"
Sep 13 00:26:24.333838 containerd[1464]: time="2025-09-13T00:26:24.333701924Z" level=error msg="Failed to destroy network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.335657 containerd[1464]: time="2025-09-13T00:26:24.335461192Z" level=error msg="encountered an error cleaning up failed sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.335657 containerd[1464]: time="2025-09-13T00:26:24.335535973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnfjg,Uid:7f1027a4-b75f-4b5f-b382-64bdc48ceda4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.336771 kubelet[2592]: E0913 00:26:24.336575 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.336771 kubelet[2592]: E0913 00:26:24.336642 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mnfjg"
Sep 13 00:26:24.336771 kubelet[2592]: E0913 00:26:24.336670 2592 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mnfjg"
Sep 13 00:26:24.337005 kubelet[2592]: E0913 00:26:24.336717 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mnfjg_calico-system(7f1027a4-b75f-4b5f-b382-64bdc48ceda4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mnfjg_calico-system(7f1027a4-b75f-4b5f-b382-64bdc48ceda4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mnfjg" podUID="7f1027a4-b75f-4b5f-b382-64bdc48ceda4"
Sep 13 00:26:24.354171 containerd[1464]: time="2025-09-13T00:26:24.353969837Z" level=error msg="Failed to destroy network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.357524 containerd[1464]: time="2025-09-13T00:26:24.356993813Z" level=error msg="Failed to destroy network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.358140 containerd[1464]: time="2025-09-13T00:26:24.357990670Z" level=error msg="encountered an error cleaning up failed sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.358140 containerd[1464]: time="2025-09-13T00:26:24.358061340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54854fc87c-64p78,Uid:e0ab9580-fead-4e9f-abaa-e8f12c4c312a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.361506 containerd[1464]: time="2025-09-13T00:26:24.360396120Z" level=error msg="encountered an error cleaning up failed sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.361506 containerd[1464]: time="2025-09-13T00:26:24.360481254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c6f9fbd87-ffq8d,Uid:797236c0-8833-48f0-9caa-e94747de2be8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.361901 kubelet[2592]: E0913 00:26:24.360985 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:26:24.361901 kubelet[2592]: E0913 00:26:24.361044 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c6f9fbd87-ffq8d"
Sep 13 00:26:24.361901 kubelet[2592]: E0913 00:26:24.361074 2592 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c6f9fbd87-ffq8d"
Sep 13 00:26:24.362220 kubelet[2592]: E0913 00:26:24.361127 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c6f9fbd87-ffq8d_calico-system(797236c0-8833-48f0-9caa-e94747de2be8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c6f9fbd87-ffq8d_calico-system(797236c0-8833-48f0-9caa-e94747de2be8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
pod="calico-system/whisker-c6f9fbd87-ffq8d" podUID="797236c0-8833-48f0-9caa-e94747de2be8" Sep 13 00:26:24.362220 kubelet[2592]: E0913 00:26:24.362008 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:24.362220 kubelet[2592]: E0913 00:26:24.362054 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54854fc87c-64p78" Sep 13 00:26:24.362454 kubelet[2592]: E0913 00:26:24.362082 2592 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54854fc87c-64p78" Sep 13 00:26:24.362454 kubelet[2592]: E0913 00:26:24.362135 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54854fc87c-64p78_calico-apiserver(e0ab9580-fead-4e9f-abaa-e8f12c4c312a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54854fc87c-64p78_calico-apiserver(e0ab9580-fead-4e9f-abaa-e8f12c4c312a)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54854fc87c-64p78" podUID="e0ab9580-fead-4e9f-abaa-e8f12c4c312a" Sep 13 00:26:24.418771 containerd[1464]: time="2025-09-13T00:26:24.418690622Z" level=error msg="Failed to destroy network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:24.421467 containerd[1464]: time="2025-09-13T00:26:24.421409071Z" level=error msg="encountered an error cleaning up failed sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:24.421587 containerd[1464]: time="2025-09-13T00:26:24.421501255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-94fww,Uid:1340fb7a-e0db-4806-8c10-89545a7ba6fe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:24.422770 kubelet[2592]: E0913 00:26:24.421797 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:24.422770 kubelet[2592]: E0913 00:26:24.421879 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-94fww" Sep 13 00:26:24.422770 kubelet[2592]: E0913 00:26:24.421912 2592 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-94fww" Sep 13 00:26:24.423333 kubelet[2592]: E0913 00:26:24.421968 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-94fww_kube-system(1340fb7a-e0db-4806-8c10-89545a7ba6fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-94fww_kube-system(1340fb7a-e0db-4806-8c10-89545a7ba6fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-94fww" 
podUID="1340fb7a-e0db-4806-8c10-89545a7ba6fe" Sep 13 00:26:24.423661 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a-shm.mount: Deactivated successfully. Sep 13 00:26:24.753162 kubelet[2592]: E0913 00:26:24.752996 2592 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:26:24.753162 kubelet[2592]: E0913 00:26:24.753134 2592 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1fcfa74-5cf5-4886-8c7c-add2cf297c71-config podName:d1fcfa74-5cf5-4886-8c7c-add2cf297c71 nodeName:}" failed. No retries permitted until 2025-09-13 00:26:25.253101667 +0000 UTC m=+32.789378347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d1fcfa74-5cf5-4886-8c7c-add2cf297c71-config") pod "goldmane-7988f88666-h77dv" (UID: "d1fcfa74-5cf5-4886-8c7c-add2cf297c71") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:26:24.823663 kubelet[2592]: I0913 00:26:24.823619 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:24.825076 containerd[1464]: time="2025-09-13T00:26:24.824571812Z" level=info msg="StopPodSandbox for \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\"" Sep 13 00:26:24.825076 containerd[1464]: time="2025-09-13T00:26:24.824834553Z" level=info msg="Ensure that sandbox 3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391 in task-service has been cleanup successfully" Sep 13 00:26:24.829083 kubelet[2592]: I0913 00:26:24.828561 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:24.829897 containerd[1464]: 
time="2025-09-13T00:26:24.829860052Z" level=info msg="StopPodSandbox for \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\"" Sep 13 00:26:24.830132 containerd[1464]: time="2025-09-13T00:26:24.830083785Z" level=info msg="Ensure that sandbox e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b in task-service has been cleanup successfully" Sep 13 00:26:24.837485 kubelet[2592]: I0913 00:26:24.836807 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:24.837711 containerd[1464]: time="2025-09-13T00:26:24.837682234Z" level=info msg="StopPodSandbox for \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\"" Sep 13 00:26:24.838318 containerd[1464]: time="2025-09-13T00:26:24.838286387Z" level=info msg="Ensure that sandbox 2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a in task-service has been cleanup successfully" Sep 13 00:26:24.844396 kubelet[2592]: I0913 00:26:24.844365 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:24.847038 containerd[1464]: time="2025-09-13T00:26:24.845826168Z" level=info msg="StopPodSandbox for \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\"" Sep 13 00:26:24.847038 containerd[1464]: time="2025-09-13T00:26:24.846049145Z" level=info msg="Ensure that sandbox 22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294 in task-service has been cleanup successfully" Sep 13 00:26:24.855263 kubelet[2592]: I0913 00:26:24.855238 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:24.859943 containerd[1464]: time="2025-09-13T00:26:24.859857439Z" level=info msg="StopPodSandbox for 
\"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\"" Sep 13 00:26:24.863537 containerd[1464]: time="2025-09-13T00:26:24.863502784Z" level=info msg="Ensure that sandbox 3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99 in task-service has been cleanup successfully" Sep 13 00:26:24.867099 kubelet[2592]: I0913 00:26:24.867069 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:24.870722 containerd[1464]: time="2025-09-13T00:26:24.870687343Z" level=info msg="StopPodSandbox for \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\"" Sep 13 00:26:24.875422 containerd[1464]: time="2025-09-13T00:26:24.875228798Z" level=info msg="Ensure that sandbox 8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee in task-service has been cleanup successfully" Sep 13 00:26:24.888353 containerd[1464]: time="2025-09-13T00:26:24.888221564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:26:24.891326 kubelet[2592]: I0913 00:26:24.891209 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:24.896088 containerd[1464]: time="2025-09-13T00:26:24.895875173Z" level=info msg="StopPodSandbox for \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\"" Sep 13 00:26:24.898350 containerd[1464]: time="2025-09-13T00:26:24.898286518Z" level=info msg="Ensure that sandbox 320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d in task-service has been cleanup successfully" Sep 13 00:26:25.025592 containerd[1464]: time="2025-09-13T00:26:25.025267662Z" level=error msg="StopPodSandbox for \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\" failed" error="failed to destroy network for sandbox 
\"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.025922 kubelet[2592]: E0913 00:26:25.025589 2592 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:25.025922 kubelet[2592]: E0913 00:26:25.025661 2592 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391"} Sep 13 00:26:25.025922 kubelet[2592]: E0913 00:26:25.025772 2592 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0ab9580-fead-4e9f-abaa-e8f12c4c312a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:26:25.027630 containerd[1464]: time="2025-09-13T00:26:25.027226718Z" level=error msg="StopPodSandbox for \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\" failed" error="failed to destroy network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.027832 kubelet[2592]: E0913 00:26:25.025817 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0ab9580-fead-4e9f-abaa-e8f12c4c312a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54854fc87c-64p78" podUID="e0ab9580-fead-4e9f-abaa-e8f12c4c312a" Sep 13 00:26:25.028265 kubelet[2592]: E0913 00:26:25.028073 2592 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:25.028265 kubelet[2592]: E0913 00:26:25.028126 2592 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee"} Sep 13 00:26:25.028265 kubelet[2592]: E0913 00:26:25.028178 2592 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"797236c0-8833-48f0-9caa-e94747de2be8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Sep 13 00:26:25.028265 kubelet[2592]: E0913 00:26:25.028214 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"797236c0-8833-48f0-9caa-e94747de2be8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c6f9fbd87-ffq8d" podUID="797236c0-8833-48f0-9caa-e94747de2be8" Sep 13 00:26:25.030935 containerd[1464]: time="2025-09-13T00:26:25.030668143Z" level=error msg="StopPodSandbox for \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\" failed" error="failed to destroy network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.031786 kubelet[2592]: E0913 00:26:25.031589 2592 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:25.031786 kubelet[2592]: E0913 00:26:25.031640 2592 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d"} Sep 13 00:26:25.031786 kubelet[2592]: E0913 00:26:25.031719 2592 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c497ee4f-d0c8-467d-9216-5d2e88dee8c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:26:25.031786 kubelet[2592]: E0913 00:26:25.031771 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c497ee4f-d0c8-467d-9216-5d2e88dee8c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hj2qq" podUID="c497ee4f-d0c8-467d-9216-5d2e88dee8c7" Sep 13 00:26:25.050697 containerd[1464]: time="2025-09-13T00:26:25.050142533Z" level=error msg="StopPodSandbox for \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\" failed" error="failed to destroy network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.050886 kubelet[2592]: E0913 00:26:25.050704 2592 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:25.050886 kubelet[2592]: E0913 00:26:25.050797 2592 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b"} Sep 13 00:26:25.050886 kubelet[2592]: E0913 00:26:25.050846 2592 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"305f5d4c-806f-42ee-82af-c8e6eacc2f74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:26:25.051165 kubelet[2592]: E0913 00:26:25.050881 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"305f5d4c-806f-42ee-82af-c8e6eacc2f74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54854fc87c-vb86r" podUID="305f5d4c-806f-42ee-82af-c8e6eacc2f74" Sep 13 00:26:25.056581 containerd[1464]: time="2025-09-13T00:26:25.056501610Z" level=error msg="StopPodSandbox for \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\" failed" error="failed to destroy network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.056992 kubelet[2592]: E0913 00:26:25.056817 2592 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:25.056992 kubelet[2592]: E0913 00:26:25.056881 2592 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a"} Sep 13 00:26:25.056992 kubelet[2592]: E0913 00:26:25.056926 2592 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1340fb7a-e0db-4806-8c10-89545a7ba6fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:26:25.056992 kubelet[2592]: E0913 00:26:25.056965 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1340fb7a-e0db-4806-8c10-89545a7ba6fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7c65d6cfc9-94fww" podUID="1340fb7a-e0db-4806-8c10-89545a7ba6fe" Sep 13 00:26:25.059053 containerd[1464]: time="2025-09-13T00:26:25.058963705Z" level=error msg="StopPodSandbox for \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\" failed" error="failed to destroy network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.059675 kubelet[2592]: E0913 00:26:25.059443 2592 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:25.059675 kubelet[2592]: E0913 00:26:25.059506 2592 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294"} Sep 13 00:26:25.059675 kubelet[2592]: E0913 00:26:25.059568 2592 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:26:25.059675 kubelet[2592]: E0913 00:26:25.059606 2592 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f6bc978d5-r28xt" podUID="5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9" Sep 13 00:26:25.064652 containerd[1464]: time="2025-09-13T00:26:25.064597538Z" level=error msg="StopPodSandbox for \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\" failed" error="failed to destroy network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.066397 kubelet[2592]: E0913 00:26:25.064894 2592 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:25.066397 kubelet[2592]: E0913 00:26:25.064992 2592 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99"} Sep 13 00:26:25.066397 kubelet[2592]: E0913 00:26:25.065039 2592 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f1027a4-b75f-4b5f-b382-64bdc48ceda4\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:26:25.066397 kubelet[2592]: E0913 00:26:25.065072 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f1027a4-b75f-4b5f-b382-64bdc48ceda4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mnfjg" podUID="7f1027a4-b75f-4b5f-b382-64bdc48ceda4" Sep 13 00:26:25.320962 containerd[1464]: time="2025-09-13T00:26:25.320791767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h77dv,Uid:d1fcfa74-5cf5-4886-8c7c-add2cf297c71,Namespace:calico-system,Attempt:0,}" Sep 13 00:26:25.418517 containerd[1464]: time="2025-09-13T00:26:25.418443692Z" level=error msg="Failed to destroy network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.421272 containerd[1464]: time="2025-09-13T00:26:25.421155520Z" level=error msg="encountered an error cleaning up failed sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.421272 containerd[1464]: time="2025-09-13T00:26:25.421228270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h77dv,Uid:d1fcfa74-5cf5-4886-8c7c-add2cf297c71,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.423795 kubelet[2592]: E0913 00:26:25.421596 2592 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.423795 kubelet[2592]: E0913 00:26:25.421703 2592 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-h77dv" Sep 13 00:26:25.423795 kubelet[2592]: E0913 00:26:25.421798 2592 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-7988f88666-h77dv" Sep 13 00:26:25.423529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90-shm.mount: Deactivated successfully. Sep 13 00:26:25.424616 kubelet[2592]: E0913 00:26:25.423621 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-h77dv_calico-system(d1fcfa74-5cf5-4886-8c7c-add2cf297c71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-h77dv_calico-system(d1fcfa74-5cf5-4886-8c7c-add2cf297c71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-h77dv" podUID="d1fcfa74-5cf5-4886-8c7c-add2cf297c71" Sep 13 00:26:25.897134 kubelet[2592]: I0913 00:26:25.895006 2592 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:25.899987 containerd[1464]: time="2025-09-13T00:26:25.899914208Z" level=info msg="StopPodSandbox for \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\"" Sep 13 00:26:25.900301 containerd[1464]: time="2025-09-13T00:26:25.900221383Z" level=info msg="Ensure that sandbox 47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90 in task-service has been cleanup successfully" Sep 13 00:26:25.953872 containerd[1464]: time="2025-09-13T00:26:25.953672126Z" level=error msg="StopPodSandbox for \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\" failed" error="failed to destroy network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:26:25.954635 kubelet[2592]: E0913 00:26:25.954349 2592 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:25.954635 kubelet[2592]: E0913 00:26:25.954453 2592 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90"} Sep 13 00:26:25.954635 kubelet[2592]: E0913 00:26:25.954515 2592 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1fcfa74-5cf5-4886-8c7c-add2cf297c71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:26:25.954635 kubelet[2592]: E0913 00:26:25.954582 2592 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1fcfa74-5cf5-4886-8c7c-add2cf297c71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-h77dv" podUID="d1fcfa74-5cf5-4886-8c7c-add2cf297c71" Sep 13 00:26:31.974694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876279724.mount: Deactivated successfully. Sep 13 00:26:32.005164 containerd[1464]: time="2025-09-13T00:26:32.005091386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:32.006532 containerd[1464]: time="2025-09-13T00:26:32.006284393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:26:32.008775 containerd[1464]: time="2025-09-13T00:26:32.007560069Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:32.010245 containerd[1464]: time="2025-09-13T00:26:32.010179656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:32.011181 containerd[1464]: time="2025-09-13T00:26:32.011134191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.121320461s" Sep 13 00:26:32.011775 containerd[1464]: time="2025-09-13T00:26:32.011187054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:26:32.034132 containerd[1464]: time="2025-09-13T00:26:32.034006253Z" level=info 
msg="CreateContainer within sandbox \"31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:26:32.059176 containerd[1464]: time="2025-09-13T00:26:32.059116627Z" level=info msg="CreateContainer within sandbox \"31e2b15ab88fceab37c3550579489030f58408b9679e448c95b1966654133ec3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7b73bd4c846bc2d9f64a8d83eecf2443738ef5fb8029b6198dd6006050dad38f\"" Sep 13 00:26:32.061419 containerd[1464]: time="2025-09-13T00:26:32.060092030Z" level=info msg="StartContainer for \"7b73bd4c846bc2d9f64a8d83eecf2443738ef5fb8029b6198dd6006050dad38f\"" Sep 13 00:26:32.108009 systemd[1]: Started cri-containerd-7b73bd4c846bc2d9f64a8d83eecf2443738ef5fb8029b6198dd6006050dad38f.scope - libcontainer container 7b73bd4c846bc2d9f64a8d83eecf2443738ef5fb8029b6198dd6006050dad38f. Sep 13 00:26:32.151556 containerd[1464]: time="2025-09-13T00:26:32.151505200Z" level=info msg="StartContainer for \"7b73bd4c846bc2d9f64a8d83eecf2443738ef5fb8029b6198dd6006050dad38f\" returns successfully" Sep 13 00:26:32.279591 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:26:32.279776 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:26:32.407774 containerd[1464]: time="2025-09-13T00:26:32.407168165Z" level=info msg="StopPodSandbox for \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\"" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.515 [INFO][3819] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.515 [INFO][3819] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" iface="eth0" netns="/var/run/netns/cni-14821be1-8fc4-d1c1-bac5-861d96bfbab4" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.515 [INFO][3819] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" iface="eth0" netns="/var/run/netns/cni-14821be1-8fc4-d1c1-bac5-861d96bfbab4" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.516 [INFO][3819] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" iface="eth0" netns="/var/run/netns/cni-14821be1-8fc4-d1c1-bac5-861d96bfbab4" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.516 [INFO][3819] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.516 [INFO][3819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.576 [INFO][3831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.577 [INFO][3831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.577 [INFO][3831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.588 [WARNING][3831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.588 [INFO][3831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.590 [INFO][3831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:32.597306 containerd[1464]: 2025-09-13 00:26:32.594 [INFO][3819] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:32.599341 containerd[1464]: time="2025-09-13T00:26:32.597904676Z" level=info msg="TearDown network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\" successfully" Sep 13 00:26:32.599341 containerd[1464]: time="2025-09-13T00:26:32.597946766Z" level=info msg="StopPodSandbox for \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\" returns successfully" Sep 13 00:26:32.721624 kubelet[2592]: I0913 00:26:32.721110 2592 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/797236c0-8833-48f0-9caa-e94747de2be8-whisker-backend-key-pair\") pod \"797236c0-8833-48f0-9caa-e94747de2be8\" (UID: \"797236c0-8833-48f0-9caa-e94747de2be8\") " Sep 13 00:26:32.721624 kubelet[2592]: I0913 00:26:32.721534 2592 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/797236c0-8833-48f0-9caa-e94747de2be8-whisker-ca-bundle\") pod \"797236c0-8833-48f0-9caa-e94747de2be8\" (UID: \"797236c0-8833-48f0-9caa-e94747de2be8\") " Sep 13 00:26:32.721624 kubelet[2592]: I0913 00:26:32.721582 2592 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dnhx\" (UniqueName: \"kubernetes.io/projected/797236c0-8833-48f0-9caa-e94747de2be8-kube-api-access-8dnhx\") pod \"797236c0-8833-48f0-9caa-e94747de2be8\" (UID: \"797236c0-8833-48f0-9caa-e94747de2be8\") " Sep 13 00:26:32.724765 kubelet[2592]: I0913 00:26:32.723440 2592 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797236c0-8833-48f0-9caa-e94747de2be8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "797236c0-8833-48f0-9caa-e94747de2be8" (UID: "797236c0-8833-48f0-9caa-e94747de2be8"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:26:32.730047 kubelet[2592]: I0913 00:26:32.730005 2592 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797236c0-8833-48f0-9caa-e94747de2be8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "797236c0-8833-48f0-9caa-e94747de2be8" (UID: "797236c0-8833-48f0-9caa-e94747de2be8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:26:32.730983 kubelet[2592]: I0913 00:26:32.730949 2592 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797236c0-8833-48f0-9caa-e94747de2be8-kube-api-access-8dnhx" (OuterVolumeSpecName: "kube-api-access-8dnhx") pod "797236c0-8833-48f0-9caa-e94747de2be8" (UID: "797236c0-8833-48f0-9caa-e94747de2be8"). InnerVolumeSpecName "kube-api-access-8dnhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:26:32.822709 kubelet[2592]: I0913 00:26:32.822648 2592 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/797236c0-8833-48f0-9caa-e94747de2be8-whisker-ca-bundle\") on node \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" DevicePath \"\"" Sep 13 00:26:32.822709 kubelet[2592]: I0913 00:26:32.822693 2592 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dnhx\" (UniqueName: \"kubernetes.io/projected/797236c0-8833-48f0-9caa-e94747de2be8-kube-api-access-8dnhx\") on node \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" DevicePath \"\"" Sep 13 00:26:32.822709 kubelet[2592]: I0913 00:26:32.822715 2592 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/797236c0-8833-48f0-9caa-e94747de2be8-whisker-backend-key-pair\") on node \"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf\" DevicePath \"\"" Sep 13 00:26:32.921292 systemd[1]: 
Removed slice kubepods-besteffort-pod797236c0_8833_48f0_9caa_e94747de2be8.slice - libcontainer container kubepods-besteffort-pod797236c0_8833_48f0_9caa_e94747de2be8.slice. Sep 13 00:26:32.941281 kubelet[2592]: I0913 00:26:32.940521 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9tthx" podStartSLOduration=2.248700356 podStartE2EDuration="19.940500051s" podCreationTimestamp="2025-09-13 00:26:13 +0000 UTC" firstStartedPulling="2025-09-13 00:26:14.320657258 +0000 UTC m=+21.856933937" lastFinishedPulling="2025-09-13 00:26:32.01245696 +0000 UTC m=+39.548733632" observedRunningTime="2025-09-13 00:26:32.939844943 +0000 UTC m=+40.476121627" watchObservedRunningTime="2025-09-13 00:26:32.940500051 +0000 UTC m=+40.476776771" Sep 13 00:26:32.975265 systemd[1]: run-netns-cni\x2d14821be1\x2d8fc4\x2dd1c1\x2dbac5\x2d861d96bfbab4.mount: Deactivated successfully. Sep 13 00:26:32.976072 systemd[1]: var-lib-kubelet-pods-797236c0\x2d8833\x2d48f0\x2d9caa\x2de94747de2be8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8dnhx.mount: Deactivated successfully. Sep 13 00:26:32.976329 systemd[1]: var-lib-kubelet-pods-797236c0\x2d8833\x2d48f0\x2d9caa\x2de94747de2be8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:26:33.022025 systemd[1]: Created slice kubepods-besteffort-poddd0c9423_9287_4549_b78b_e616c61e289c.slice - libcontainer container kubepods-besteffort-poddd0c9423_9287_4549_b78b_e616c61e289c.slice. 
Sep 13 00:26:33.124820 kubelet[2592]: I0913 00:26:33.124759 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd0c9423-9287-4549-b78b-e616c61e289c-whisker-backend-key-pair\") pod \"whisker-6cf77cb774-f7fj2\" (UID: \"dd0c9423-9287-4549-b78b-e616c61e289c\") " pod="calico-system/whisker-6cf77cb774-f7fj2" Sep 13 00:26:33.125075 kubelet[2592]: I0913 00:26:33.124838 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xd5l\" (UniqueName: \"kubernetes.io/projected/dd0c9423-9287-4549-b78b-e616c61e289c-kube-api-access-2xd5l\") pod \"whisker-6cf77cb774-f7fj2\" (UID: \"dd0c9423-9287-4549-b78b-e616c61e289c\") " pod="calico-system/whisker-6cf77cb774-f7fj2" Sep 13 00:26:33.125075 kubelet[2592]: I0913 00:26:33.124875 2592 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd0c9423-9287-4549-b78b-e616c61e289c-whisker-ca-bundle\") pod \"whisker-6cf77cb774-f7fj2\" (UID: \"dd0c9423-9287-4549-b78b-e616c61e289c\") " pod="calico-system/whisker-6cf77cb774-f7fj2" Sep 13 00:26:33.327807 containerd[1464]: time="2025-09-13T00:26:33.327590352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf77cb774-f7fj2,Uid:dd0c9423-9287-4549-b78b-e616c61e289c,Namespace:calico-system,Attempt:0,}" Sep 13 00:26:33.491802 systemd-networkd[1371]: cali678a2011879: Link UP Sep 13 00:26:33.493314 systemd-networkd[1371]: cali678a2011879: Gained carrier Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.385 [INFO][3855] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.399 [INFO][3855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0 whisker-6cf77cb774- calico-system dd0c9423-9287-4549-b78b-e616c61e289c 912 0 2025-09-13 00:26:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cf77cb774 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf whisker-6cf77cb774-f7fj2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali678a2011879 [] [] }} ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Namespace="calico-system" Pod="whisker-6cf77cb774-f7fj2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.399 [INFO][3855] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Namespace="calico-system" Pod="whisker-6cf77cb774-f7fj2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.435 [INFO][3866] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" HandleID="k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.435 [INFO][3866] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" HandleID="k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" 
Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", "pod":"whisker-6cf77cb774-f7fj2", "timestamp":"2025-09-13 00:26:33.435423122 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.435 [INFO][3866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.435 [INFO][3866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.435 [INFO][3866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.445 [INFO][3866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.450 [INFO][3866] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.455 [INFO][3866] ipam/ipam.go 511: Trying affinity for 192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.458 [INFO][3866] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.192/26 
host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.461 [INFO][3866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.461 [INFO][3866] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.463 [INFO][3866] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579 Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.468 [INFO][3866] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.477 [INFO][3866] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.193/26] block=192.168.81.192/26 handle="k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.477 [INFO][3866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.193/26] handle="k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.477 [INFO][3866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:26:33.515517 containerd[1464]: 2025-09-13 00:26:33.477 [INFO][3866] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.193/26] IPv6=[] ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" HandleID="k8s-pod-network.73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" Sep 13 00:26:33.517277 containerd[1464]: 2025-09-13 00:26:33.479 [INFO][3855] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Namespace="calico-system" Pod="whisker-6cf77cb774-f7fj2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0", GenerateName:"whisker-6cf77cb774-", Namespace:"calico-system", SelfLink:"", UID:"dd0c9423-9287-4549-b78b-e616c61e289c", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cf77cb774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"", Pod:"whisker-6cf77cb774-f7fj2", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.81.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali678a2011879", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:33.517277 containerd[1464]: 2025-09-13 00:26:33.479 [INFO][3855] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.193/32] ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Namespace="calico-system" Pod="whisker-6cf77cb774-f7fj2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" Sep 13 00:26:33.517277 containerd[1464]: 2025-09-13 00:26:33.480 [INFO][3855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali678a2011879 ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Namespace="calico-system" Pod="whisker-6cf77cb774-f7fj2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" Sep 13 00:26:33.517277 containerd[1464]: 2025-09-13 00:26:33.493 [INFO][3855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Namespace="calico-system" Pod="whisker-6cf77cb774-f7fj2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" Sep 13 00:26:33.517277 containerd[1464]: 2025-09-13 00:26:33.494 [INFO][3855] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Namespace="calico-system" Pod="whisker-6cf77cb774-f7fj2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0", GenerateName:"whisker-6cf77cb774-", Namespace:"calico-system", SelfLink:"", UID:"dd0c9423-9287-4549-b78b-e616c61e289c", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cf77cb774", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579", Pod:"whisker-6cf77cb774-f7fj2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali678a2011879", MAC:"62:0c:9f:7c:72:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:33.517277 containerd[1464]: 2025-09-13 00:26:33.510 [INFO][3855] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579" Namespace="calico-system" Pod="whisker-6cf77cb774-f7fj2" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--6cf77cb774--f7fj2-eth0" Sep 13 00:26:33.551825 containerd[1464]: 
time="2025-09-13T00:26:33.551578698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:33.551825 containerd[1464]: time="2025-09-13T00:26:33.551650178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:33.551825 containerd[1464]: time="2025-09-13T00:26:33.551687822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:33.553834 containerd[1464]: time="2025-09-13T00:26:33.552837563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:33.577989 systemd[1]: Started cri-containerd-73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579.scope - libcontainer container 73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579. 
Sep 13 00:26:33.636686 containerd[1464]: time="2025-09-13T00:26:33.636353841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf77cb774-f7fj2,Uid:dd0c9423-9287-4549-b78b-e616c61e289c,Namespace:calico-system,Attempt:0,} returns sandbox id \"73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579\"" Sep 13 00:26:33.639828 containerd[1464]: time="2025-09-13T00:26:33.639506479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:26:34.624895 containerd[1464]: time="2025-09-13T00:26:34.624820930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:34.626245 containerd[1464]: time="2025-09-13T00:26:34.626170254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:26:34.627574 containerd[1464]: time="2025-09-13T00:26:34.627511574Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:34.630903 containerd[1464]: time="2025-09-13T00:26:34.630840333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:34.632179 containerd[1464]: time="2025-09-13T00:26:34.631975737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 992.422888ms" Sep 13 00:26:34.632179 containerd[1464]: time="2025-09-13T00:26:34.632029326Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:26:34.636266 containerd[1464]: time="2025-09-13T00:26:34.636217899Z" level=info msg="CreateContainer within sandbox \"73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:26:34.659790 containerd[1464]: time="2025-09-13T00:26:34.659711170Z" level=info msg="CreateContainer within sandbox \"73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f06ecf56f606449b37912ba8dc9cd3240360746eb71a7d9a6a0d6bc07c52976a\"" Sep 13 00:26:34.660640 containerd[1464]: time="2025-09-13T00:26:34.660601599Z" level=info msg="StartContainer for \"f06ecf56f606449b37912ba8dc9cd3240360746eb71a7d9a6a0d6bc07c52976a\"" Sep 13 00:26:34.678323 kubelet[2592]: I0913 00:26:34.677733 2592 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797236c0-8833-48f0-9caa-e94747de2be8" path="/var/lib/kubelet/pods/797236c0-8833-48f0-9caa-e94747de2be8/volumes" Sep 13 00:26:34.720950 systemd[1]: Started cri-containerd-f06ecf56f606449b37912ba8dc9cd3240360746eb71a7d9a6a0d6bc07c52976a.scope - libcontainer container f06ecf56f606449b37912ba8dc9cd3240360746eb71a7d9a6a0d6bc07c52976a. 
Sep 13 00:26:34.785654 containerd[1464]: time="2025-09-13T00:26:34.785488692Z" level=info msg="StartContainer for \"f06ecf56f606449b37912ba8dc9cd3240360746eb71a7d9a6a0d6bc07c52976a\" returns successfully" Sep 13 00:26:34.788012 containerd[1464]: time="2025-09-13T00:26:34.787829531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:26:35.267948 systemd-networkd[1371]: cali678a2011879: Gained IPv6LL Sep 13 00:26:35.573680 kubelet[2592]: I0913 00:26:35.573529 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:26:35.679944 containerd[1464]: time="2025-09-13T00:26:35.679892509Z" level=info msg="StopPodSandbox for \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\"" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.835 [INFO][4085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.836 [INFO][4085] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" iface="eth0" netns="/var/run/netns/cni-2a111f81-6710-59de-7ada-97184a35a5a7" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.838 [INFO][4085] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" iface="eth0" netns="/var/run/netns/cni-2a111f81-6710-59de-7ada-97184a35a5a7" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.838 [INFO][4085] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" iface="eth0" netns="/var/run/netns/cni-2a111f81-6710-59de-7ada-97184a35a5a7" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.838 [INFO][4085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.838 [INFO][4085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.887 [INFO][4092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.888 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.888 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.898 [WARNING][4092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.898 [INFO][4092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.901 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:35.906526 containerd[1464]: 2025-09-13 00:26:35.903 [INFO][4085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:35.911761 containerd[1464]: time="2025-09-13T00:26:35.911007884Z" level=info msg="TearDown network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\" successfully" Sep 13 00:26:35.911761 containerd[1464]: time="2025-09-13T00:26:35.911051976Z" level=info msg="StopPodSandbox for \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\" returns successfully" Sep 13 00:26:35.916444 containerd[1464]: time="2025-09-13T00:26:35.916404326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54854fc87c-64p78,Uid:e0ab9580-fead-4e9f-abaa-e8f12c4c312a,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:26:35.916723 systemd[1]: run-netns-cni\x2d2a111f81\x2d6710\x2d59de\x2d7ada\x2d97184a35a5a7.mount: Deactivated successfully. 
Sep 13 00:26:36.127564 systemd-networkd[1371]: cali5c5353a232d: Link UP Sep 13 00:26:36.128614 systemd-networkd[1371]: cali5c5353a232d: Gained carrier Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.001 [INFO][4100] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.023 [INFO][4100] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0 calico-apiserver-54854fc87c- calico-apiserver e0ab9580-fead-4e9f-abaa-e8f12c4c312a 935 0 2025-09-13 00:26:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54854fc87c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf calico-apiserver-54854fc87c-64p78 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5c5353a232d [] [] }} ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-64p78" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.023 [INFO][4100] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-64p78" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.063 [INFO][4113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" HandleID="k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.063 [INFO][4113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" HandleID="k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", "pod":"calico-apiserver-54854fc87c-64p78", "timestamp":"2025-09-13 00:26:36.063154145 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.063 [INFO][4113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.063 [INFO][4113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.063 [INFO][4113] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.072 [INFO][4113] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.077 [INFO][4113] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.085 [INFO][4113] ipam/ipam.go 511: Trying affinity for 192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.088 [INFO][4113] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.091 [INFO][4113] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.091 [INFO][4113] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.093 [INFO][4113] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.101 [INFO][4113] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.192/26 
handle="k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.113 [INFO][4113] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.194/26] block=192.168.81.192/26 handle="k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.113 [INFO][4113] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.194/26] handle="k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.113 [INFO][4113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:36.158394 containerd[1464]: 2025-09-13 00:26:36.113 [INFO][4113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.194/26] IPv6=[] ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" HandleID="k8s-pod-network.cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:36.160502 containerd[1464]: 2025-09-13 00:26:36.121 [INFO][4100] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-64p78" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0", GenerateName:"calico-apiserver-54854fc87c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0ab9580-fead-4e9f-abaa-e8f12c4c312a", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54854fc87c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"", Pod:"calico-apiserver-54854fc87c-64p78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c5353a232d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:36.160502 containerd[1464]: 2025-09-13 00:26:36.121 [INFO][4100] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.194/32] ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-64p78" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:36.160502 containerd[1464]: 2025-09-13 00:26:36.122 [INFO][4100] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c5353a232d ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-64p78" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:36.160502 containerd[1464]: 2025-09-13 00:26:36.128 [INFO][4100] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-64p78" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:36.160502 containerd[1464]: 2025-09-13 00:26:36.129 [INFO][4100] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-64p78" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0", GenerateName:"calico-apiserver-54854fc87c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0ab9580-fead-4e9f-abaa-e8f12c4c312a", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54854fc87c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace", Pod:"calico-apiserver-54854fc87c-64p78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c5353a232d", MAC:"ce:18:5b:fe:9b:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:36.160502 containerd[1464]: 2025-09-13 00:26:36.155 [INFO][4100] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-64p78" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:36.213562 containerd[1464]: time="2025-09-13T00:26:36.213340750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:36.213562 containerd[1464]: time="2025-09-13T00:26:36.213403320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:36.214387 containerd[1464]: time="2025-09-13T00:26:36.213451125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:36.214597 containerd[1464]: time="2025-09-13T00:26:36.214278656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:36.276989 systemd[1]: Started cri-containerd-cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace.scope - libcontainer container cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace. Sep 13 00:26:36.481717 containerd[1464]: time="2025-09-13T00:26:36.481079469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54854fc87c-64p78,Uid:e0ab9580-fead-4e9f-abaa-e8f12c4c312a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace\"" Sep 13 00:26:36.676297 containerd[1464]: time="2025-09-13T00:26:36.675640098Z" level=info msg="StopPodSandbox for \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\"" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.847 [INFO][4204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.849 [INFO][4204] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" iface="eth0" netns="/var/run/netns/cni-be25cadb-e8e5-b801-7385-8f7133110f74" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.851 [INFO][4204] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" iface="eth0" netns="/var/run/netns/cni-be25cadb-e8e5-b801-7385-8f7133110f74" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.851 [INFO][4204] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" iface="eth0" netns="/var/run/netns/cni-be25cadb-e8e5-b801-7385-8f7133110f74" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.852 [INFO][4204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.852 [INFO][4204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.947 [INFO][4216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.948 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.948 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.963 [WARNING][4216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.963 [INFO][4216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.967 [INFO][4216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:36.978778 containerd[1464]: 2025-09-13 00:26:36.972 [INFO][4204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:36.988934 containerd[1464]: time="2025-09-13T00:26:36.988773163Z" level=info msg="TearDown network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\" successfully" Sep 13 00:26:36.992043 containerd[1464]: time="2025-09-13T00:26:36.991899973Z" level=info msg="StopPodSandbox for \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\" returns successfully" Sep 13 00:26:36.996540 kernel: bpftool[4230]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:26:36.995430 systemd[1]: run-netns-cni\x2dbe25cadb\x2de8e5\x2db801\x2d7385\x2d8f7133110f74.mount: Deactivated successfully. 
Sep 13 00:26:37.002225 containerd[1464]: time="2025-09-13T00:26:37.002180377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnfjg,Uid:7f1027a4-b75f-4b5f-b382-64bdc48ceda4,Namespace:calico-system,Attempt:1,}" Sep 13 00:26:37.424015 systemd-networkd[1371]: cali03a0f67b447: Link UP Sep 13 00:26:37.424472 systemd-networkd[1371]: cali03a0f67b447: Gained carrier Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.212 [INFO][4232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0 csi-node-driver- calico-system 7f1027a4-b75f-4b5f-b382-64bdc48ceda4 942 0 2025-09-13 00:26:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf csi-node-driver-mnfjg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali03a0f67b447 [] [] }} ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Namespace="calico-system" Pod="csi-node-driver-mnfjg" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.212 [INFO][4232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Namespace="calico-system" Pod="csi-node-driver-mnfjg" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.330 [INFO][4260] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" HandleID="k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.333 [INFO][4260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" HandleID="k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000369e30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", "pod":"csi-node-driver-mnfjg", "timestamp":"2025-09-13 00:26:37.330605757 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.333 [INFO][4260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.333 [INFO][4260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.333 [INFO][4260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.352 [INFO][4260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.360 [INFO][4260] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.377 [INFO][4260] ipam/ipam.go 511: Trying affinity for 192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.380 [INFO][4260] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.384 [INFO][4260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.385 [INFO][4260] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.387 [INFO][4260] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9 Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.394 [INFO][4260] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.192/26 
handle="k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.404 [INFO][4260] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.195/26] block=192.168.81.192/26 handle="k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.404 [INFO][4260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.195/26] handle="k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.405 [INFO][4260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:37.459632 containerd[1464]: 2025-09-13 00:26:37.405 [INFO][4260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.195/26] IPv6=[] ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" HandleID="k8s-pod-network.f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:37.460877 containerd[1464]: 2025-09-13 00:26:37.415 [INFO][4232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Namespace="calico-system" Pod="csi-node-driver-mnfjg" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"7f1027a4-b75f-4b5f-b382-64bdc48ceda4", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"", Pod:"csi-node-driver-mnfjg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03a0f67b447", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:37.460877 containerd[1464]: 2025-09-13 00:26:37.415 [INFO][4232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.195/32] ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Namespace="calico-system" Pod="csi-node-driver-mnfjg" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:37.460877 containerd[1464]: 2025-09-13 00:26:37.416 [INFO][4232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03a0f67b447 ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Namespace="calico-system" Pod="csi-node-driver-mnfjg" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:37.460877 containerd[1464]: 2025-09-13 00:26:37.423 [INFO][4232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Namespace="calico-system" Pod="csi-node-driver-mnfjg" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:37.460877 containerd[1464]: 2025-09-13 00:26:37.425 [INFO][4232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Namespace="calico-system" Pod="csi-node-driver-mnfjg" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f1027a4-b75f-4b5f-b382-64bdc48ceda4", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9", Pod:"csi-node-driver-mnfjg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03a0f67b447", MAC:"a6:b2:a6:83:79:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:37.460877 containerd[1464]: 2025-09-13 00:26:37.450 [INFO][4232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9" Namespace="calico-system" Pod="csi-node-driver-mnfjg" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:37.548313 containerd[1464]: time="2025-09-13T00:26:37.547957777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:37.548313 containerd[1464]: time="2025-09-13T00:26:37.548050686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:37.548313 containerd[1464]: time="2025-09-13T00:26:37.548078655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:37.548313 containerd[1464]: time="2025-09-13T00:26:37.548206791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:37.623038 systemd[1]: Started cri-containerd-f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9.scope - libcontainer container f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9. Sep 13 00:26:37.667840 containerd[1464]: time="2025-09-13T00:26:37.666626959Z" level=info msg="StopPodSandbox for \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\"" Sep 13 00:26:37.676680 containerd[1464]: time="2025-09-13T00:26:37.676436106Z" level=info msg="StopPodSandbox for \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\"" Sep 13 00:26:37.814567 containerd[1464]: time="2025-09-13T00:26:37.814147275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnfjg,Uid:7f1027a4-b75f-4b5f-b382-64bdc48ceda4,Namespace:calico-system,Attempt:1,} returns sandbox id \"f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9\"" Sep 13 00:26:37.829259 systemd-networkd[1371]: cali5c5353a232d: Gained IPv6LL Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:37.877 [INFO][4322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:37.886 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" iface="eth0" netns="/var/run/netns/cni-b5f035ab-d549-4818-9511-255fba17c152" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:37.890 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" iface="eth0" netns="/var/run/netns/cni-b5f035ab-d549-4818-9511-255fba17c152" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:37.890 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" iface="eth0" netns="/var/run/netns/cni-b5f035ab-d549-4818-9511-255fba17c152" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:37.890 [INFO][4322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:37.890 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:38.013 [INFO][4360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:38.014 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:38.014 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:38.035 [WARNING][4360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:38.035 [INFO][4360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:38.040 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:38.050605 containerd[1464]: 2025-09-13 00:26:38.043 [INFO][4322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:38.050605 containerd[1464]: time="2025-09-13T00:26:38.050256837Z" level=info msg="TearDown network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\" successfully" Sep 13 00:26:38.050605 containerd[1464]: time="2025-09-13T00:26:38.050292191Z" level=info msg="StopPodSandbox for \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\" returns successfully" Sep 13 00:26:38.064684 containerd[1464]: time="2025-09-13T00:26:38.058894797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6bc978d5-r28xt,Uid:5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9,Namespace:calico-system,Attempt:1,}" Sep 13 00:26:38.063641 systemd[1]: run-netns-cni\x2db5f035ab\x2dd549\x2d4818\x2d9511\x2d255fba17c152.mount: Deactivated successfully. 
Sep 13 00:26:38.122267 systemd-networkd[1371]: vxlan.calico: Link UP Sep 13 00:26:38.122279 systemd-networkd[1371]: vxlan.calico: Gained carrier Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:37.883 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:37.883 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" iface="eth0" netns="/var/run/netns/cni-780b4605-18ba-8a8c-3f78-f3c37cef4b67" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:37.883 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" iface="eth0" netns="/var/run/netns/cni-780b4605-18ba-8a8c-3f78-f3c37cef4b67" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:37.898 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" iface="eth0" netns="/var/run/netns/cni-780b4605-18ba-8a8c-3f78-f3c37cef4b67" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:37.898 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:37.898 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:38.080 [INFO][4362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:38.080 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:38.080 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:38.118 [WARNING][4362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:38.118 [INFO][4362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:38.121 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:38.134477 containerd[1464]: 2025-09-13 00:26:38.127 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:38.137733 containerd[1464]: time="2025-09-13T00:26:38.137631437Z" level=info msg="TearDown network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\" successfully" Sep 13 00:26:38.137733 containerd[1464]: time="2025-09-13T00:26:38.137684152Z" level=info msg="StopPodSandbox for \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\" returns successfully" Sep 13 00:26:38.147884 containerd[1464]: time="2025-09-13T00:26:38.146939876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-94fww,Uid:1340fb7a-e0db-4806-8c10-89545a7ba6fe,Namespace:kube-system,Attempt:1,}" Sep 13 00:26:38.147528 systemd[1]: run-netns-cni\x2d780b4605\x2d18ba\x2d8a8c\x2d3f78\x2df3c37cef4b67.mount: Deactivated successfully. 
Sep 13 00:26:38.546382 systemd-networkd[1371]: cali175ba461d46: Link UP Sep 13 00:26:38.549256 systemd-networkd[1371]: cali175ba461d46: Gained carrier Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.305 [INFO][4398] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0 coredns-7c65d6cfc9- kube-system 1340fb7a-e0db-4806-8c10-89545a7ba6fe 953 0 2025-09-13 00:25:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf coredns-7c65d6cfc9-94fww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali175ba461d46 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-94fww" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.306 [INFO][4398] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-94fww" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.448 [INFO][4424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" HandleID="k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" 
Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.451 [INFO][4424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" HandleID="k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f790), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", "pod":"coredns-7c65d6cfc9-94fww", "timestamp":"2025-09-13 00:26:38.448941784 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.453 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.453 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.453 [INFO][4424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.472 [INFO][4424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.479 [INFO][4424] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.492 [INFO][4424] ipam/ipam.go 511: Trying affinity for 192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.495 [INFO][4424] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.500 [INFO][4424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.502 [INFO][4424] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.506 [INFO][4424] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.512 [INFO][4424] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.192/26 
handle="k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.524 [INFO][4424] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.196/26] block=192.168.81.192/26 handle="k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.524 [INFO][4424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.196/26] handle="k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.524 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:38.591770 containerd[1464]: 2025-09-13 00:26:38.524 [INFO][4424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.196/26] IPv6=[] ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" HandleID="k8s-pod-network.14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.592916 containerd[1464]: 2025-09-13 00:26:38.535 [INFO][4398] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-94fww" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0", 
GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1340fb7a-e0db-4806-8c10-89545a7ba6fe", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"", Pod:"coredns-7c65d6cfc9-94fww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali175ba461d46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:38.592916 containerd[1464]: 2025-09-13 00:26:38.535 [INFO][4398] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.196/32] ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-94fww" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.592916 containerd[1464]: 2025-09-13 00:26:38.535 [INFO][4398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali175ba461d46 ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-94fww" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.592916 containerd[1464]: 2025-09-13 00:26:38.554 [INFO][4398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-94fww" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.592916 containerd[1464]: 2025-09-13 00:26:38.556 [INFO][4398] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-94fww" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1340fb7a-e0db-4806-8c10-89545a7ba6fe", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d", Pod:"coredns-7c65d6cfc9-94fww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali175ba461d46", MAC:"8e:71:91:cf:5a:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:38.592916 containerd[1464]: 2025-09-13 00:26:38.579 [INFO][4398] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-94fww" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:38.643213 containerd[1464]: time="2025-09-13T00:26:38.641862823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:38.645492 containerd[1464]: 
time="2025-09-13T00:26:38.645427391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:26:38.648773 containerd[1464]: time="2025-09-13T00:26:38.647338896Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:38.655467 containerd[1464]: time="2025-09-13T00:26:38.655243871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:38.657786 containerd[1464]: time="2025-09-13T00:26:38.655563789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:38.657786 containerd[1464]: time="2025-09-13T00:26:38.657544873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:38.657786 containerd[1464]: time="2025-09-13T00:26:38.657572772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:38.658711 containerd[1464]: time="2025-09-13T00:26:38.656477628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.868491855s" Sep 13 00:26:38.658977 containerd[1464]: time="2025-09-13T00:26:38.658854505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:26:38.659614 containerd[1464]: time="2025-09-13T00:26:38.659334991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:38.664139 containerd[1464]: time="2025-09-13T00:26:38.664085877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:26:38.669996 containerd[1464]: time="2025-09-13T00:26:38.664410812Z" level=info msg="CreateContainer within sandbox \"73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:26:38.669996 containerd[1464]: time="2025-09-13T00:26:38.668362312Z" level=info msg="StopPodSandbox for \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\"" Sep 13 00:26:38.676577 systemd-networkd[1371]: cali28625e1eea8: Link UP Sep 13 00:26:38.677007 systemd-networkd[1371]: cali28625e1eea8: Gained carrier Sep 13 00:26:38.682105 containerd[1464]: time="2025-09-13T00:26:38.681973067Z" level=info msg="StopPodSandbox for \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\"" Sep 13 00:26:38.696171 containerd[1464]: 
time="2025-09-13T00:26:38.689876895Z" level=info msg="StopPodSandbox for \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\"" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.372 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0 calico-kube-controllers-7f6bc978d5- calico-system 5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9 952 0 2025-09-13 00:26:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f6bc978d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf calico-kube-controllers-7f6bc978d5-r28xt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali28625e1eea8 [] [] }} ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Namespace="calico-system" Pod="calico-kube-controllers-7f6bc978d5-r28xt" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.373 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Namespace="calico-system" Pod="calico-kube-controllers-7f6bc978d5-r28xt" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.505 [INFO][4429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" 
HandleID="k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.507 [INFO][4429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" HandleID="k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", "pod":"calico-kube-controllers-7f6bc978d5-r28xt", "timestamp":"2025-09-13 00:26:38.505167661 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.507 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.524 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.524 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.572 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.591 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.606 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.610 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.618 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.619 [INFO][4429] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.626 [INFO][4429] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.639 [INFO][4429] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.192/26 
handle="k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.656 [INFO][4429] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.197/26] block=192.168.81.192/26 handle="k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.657 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.197/26] handle="k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.658 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:38.750016 containerd[1464]: 2025-09-13 00:26:38.658 [INFO][4429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.197/26] IPv6=[] ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" HandleID="k8s-pod-network.bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.751682 containerd[1464]: 2025-09-13 00:26:38.666 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Namespace="calico-system" Pod="calico-kube-controllers-7f6bc978d5-r28xt" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0", GenerateName:"calico-kube-controllers-7f6bc978d5-", Namespace:"calico-system", SelfLink:"", UID:"5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6bc978d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"", Pod:"calico-kube-controllers-7f6bc978d5-r28xt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28625e1eea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:38.751682 containerd[1464]: 2025-09-13 00:26:38.667 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.197/32] ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Namespace="calico-system" Pod="calico-kube-controllers-7f6bc978d5-r28xt" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.751682 containerd[1464]: 2025-09-13 00:26:38.668 
[INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28625e1eea8 ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Namespace="calico-system" Pod="calico-kube-controllers-7f6bc978d5-r28xt" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.751682 containerd[1464]: 2025-09-13 00:26:38.676 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Namespace="calico-system" Pod="calico-kube-controllers-7f6bc978d5-r28xt" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.751682 containerd[1464]: 2025-09-13 00:26:38.681 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Namespace="calico-system" Pod="calico-kube-controllers-7f6bc978d5-r28xt" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0", GenerateName:"calico-kube-controllers-7f6bc978d5-", Namespace:"calico-system", SelfLink:"", UID:"5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6bc978d5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e", Pod:"calico-kube-controllers-7f6bc978d5-r28xt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28625e1eea8", MAC:"82:44:2f:78:53:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:38.751682 containerd[1464]: 2025-09-13 00:26:38.714 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e" Namespace="calico-system" Pod="calico-kube-controllers-7f6bc978d5-r28xt" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:38.765573 containerd[1464]: time="2025-09-13T00:26:38.764592621Z" level=info msg="CreateContainer within sandbox \"73811a68938aa66ca92c63cc645b544c553e440316ead954d817afed527c5579\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"96f0b2adf2f546c080fbec6a8e924d9d077b53e264d9114ed9d5322fab657a05\"" Sep 13 00:26:38.785040 containerd[1464]: time="2025-09-13T00:26:38.784994207Z" level=info msg="StartContainer for \"96f0b2adf2f546c080fbec6a8e924d9d077b53e264d9114ed9d5322fab657a05\"" Sep 13 00:26:38.800886 systemd[1]: Started 
cri-containerd-14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d.scope - libcontainer container 14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d. Sep 13 00:26:38.936131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019992592.mount: Deactivated successfully. Sep 13 00:26:39.086954 containerd[1464]: time="2025-09-13T00:26:39.084705523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:39.086954 containerd[1464]: time="2025-09-13T00:26:39.084809137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:39.086954 containerd[1464]: time="2025-09-13T00:26:39.084829338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:39.086954 containerd[1464]: time="2025-09-13T00:26:39.084948284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:39.088511 systemd[1]: Started cri-containerd-96f0b2adf2f546c080fbec6a8e924d9d077b53e264d9114ed9d5322fab657a05.scope - libcontainer container 96f0b2adf2f546c080fbec6a8e924d9d077b53e264d9114ed9d5322fab657a05. 
Sep 13 00:26:39.105139 containerd[1464]: time="2025-09-13T00:26:39.101523825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-94fww,Uid:1340fb7a-e0db-4806-8c10-89545a7ba6fe,Namespace:kube-system,Attempt:1,} returns sandbox id \"14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d\"" Sep 13 00:26:39.129811 containerd[1464]: time="2025-09-13T00:26:39.129524566Z" level=info msg="CreateContainer within sandbox \"14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:38.964 [INFO][4500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:38.966 [INFO][4500] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" iface="eth0" netns="/var/run/netns/cni-0bccead2-b88a-a596-d0c0-4cf6b5fc1981" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:38.966 [INFO][4500] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" iface="eth0" netns="/var/run/netns/cni-0bccead2-b88a-a596-d0c0-4cf6b5fc1981" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:38.970 [INFO][4500] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" iface="eth0" netns="/var/run/netns/cni-0bccead2-b88a-a596-d0c0-4cf6b5fc1981" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:38.970 [INFO][4500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:38.971 [INFO][4500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:39.147 [INFO][4551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:39.148 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:39.148 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:39.164 [WARNING][4551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:39.164 [INFO][4551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:39.170 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:39.201842 containerd[1464]: 2025-09-13 00:26:39.178 [INFO][4500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:39.203077 containerd[1464]: time="2025-09-13T00:26:39.202920449Z" level=info msg="TearDown network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\" successfully" Sep 13 00:26:39.203077 containerd[1464]: time="2025-09-13T00:26:39.202960356Z" level=info msg="StopPodSandbox for \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\" returns successfully" Sep 13 00:26:39.209463 containerd[1464]: time="2025-09-13T00:26:39.209372996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54854fc87c-vb86r,Uid:305f5d4c-806f-42ee-82af-c8e6eacc2f74,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:26:39.222979 systemd[1]: Started cri-containerd-bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e.scope - libcontainer container bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e. 
Sep 13 00:26:39.226098 containerd[1464]: time="2025-09-13T00:26:39.225369719Z" level=info msg="CreateContainer within sandbox \"14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50b9c1f1574b67a3a054f861f2cdf6c4b18ddb4594eaaf1c1b6e0b07730ef2e4\"" Sep 13 00:26:39.228924 containerd[1464]: time="2025-09-13T00:26:39.228884630Z" level=info msg="StartContainer for \"50b9c1f1574b67a3a054f861f2cdf6c4b18ddb4594eaaf1c1b6e0b07730ef2e4\"" Sep 13 00:26:39.300615 systemd-networkd[1371]: cali03a0f67b447: Gained IPv6LL Sep 13 00:26:39.425057 systemd[1]: Started cri-containerd-50b9c1f1574b67a3a054f861f2cdf6c4b18ddb4594eaaf1c1b6e0b07730ef2e4.scope - libcontainer container 50b9c1f1574b67a3a054f861f2cdf6c4b18ddb4594eaaf1c1b6e0b07730ef2e4. Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.067 [INFO][4507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.067 [INFO][4507] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" iface="eth0" netns="/var/run/netns/cni-27861478-d1dd-4bd2-d01c-d88b7942ce87" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.068 [INFO][4507] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" iface="eth0" netns="/var/run/netns/cni-27861478-d1dd-4bd2-d01c-d88b7942ce87" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.081 [INFO][4507] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" iface="eth0" netns="/var/run/netns/cni-27861478-d1dd-4bd2-d01c-d88b7942ce87" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.081 [INFO][4507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.081 [INFO][4507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.404 [INFO][4588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.405 [INFO][4588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.405 [INFO][4588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.444 [WARNING][4588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.444 [INFO][4588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.447 [INFO][4588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:39.457957 containerd[1464]: 2025-09-13 00:26:39.452 [INFO][4507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:39.458666 containerd[1464]: time="2025-09-13T00:26:39.458015772Z" level=info msg="TearDown network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\" successfully" Sep 13 00:26:39.458666 containerd[1464]: time="2025-09-13T00:26:39.458050798Z" level=info msg="StopPodSandbox for \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\" returns successfully" Sep 13 00:26:39.460467 containerd[1464]: time="2025-09-13T00:26:39.459900196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hj2qq,Uid:c497ee4f-d0c8-467d-9216-5d2e88dee8c7,Namespace:kube-system,Attempt:1,}" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.326 [INFO][4502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.326 [INFO][4502] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" iface="eth0" netns="/var/run/netns/cni-e12bb22d-7da3-0941-75c6-5eed1e249901" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.333 [INFO][4502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" iface="eth0" netns="/var/run/netns/cni-e12bb22d-7da3-0941-75c6-5eed1e249901" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.337 [INFO][4502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" iface="eth0" netns="/var/run/netns/cni-e12bb22d-7da3-0941-75c6-5eed1e249901" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.338 [INFO][4502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.339 [INFO][4502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.475 [INFO][4654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.477 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.477 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.518 [WARNING][4654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.519 [INFO][4654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.526 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:39.546086 containerd[1464]: 2025-09-13 00:26:39.533 [INFO][4502] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:39.552521 containerd[1464]: time="2025-09-13T00:26:39.552155799Z" level=info msg="TearDown network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\" successfully" Sep 13 00:26:39.552521 containerd[1464]: time="2025-09-13T00:26:39.552233979Z" level=info msg="StopPodSandbox for \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\" returns successfully" Sep 13 00:26:39.554759 containerd[1464]: time="2025-09-13T00:26:39.553858746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h77dv,Uid:d1fcfa74-5cf5-4886-8c7c-add2cf297c71,Namespace:calico-system,Attempt:1,}" Sep 13 00:26:39.601206 containerd[1464]: time="2025-09-13T00:26:39.601150334Z" level=info msg="StartContainer for \"50b9c1f1574b67a3a054f861f2cdf6c4b18ddb4594eaaf1c1b6e0b07730ef2e4\" returns successfully" Sep 13 00:26:39.714789 containerd[1464]: time="2025-09-13T00:26:39.713046680Z" level=info msg="StartContainer for \"96f0b2adf2f546c080fbec6a8e924d9d077b53e264d9114ed9d5322fab657a05\" returns successfully" Sep 13 00:26:39.747067 systemd-networkd[1371]: cali2d9c9355024: Link UP Sep 13 00:26:39.748100 systemd-networkd[1371]: cali2d9c9355024: Gained carrier Sep 13 00:26:39.749477 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.356 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0 calico-apiserver-54854fc87c- calico-apiserver 305f5d4c-806f-42ee-82af-c8e6eacc2f74 966 0 2025-09-13 00:26:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54854fc87c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf calico-apiserver-54854fc87c-vb86r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d9c9355024 [] [] }} ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-vb86r" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.357 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-vb86r" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.517 [INFO][4657] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" HandleID="k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.518 [INFO][4657] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" HandleID="k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", "pod":"calico-apiserver-54854fc87c-vb86r", "timestamp":"2025-09-13 00:26:39.517499156 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.518 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.527 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.527 [INFO][4657] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.560 [INFO][4657] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.575 [INFO][4657] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.630 [INFO][4657] ipam/ipam.go 511: Trying affinity for 192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.636 [INFO][4657] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.654 [INFO][4657] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 
host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.658 [INFO][4657] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.670 [INFO][4657] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.685 [INFO][4657] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.722 [INFO][4657] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.198/26] block=192.168.81.192/26 handle="k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.723 [INFO][4657] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.198/26] handle="k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.723 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:26:39.828258 containerd[1464]: 2025-09-13 00:26:39.723 [INFO][4657] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.198/26] IPv6=[] ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" HandleID="k8s-pod-network.adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.829479 containerd[1464]: 2025-09-13 00:26:39.729 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-vb86r" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0", GenerateName:"calico-apiserver-54854fc87c-", Namespace:"calico-apiserver", SelfLink:"", UID:"305f5d4c-806f-42ee-82af-c8e6eacc2f74", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54854fc87c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", 
ContainerID:"", Pod:"calico-apiserver-54854fc87c-vb86r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d9c9355024", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:39.829479 containerd[1464]: 2025-09-13 00:26:39.731 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.198/32] ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-vb86r" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.829479 containerd[1464]: 2025-09-13 00:26:39.732 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d9c9355024 ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-vb86r" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.829479 containerd[1464]: 2025-09-13 00:26:39.752 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-vb86r" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.829479 containerd[1464]: 2025-09-13 00:26:39.755 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Namespace="calico-apiserver" 
Pod="calico-apiserver-54854fc87c-vb86r" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0", GenerateName:"calico-apiserver-54854fc87c-", Namespace:"calico-apiserver", SelfLink:"", UID:"305f5d4c-806f-42ee-82af-c8e6eacc2f74", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54854fc87c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e", Pod:"calico-apiserver-54854fc87c-vb86r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d9c9355024", MAC:"56:a4:97:4d:cc:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:39.829479 containerd[1464]: 2025-09-13 00:26:39.815 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e" Namespace="calico-apiserver" Pod="calico-apiserver-54854fc87c-vb86r" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:39.878973 systemd-networkd[1371]: cali175ba461d46: Gained IPv6LL Sep 13 00:26:39.917250 containerd[1464]: time="2025-09-13T00:26:39.914343434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f6bc978d5-r28xt,Uid:5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9,Namespace:calico-system,Attempt:1,} returns sandbox id \"bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e\"" Sep 13 00:26:39.938379 systemd[1]: run-netns-cni\x2de12bb22d\x2d7da3\x2d0941\x2d75c6\x2d5eed1e249901.mount: Deactivated successfully. Sep 13 00:26:39.938534 systemd[1]: run-netns-cni\x2d27861478\x2dd1dd\x2d4bd2\x2dd01c\x2dd88b7942ce87.mount: Deactivated successfully. Sep 13 00:26:39.938640 systemd[1]: run-netns-cni\x2d0bccead2\x2db88a\x2da596\x2dd0c0\x2d4cf6b5fc1981.mount: Deactivated successfully. Sep 13 00:26:39.979269 containerd[1464]: time="2025-09-13T00:26:39.978137657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:39.979269 containerd[1464]: time="2025-09-13T00:26:39.978223152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:39.979269 containerd[1464]: time="2025-09-13T00:26:39.978248446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:39.979269 containerd[1464]: time="2025-09-13T00:26:39.978375621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:40.069152 systemd[1]: Started cri-containerd-adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e.scope - libcontainer container adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e. Sep 13 00:26:40.112530 kubelet[2592]: I0913 00:26:40.112434 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cf77cb774-f7fj2" podStartSLOduration=3.089979409 podStartE2EDuration="8.112408218s" podCreationTimestamp="2025-09-13 00:26:32 +0000 UTC" firstStartedPulling="2025-09-13 00:26:33.638905965 +0000 UTC m=+41.175182638" lastFinishedPulling="2025-09-13 00:26:38.66133477 +0000 UTC m=+46.197611447" observedRunningTime="2025-09-13 00:26:40.10485334 +0000 UTC m=+47.641130027" watchObservedRunningTime="2025-09-13 00:26:40.112408218 +0000 UTC m=+47.648684956" Sep 13 00:26:40.130156 systemd-networkd[1371]: cali11d0c11adee: Link UP Sep 13 00:26:40.133269 systemd-networkd[1371]: cali11d0c11adee: Gained carrier Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.697 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0 coredns-7c65d6cfc9- kube-system c497ee4f-d0c8-467d-9216-5d2e88dee8c7 967 0 2025-09-13 00:25:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf coredns-7c65d6cfc9-hj2qq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali11d0c11adee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hj2qq" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.697 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hj2qq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.900 [INFO][4736] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" HandleID="k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.905 [INFO][4736] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" HandleID="k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000371840), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", "pod":"coredns-7c65d6cfc9-hj2qq", "timestamp":"2025-09-13 00:26:39.900965223 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.905 [INFO][4736] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.905 [INFO][4736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.905 [INFO][4736] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.954 [INFO][4736] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.966 [INFO][4736] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.981 [INFO][4736] ipam/ipam.go 511: Trying affinity for 192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:39.987 [INFO][4736] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:40.008 [INFO][4736] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:40.009 [INFO][4736] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:40.016 [INFO][4736] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1 Sep 13 
00:26:40.212771 containerd[1464]: 2025-09-13 00:26:40.041 [INFO][4736] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.192/26 handle="k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:40.074 [INFO][4736] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.199/26] block=192.168.81.192/26 handle="k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:40.080 [INFO][4736] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.199/26] handle="k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:40.080 [INFO][4736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:26:40.212771 containerd[1464]: 2025-09-13 00:26:40.080 [INFO][4736] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.199/26] IPv6=[] ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" HandleID="k8s-pod-network.880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:40.216128 containerd[1464]: 2025-09-13 00:26:40.092 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hj2qq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c497ee4f-d0c8-467d-9216-5d2e88dee8c7", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"", Pod:"coredns-7c65d6cfc9-hj2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11d0c11adee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:40.216128 containerd[1464]: 2025-09-13 00:26:40.097 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.199/32] ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hj2qq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:40.216128 containerd[1464]: 2025-09-13 00:26:40.097 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11d0c11adee ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hj2qq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:40.216128 containerd[1464]: 2025-09-13 00:26:40.137 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hj2qq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:40.216128 containerd[1464]: 2025-09-13 00:26:40.147 [INFO][4688] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hj2qq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c497ee4f-d0c8-467d-9216-5d2e88dee8c7", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1", Pod:"coredns-7c65d6cfc9-hj2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11d0c11adee", MAC:"16:6a:d3:92:b3:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:40.216128 containerd[1464]: 2025-09-13 00:26:40.183 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hj2qq" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:40.225098 kubelet[2592]: I0913 00:26:40.224882 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-94fww" podStartSLOduration=41.224854247 podStartE2EDuration="41.224854247s" podCreationTimestamp="2025-09-13 00:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:26:40.188046377 +0000 UTC m=+47.724323063" watchObservedRunningTime="2025-09-13 00:26:40.224854247 +0000 UTC m=+47.761130933" Sep 13 00:26:40.260228 systemd-networkd[1371]: cali28625e1eea8: Gained IPv6LL Sep 13 00:26:40.301918 containerd[1464]: time="2025-09-13T00:26:40.298236313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:40.301918 containerd[1464]: time="2025-09-13T00:26:40.298477530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:40.301918 containerd[1464]: time="2025-09-13T00:26:40.298520881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:40.301918 containerd[1464]: time="2025-09-13T00:26:40.298802025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:40.355572 systemd-networkd[1371]: cali8de8767d4fc: Link UP Sep 13 00:26:40.362623 systemd-networkd[1371]: cali8de8767d4fc: Gained carrier Sep 13 00:26:40.368969 systemd[1]: Started cri-containerd-880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1.scope - libcontainer container 880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1. Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:39.836 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0 goldmane-7988f88666- calico-system d1fcfa74-5cf5-4886-8c7c-add2cf297c71 970 0 2025-09-13 00:26:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf goldmane-7988f88666-h77dv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8de8767d4fc [] [] }} ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Namespace="calico-system" Pod="goldmane-7988f88666-h77dv" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:39.836 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Namespace="calico-system" Pod="goldmane-7988f88666-h77dv" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.103 [INFO][4758] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" HandleID="k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.103 [INFO][4758] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" HandleID="k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000433b10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", "pod":"goldmane-7988f88666-h77dv", "timestamp":"2025-09-13 00:26:40.103702865 +0000 UTC"}, Hostname:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.104 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.104 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.104 [INFO][4758] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf' Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.188 [INFO][4758] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.210 [INFO][4758] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.238 [INFO][4758] ipam/ipam.go 511: Trying affinity for 192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.246 [INFO][4758] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.253 [INFO][4758] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.192/26 host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.253 [INFO][4758] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.192/26 handle="k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.264 [INFO][4758] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937 Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.278 [INFO][4758] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.192/26 
handle="k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.306 [INFO][4758] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.200/26] block=192.168.81.192/26 handle="k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.306 [INFO][4758] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.200/26] handle="k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" host="ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf" Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.306 [INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:40.418331 containerd[1464]: 2025-09-13 00:26:40.306 [INFO][4758] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.200/26] IPv6=[] ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" HandleID="k8s-pod-network.3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:40.420257 containerd[1464]: 2025-09-13 00:26:40.327 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Namespace="calico-system" Pod="goldmane-7988f88666-h77dv" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0", 
GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d1fcfa74-5cf5-4886-8c7c-add2cf297c71", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"", Pod:"goldmane-7988f88666-h77dv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8de8767d4fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:40.420257 containerd[1464]: 2025-09-13 00:26:40.330 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.200/32] ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Namespace="calico-system" Pod="goldmane-7988f88666-h77dv" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:40.420257 containerd[1464]: 2025-09-13 00:26:40.334 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8de8767d4fc ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Namespace="calico-system" Pod="goldmane-7988f88666-h77dv" 
WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:40.420257 containerd[1464]: 2025-09-13 00:26:40.386 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Namespace="calico-system" Pod="goldmane-7988f88666-h77dv" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:40.420257 containerd[1464]: 2025-09-13 00:26:40.390 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Namespace="calico-system" Pod="goldmane-7988f88666-h77dv" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d1fcfa74-5cf5-4886-8c7c-add2cf297c71", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937", Pod:"goldmane-7988f88666-h77dv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8de8767d4fc", MAC:"96:6a:ea:fa:29:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:40.420257 containerd[1464]: 2025-09-13 00:26:40.412 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937" Namespace="calico-system" Pod="goldmane-7988f88666-h77dv" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:40.481321 containerd[1464]: time="2025-09-13T00:26:40.481192329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:26:40.481523 containerd[1464]: time="2025-09-13T00:26:40.481283757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:26:40.481523 containerd[1464]: time="2025-09-13T00:26:40.481320594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:40.481523 containerd[1464]: time="2025-09-13T00:26:40.481447212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:26:40.568044 systemd[1]: Started cri-containerd-3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937.scope - libcontainer container 3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937. Sep 13 00:26:40.576134 containerd[1464]: time="2025-09-13T00:26:40.571949683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hj2qq,Uid:c497ee4f-d0c8-467d-9216-5d2e88dee8c7,Namespace:kube-system,Attempt:1,} returns sandbox id \"880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1\"" Sep 13 00:26:40.576134 containerd[1464]: time="2025-09-13T00:26:40.575947820Z" level=info msg="CreateContainer within sandbox \"880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:26:40.604586 containerd[1464]: time="2025-09-13T00:26:40.604534005Z" level=info msg="CreateContainer within sandbox \"880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bb917c39ac97a594484787dd5fd83870b3fbf8208946bee1b6bbcb57ee4fde4\"" Sep 13 00:26:40.607823 containerd[1464]: time="2025-09-13T00:26:40.606040330Z" level=info msg="StartContainer for \"8bb917c39ac97a594484787dd5fd83870b3fbf8208946bee1b6bbcb57ee4fde4\"" Sep 13 00:26:40.679301 systemd[1]: Started cri-containerd-8bb917c39ac97a594484787dd5fd83870b3fbf8208946bee1b6bbcb57ee4fde4.scope - libcontainer container 8bb917c39ac97a594484787dd5fd83870b3fbf8208946bee1b6bbcb57ee4fde4. 
Sep 13 00:26:40.761934 containerd[1464]: time="2025-09-13T00:26:40.761881708Z" level=info msg="StartContainer for \"8bb917c39ac97a594484787dd5fd83870b3fbf8208946bee1b6bbcb57ee4fde4\" returns successfully" Sep 13 00:26:40.900965 systemd-networkd[1371]: cali2d9c9355024: Gained IPv6LL Sep 13 00:26:40.904612 containerd[1464]: time="2025-09-13T00:26:40.902802736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54854fc87c-vb86r,Uid:305f5d4c-806f-42ee-82af-c8e6eacc2f74,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e\"" Sep 13 00:26:41.029953 containerd[1464]: time="2025-09-13T00:26:41.029411281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-h77dv,Uid:d1fcfa74-5cf5-4886-8c7c-add2cf297c71,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937\"" Sep 13 00:26:41.192878 kubelet[2592]: I0913 00:26:41.191021 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hj2qq" podStartSLOduration=42.190996345 podStartE2EDuration="42.190996345s" podCreationTimestamp="2025-09-13 00:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:26:41.190325415 +0000 UTC m=+48.726602100" watchObservedRunningTime="2025-09-13 00:26:41.190996345 +0000 UTC m=+48.727273031" Sep 13 00:26:41.350575 systemd-networkd[1371]: cali11d0c11adee: Gained IPv6LL Sep 13 00:26:42.372707 systemd-networkd[1371]: cali8de8767d4fc: Gained IPv6LL Sep 13 00:26:42.480681 containerd[1464]: time="2025-09-13T00:26:42.480611618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:42.483505 containerd[1464]: time="2025-09-13T00:26:42.483308879Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:26:42.484474 containerd[1464]: time="2025-09-13T00:26:42.484243301Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:42.491151 containerd[1464]: time="2025-09-13T00:26:42.491102121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:42.493431 containerd[1464]: time="2025-09-13T00:26:42.492613125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.827719571s" Sep 13 00:26:42.493431 containerd[1464]: time="2025-09-13T00:26:42.492662485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:26:42.496960 containerd[1464]: time="2025-09-13T00:26:42.496536902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:26:42.498529 containerd[1464]: time="2025-09-13T00:26:42.498475023Z" level=info msg="CreateContainer within sandbox \"cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:26:42.531514 containerd[1464]: time="2025-09-13T00:26:42.531459326Z" level=info msg="CreateContainer within sandbox \"cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e02a67de457bb29b06d7a7845f6726a597195810d8f560937cb609fce06a37a6\"" Sep 13 00:26:42.533768 containerd[1464]: time="2025-09-13T00:26:42.532379761Z" level=info msg="StartContainer for \"e02a67de457bb29b06d7a7845f6726a597195810d8f560937cb609fce06a37a6\"" Sep 13 00:26:42.594694 kubelet[2592]: I0913 00:26:42.594651 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:26:42.614147 systemd[1]: Started cri-containerd-e02a67de457bb29b06d7a7845f6726a597195810d8f560937cb609fce06a37a6.scope - libcontainer container e02a67de457bb29b06d7a7845f6726a597195810d8f560937cb609fce06a37a6. Sep 13 00:26:42.729887 containerd[1464]: time="2025-09-13T00:26:42.729734585Z" level=info msg="StartContainer for \"e02a67de457bb29b06d7a7845f6726a597195810d8f560937cb609fce06a37a6\" returns successfully" Sep 13 00:26:43.199339 kubelet[2592]: I0913 00:26:43.199252 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54854fc87c-64p78" podStartSLOduration=29.190176321 podStartE2EDuration="35.199226662s" podCreationTimestamp="2025-09-13 00:26:08 +0000 UTC" firstStartedPulling="2025-09-13 00:26:36.486313869 +0000 UTC m=+44.022590543" lastFinishedPulling="2025-09-13 00:26:42.49536421 +0000 UTC m=+50.031640884" observedRunningTime="2025-09-13 00:26:43.199125197 +0000 UTC m=+50.735401868" watchObservedRunningTime="2025-09-13 00:26:43.199226662 +0000 UTC m=+50.735503484" Sep 13 00:26:43.533213 systemd[1]: run-containerd-runc-k8s.io-7b73bd4c846bc2d9f64a8d83eecf2443738ef5fb8029b6198dd6006050dad38f-runc.ZziFvQ.mount: Deactivated successfully. 
Sep 13 00:26:43.831226 containerd[1464]: time="2025-09-13T00:26:43.831059554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:43.833656 containerd[1464]: time="2025-09-13T00:26:43.833587198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:26:43.834275 containerd[1464]: time="2025-09-13T00:26:43.834221429Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:43.841768 containerd[1464]: time="2025-09-13T00:26:43.839766115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:43.842421 containerd[1464]: time="2025-09-13T00:26:43.842362515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.345778179s" Sep 13 00:26:43.842516 containerd[1464]: time="2025-09-13T00:26:43.842427110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:26:43.844330 containerd[1464]: time="2025-09-13T00:26:43.844074699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:26:43.847272 containerd[1464]: time="2025-09-13T00:26:43.847233254Z" level=info msg="CreateContainer within sandbox \"f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:26:43.888587 containerd[1464]: time="2025-09-13T00:26:43.888398145Z" level=info msg="CreateContainer within sandbox \"f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b339524b302fd7f42438a8642820878c72a50f454bd524ff0d24e58779b583cd\"" Sep 13 00:26:43.890985 containerd[1464]: time="2025-09-13T00:26:43.889537698Z" level=info msg="StartContainer for \"b339524b302fd7f42438a8642820878c72a50f454bd524ff0d24e58779b583cd\"" Sep 13 00:26:43.956991 systemd[1]: Started cri-containerd-b339524b302fd7f42438a8642820878c72a50f454bd524ff0d24e58779b583cd.scope - libcontainer container b339524b302fd7f42438a8642820878c72a50f454bd524ff0d24e58779b583cd. Sep 13 00:26:44.008529 containerd[1464]: time="2025-09-13T00:26:44.008474422Z" level=info msg="StartContainer for \"b339524b302fd7f42438a8642820878c72a50f454bd524ff0d24e58779b583cd\" returns successfully" Sep 13 00:26:44.185635 kubelet[2592]: I0913 00:26:44.185149 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:26:45.073975 ntpd[1433]: Listen normally on 8 vxlan.calico 192.168.81.192:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 8 vxlan.calico 192.168.81.192:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 9 cali678a2011879 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 10 cali5c5353a232d [fe80::ecee:eeff:feee:eeee%5]:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 11 cali03a0f67b447 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 12 vxlan.calico [fe80::64a6:4ff:fe23:76d%7]:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 13 cali175ba461d46 
[fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 14 cali28625e1eea8 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 15 cali2d9c9355024 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 16 cali11d0c11adee [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:26:45.077276 ntpd[1433]: 13 Sep 00:26:45 ntpd[1433]: Listen normally on 17 cali8de8767d4fc [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:26:45.074095 ntpd[1433]: Listen normally on 9 cali678a2011879 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 13 00:26:45.074166 ntpd[1433]: Listen normally on 10 cali5c5353a232d [fe80::ecee:eeff:feee:eeee%5]:123 Sep 13 00:26:45.074220 ntpd[1433]: Listen normally on 11 cali03a0f67b447 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 13 00:26:45.074277 ntpd[1433]: Listen normally on 12 vxlan.calico [fe80::64a6:4ff:fe23:76d%7]:123 Sep 13 00:26:45.074333 ntpd[1433]: Listen normally on 13 cali175ba461d46 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 13 00:26:45.074386 ntpd[1433]: Listen normally on 14 cali28625e1eea8 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 13 00:26:45.074437 ntpd[1433]: Listen normally on 15 cali2d9c9355024 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 13 00:26:45.074501 ntpd[1433]: Listen normally on 16 cali11d0c11adee [fe80::ecee:eeff:feee:eeee%13]:123 Sep 13 00:26:45.074554 ntpd[1433]: Listen normally on 17 cali8de8767d4fc [fe80::ecee:eeff:feee:eeee%14]:123 Sep 13 00:26:48.020305 containerd[1464]: time="2025-09-13T00:26:48.020091799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:48.022760 containerd[1464]: time="2025-09-13T00:26:48.021357113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:26:48.025869 
containerd[1464]: time="2025-09-13T00:26:48.025542482Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:48.032684 containerd[1464]: time="2025-09-13T00:26:48.032560515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:48.033828 containerd[1464]: time="2025-09-13T00:26:48.033585263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.189467169s" Sep 13 00:26:48.033828 containerd[1464]: time="2025-09-13T00:26:48.033638864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:26:48.039179 containerd[1464]: time="2025-09-13T00:26:48.036566110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:26:48.058806 containerd[1464]: time="2025-09-13T00:26:48.058422381Z" level=info msg="CreateContainer within sandbox \"bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:26:48.087111 containerd[1464]: time="2025-09-13T00:26:48.087062068Z" level=info msg="CreateContainer within sandbox \"bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750\"" Sep 13 00:26:48.089090 containerd[1464]: time="2025-09-13T00:26:48.088172673Z" level=info msg="StartContainer for \"8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750\"" Sep 13 00:26:48.162990 systemd[1]: Started cri-containerd-8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750.scope - libcontainer container 8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750. Sep 13 00:26:48.247472 containerd[1464]: time="2025-09-13T00:26:48.246728923Z" level=info msg="StartContainer for \"8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750\" returns successfully" Sep 13 00:26:48.272854 containerd[1464]: time="2025-09-13T00:26:48.272623084Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:48.281775 containerd[1464]: time="2025-09-13T00:26:48.278476232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:26:48.285033 containerd[1464]: time="2025-09-13T00:26:48.284976227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 248.260504ms" Sep 13 00:26:48.285209 containerd[1464]: time="2025-09-13T00:26:48.285187841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:26:48.288673 containerd[1464]: time="2025-09-13T00:26:48.288639681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:26:48.290073 containerd[1464]: 
time="2025-09-13T00:26:48.290036117Z" level=info msg="CreateContainer within sandbox \"adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:26:48.314120 containerd[1464]: time="2025-09-13T00:26:48.314076685Z" level=info msg="CreateContainer within sandbox \"adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"695f3dff10781cf5135a70e8aaa456363d7f388f87968089b85a3bd80a420d35\"" Sep 13 00:26:48.315114 containerd[1464]: time="2025-09-13T00:26:48.315082696Z" level=info msg="StartContainer for \"695f3dff10781cf5135a70e8aaa456363d7f388f87968089b85a3bd80a420d35\"" Sep 13 00:26:48.390958 systemd[1]: Started cri-containerd-695f3dff10781cf5135a70e8aaa456363d7f388f87968089b85a3bd80a420d35.scope - libcontainer container 695f3dff10781cf5135a70e8aaa456363d7f388f87968089b85a3bd80a420d35. Sep 13 00:26:48.660631 containerd[1464]: time="2025-09-13T00:26:48.660464339Z" level=info msg="StartContainer for \"695f3dff10781cf5135a70e8aaa456363d7f388f87968089b85a3bd80a420d35\" returns successfully" Sep 13 00:26:49.302964 systemd[1]: run-containerd-runc-k8s.io-8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750-runc.Z0uAjE.mount: Deactivated successfully. 
Sep 13 00:26:49.407886 kubelet[2592]: I0913 00:26:49.406156 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f6bc978d5-r28xt" podStartSLOduration=27.303174586 podStartE2EDuration="35.406131739s" podCreationTimestamp="2025-09-13 00:26:14 +0000 UTC" firstStartedPulling="2025-09-13 00:26:39.932360649 +0000 UTC m=+47.468637325" lastFinishedPulling="2025-09-13 00:26:48.035317816 +0000 UTC m=+55.571594478" observedRunningTime="2025-09-13 00:26:49.317088489 +0000 UTC m=+56.853365175" watchObservedRunningTime="2025-09-13 00:26:49.406131739 +0000 UTC m=+56.942408424" Sep 13 00:26:49.997386 kubelet[2592]: I0913 00:26:49.997287 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54854fc87c-vb86r" podStartSLOduration=34.620499608 podStartE2EDuration="41.997258169s" podCreationTimestamp="2025-09-13 00:26:08 +0000 UTC" firstStartedPulling="2025-09-13 00:26:40.909860781 +0000 UTC m=+48.446137456" lastFinishedPulling="2025-09-13 00:26:48.286619338 +0000 UTC m=+55.822896017" observedRunningTime="2025-09-13 00:26:49.409892499 +0000 UTC m=+56.946169185" watchObservedRunningTime="2025-09-13 00:26:49.997258169 +0000 UTC m=+57.533534854" Sep 13 00:26:51.475263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011690541.mount: Deactivated successfully. Sep 13 00:26:52.607424 containerd[1464]: time="2025-09-13T00:26:52.607360507Z" level=info msg="StopPodSandbox for \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\"" Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:52.849 [WARNING][5258] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c497ee4f-d0c8-467d-9216-5d2e88dee8c7", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1", Pod:"coredns-7c65d6cfc9-hj2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11d0c11adee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:52.850 [INFO][5258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:52.851 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" iface="eth0" netns="" Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:52.851 [INFO][5258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:52.851 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:52.990 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:52.992 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:52.992 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:53.026 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:53.027 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:53.034 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:53.049827 containerd[1464]: 2025-09-13 00:26:53.040 [INFO][5258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:53.053422 containerd[1464]: time="2025-09-13T00:26:53.052891522Z" level=info msg="TearDown network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\" successfully" Sep 13 00:26:53.053422 containerd[1464]: time="2025-09-13T00:26:53.052948341Z" level=info msg="StopPodSandbox for \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\" returns successfully" Sep 13 00:26:53.054272 containerd[1464]: time="2025-09-13T00:26:53.053662185Z" level=info msg="RemovePodSandbox for \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\"" Sep 13 00:26:53.054272 containerd[1464]: time="2025-09-13T00:26:53.053709598Z" level=info msg="Forcibly stopping sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\"" Sep 13 00:26:53.074796 containerd[1464]: time="2025-09-13T00:26:53.074323668Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:53.078850 containerd[1464]: time="2025-09-13T00:26:53.078780871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:26:53.081765 containerd[1464]: time="2025-09-13T00:26:53.080387005Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:53.089990 containerd[1464]: time="2025-09-13T00:26:53.089951664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:53.091978 containerd[1464]: time="2025-09-13T00:26:53.091935970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.802780081s" Sep 13 00:26:53.092122 containerd[1464]: time="2025-09-13T00:26:53.092100351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:26:53.111240 containerd[1464]: time="2025-09-13T00:26:53.111191616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:26:53.113708 containerd[1464]: time="2025-09-13T00:26:53.113563831Z" level=info msg="CreateContainer within sandbox \"3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:26:53.151783 
containerd[1464]: time="2025-09-13T00:26:53.151601449Z" level=info msg="CreateContainer within sandbox \"3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"df9c4d60e12d6f2c24273d677aa6212e094a9acfeb0006f154650fe1a96d649c\"" Sep 13 00:26:53.154406 containerd[1464]: time="2025-09-13T00:26:53.153426828Z" level=info msg="StartContainer for \"df9c4d60e12d6f2c24273d677aa6212e094a9acfeb0006f154650fe1a96d649c\"" Sep 13 00:26:53.153971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229652912.mount: Deactivated successfully. Sep 13 00:26:53.269955 systemd[1]: Started cri-containerd-df9c4d60e12d6f2c24273d677aa6212e094a9acfeb0006f154650fe1a96d649c.scope - libcontainer container df9c4d60e12d6f2c24273d677aa6212e094a9acfeb0006f154650fe1a96d649c. Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.274 [WARNING][5286] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c497ee4f-d0c8-467d-9216-5d2e88dee8c7", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"880b9dea9f09de635c613630e3aac9f44989437428355f609ef0cc2af50c95e1", Pod:"coredns-7c65d6cfc9-hj2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11d0c11adee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.274 [INFO][5286] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.274 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" iface="eth0" netns="" Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.274 [INFO][5286] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.274 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.341 [INFO][5311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.341 [INFO][5311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.341 [INFO][5311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.359 [WARNING][5311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.359 [INFO][5311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" HandleID="k8s-pod-network.320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--hj2qq-eth0" Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.361 [INFO][5311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:53.367225 containerd[1464]: 2025-09-13 00:26:53.363 [INFO][5286] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d" Sep 13 00:26:53.367225 containerd[1464]: time="2025-09-13T00:26:53.366958777Z" level=info msg="TearDown network for sandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\" successfully" Sep 13 00:26:53.374677 containerd[1464]: time="2025-09-13T00:26:53.374618586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:53.374823 containerd[1464]: time="2025-09-13T00:26:53.374719772Z" level=info msg="RemovePodSandbox \"320cc2e1902a94dcc1ba36da1408021d63c1fe72b9cee83e77f3d90b9a025c9d\" returns successfully" Sep 13 00:26:53.377777 containerd[1464]: time="2025-09-13T00:26:53.375450709Z" level=info msg="StopPodSandbox for \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\"" Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.490 [WARNING][5326] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d1fcfa74-5cf5-4886-8c7c-add2cf297c71", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937", Pod:"goldmane-7988f88666-h77dv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8de8767d4fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.491 [INFO][5326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.491 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" iface="eth0" netns="" Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.491 [INFO][5326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.491 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.542 [INFO][5339] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.542 [INFO][5339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.542 [INFO][5339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.552 [WARNING][5339] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.553 [INFO][5339] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.555 [INFO][5339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:53.560577 containerd[1464]: 2025-09-13 00:26:53.557 [INFO][5326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:53.561423 containerd[1464]: time="2025-09-13T00:26:53.560677932Z" level=info msg="TearDown network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\" successfully" Sep 13 00:26:53.561423 containerd[1464]: time="2025-09-13T00:26:53.560713110Z" level=info msg="StopPodSandbox for \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\" returns successfully" Sep 13 00:26:53.562779 containerd[1464]: time="2025-09-13T00:26:53.561815449Z" level=info msg="RemovePodSandbox for \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\"" Sep 13 00:26:53.562779 containerd[1464]: time="2025-09-13T00:26:53.561877409Z" level=info msg="Forcibly stopping sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\"" Sep 13 00:26:53.636117 containerd[1464]: time="2025-09-13T00:26:53.633961898Z" level=info msg="StartContainer for 
\"df9c4d60e12d6f2c24273d677aa6212e094a9acfeb0006f154650fe1a96d649c\" returns successfully" Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.685 [WARNING][5354] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d1fcfa74-5cf5-4886-8c7c-add2cf297c71", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"3d89fee8cb21cb9a6e00fc378161e90952a9cffd601a6860c4f2fb009e298937", Pod:"goldmane-7988f88666-h77dv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8de8767d4fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.685 [INFO][5354] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.685 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" iface="eth0" netns="" Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.685 [INFO][5354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.685 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.739 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.739 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.739 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.751 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.751 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" HandleID="k8s-pod-network.47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-goldmane--7988f88666--h77dv-eth0" Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.755 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:53.762123 containerd[1464]: 2025-09-13 00:26:53.759 [INFO][5354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90" Sep 13 00:26:53.765271 containerd[1464]: time="2025-09-13T00:26:53.762338801Z" level=info msg="TearDown network for sandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\" successfully" Sep 13 00:26:53.781388 containerd[1464]: time="2025-09-13T00:26:53.779376695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:53.781388 containerd[1464]: time="2025-09-13T00:26:53.779638127Z" level=info msg="RemovePodSandbox \"47780787ed9d5ec9accbaca7899d97bde893bd109e53fde98a0105c630753f90\" returns successfully" Sep 13 00:26:53.781388 containerd[1464]: time="2025-09-13T00:26:53.780578389Z" level=info msg="StopPodSandbox for \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\"" Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:53.893 [WARNING][5384] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0", GenerateName:"calico-kube-controllers-7f6bc978d5-", Namespace:"calico-system", SelfLink:"", UID:"5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6bc978d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e", Pod:"calico-kube-controllers-7f6bc978d5-r28xt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.81.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28625e1eea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:53.894 [INFO][5384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:53.894 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" iface="eth0" netns="" Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:53.895 [INFO][5384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:53.895 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:54.016 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:54.016 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:54.016 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:54.032 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:54.032 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:54.039 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:54.049011 containerd[1464]: 2025-09-13 00:26:54.042 [INFO][5384] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:54.049011 containerd[1464]: time="2025-09-13T00:26:54.048607219Z" level=info msg="TearDown network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\" successfully" Sep 13 00:26:54.049011 containerd[1464]: time="2025-09-13T00:26:54.048648107Z" level=info msg="StopPodSandbox for \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\" returns successfully" Sep 13 00:26:54.054384 containerd[1464]: time="2025-09-13T00:26:54.049826580Z" level=info msg="RemovePodSandbox for \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\"" Sep 13 00:26:54.054384 containerd[1464]: time="2025-09-13T00:26:54.049868114Z" level=info msg="Forcibly stopping sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\"" Sep 13 00:26:54.142007 systemd[1]: run-containerd-runc-k8s.io-8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750-runc.DVytbi.mount: Deactivated successfully. Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.153 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0", GenerateName:"calico-kube-controllers-7f6bc978d5-", Namespace:"calico-system", SelfLink:"", UID:"5ebdab1b-1f4e-4fa2-ba06-30bf94a930b9", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f6bc978d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"bdf3c4f2656c1ebfc3d11ced5bebe2cc6b1a29ef8218dc0e904e83d230dfa96e", Pod:"calico-kube-controllers-7f6bc978d5-r28xt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali28625e1eea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.155 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:54.298237 
containerd[1464]: 2025-09-13 00:26:54.155 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" iface="eth0" netns="" Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.155 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.155 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.228 [INFO][5432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.231 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.231 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.255 [WARNING][5432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.256 [INFO][5432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" HandleID="k8s-pod-network.22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--kube--controllers--7f6bc978d5--r28xt-eth0" Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.269 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:54.298237 containerd[1464]: 2025-09-13 00:26:54.276 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294" Sep 13 00:26:54.299162 containerd[1464]: time="2025-09-13T00:26:54.298292845Z" level=info msg="TearDown network for sandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\" successfully" Sep 13 00:26:54.326777 containerd[1464]: time="2025-09-13T00:26:54.325236921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:54.326777 containerd[1464]: time="2025-09-13T00:26:54.325343731Z" level=info msg="RemovePodSandbox \"22167157664e2079160ecc008e015506c6084342f226d65c316c47eb84f97294\" returns successfully" Sep 13 00:26:54.326777 containerd[1464]: time="2025-09-13T00:26:54.326406506Z" level=info msg="StopPodSandbox for \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\"" Sep 13 00:26:54.340433 kubelet[2592]: I0913 00:26:54.340153 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-h77dv" podStartSLOduration=29.266332545 podStartE2EDuration="41.340127232s" podCreationTimestamp="2025-09-13 00:26:13 +0000 UTC" firstStartedPulling="2025-09-13 00:26:41.035632066 +0000 UTC m=+48.571908741" lastFinishedPulling="2025-09-13 00:26:53.109426753 +0000 UTC m=+60.645703428" observedRunningTime="2025-09-13 00:26:54.331290253 +0000 UTC m=+61.867566939" watchObservedRunningTime="2025-09-13 00:26:54.340127232 +0000 UTC m=+61.876403918" Sep 13 00:26:54.382703 systemd[1]: run-containerd-runc-k8s.io-df9c4d60e12d6f2c24273d677aa6212e094a9acfeb0006f154650fe1a96d649c-runc.ib3Lev.mount: Deactivated successfully. Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.506 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0", GenerateName:"calico-apiserver-54854fc87c-", Namespace:"calico-apiserver", SelfLink:"", UID:"305f5d4c-806f-42ee-82af-c8e6eacc2f74", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54854fc87c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e", Pod:"calico-apiserver-54854fc87c-vb86r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d9c9355024", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.508 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 
00:26:54.508 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" iface="eth0" netns="" Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.508 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.508 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.648 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.650 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.650 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.665 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.665 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.671 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:54.691500 containerd[1464]: 2025-09-13 00:26:54.682 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:54.691500 containerd[1464]: time="2025-09-13T00:26:54.691329999Z" level=info msg="TearDown network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\" successfully" Sep 13 00:26:54.691500 containerd[1464]: time="2025-09-13T00:26:54.691363023Z" level=info msg="StopPodSandbox for \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\" returns successfully" Sep 13 00:26:54.694490 containerd[1464]: time="2025-09-13T00:26:54.693988672Z" level=info msg="RemovePodSandbox for \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\"" Sep 13 00:26:54.694490 containerd[1464]: time="2025-09-13T00:26:54.694031618Z" level=info msg="Forcibly stopping sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\"" Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.824 [WARNING][5502] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0", GenerateName:"calico-apiserver-54854fc87c-", Namespace:"calico-apiserver", SelfLink:"", UID:"305f5d4c-806f-42ee-82af-c8e6eacc2f74", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54854fc87c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"adb613b1f22e5588efc640e2e67fd2513e4eeb69eb0591418f8e9747eeb7071e", Pod:"calico-apiserver-54854fc87c-vb86r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d9c9355024", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.825 [INFO][5502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 
00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.825 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" iface="eth0" netns="" Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.825 [INFO][5502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.825 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.940 [INFO][5509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.946 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.946 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.967 [WARNING][5509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.967 [INFO][5509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" HandleID="k8s-pod-network.e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--vb86r-eth0" Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.970 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:54.984219 containerd[1464]: 2025-09-13 00:26:54.974 [INFO][5502] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b" Sep 13 00:26:54.984219 containerd[1464]: time="2025-09-13T00:26:54.982109619Z" level=info msg="TearDown network for sandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\" successfully" Sep 13 00:26:54.995907 containerd[1464]: time="2025-09-13T00:26:54.995841686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:54.996152 containerd[1464]: time="2025-09-13T00:26:54.996115352Z" level=info msg="RemovePodSandbox \"e2bb58aa24fdaddfcdb1c9c3ec62a9cbc8f2360470a368f30b19c37a7f5dbd1b\" returns successfully" Sep 13 00:26:54.998230 containerd[1464]: time="2025-09-13T00:26:54.998194357Z" level=info msg="StopPodSandbox for \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\"" Sep 13 00:26:55.345934 containerd[1464]: time="2025-09-13T00:26:55.345803001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:55.348850 containerd[1464]: time="2025-09-13T00:26:55.348785920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:26:55.351005 containerd[1464]: time="2025-09-13T00:26:55.350961427Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:55.371688 containerd[1464]: time="2025-09-13T00:26:55.371508250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:26:55.385172 containerd[1464]: time="2025-09-13T00:26:55.384517476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.271634242s" Sep 13 00:26:55.385172 containerd[1464]: time="2025-09-13T00:26:55.384574527Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:26:55.395879 containerd[1464]: time="2025-09-13T00:26:55.395842762Z" level=info msg="CreateContainer within sandbox \"f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.157 [WARNING][5524] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0", GenerateName:"calico-apiserver-54854fc87c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0ab9580-fead-4e9f-abaa-e8f12c4c312a", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54854fc87c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace", Pod:"calico-apiserver-54854fc87c-64p78", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c5353a232d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.162 [INFO][5524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.162 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" iface="eth0" netns="" Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.163 [INFO][5524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.163 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.311 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.311 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.311 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.364 [WARNING][5531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.365 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.375 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:55.396584 containerd[1464]: 2025-09-13 00:26:55.384 [INFO][5524] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:55.398352 containerd[1464]: time="2025-09-13T00:26:55.396811217Z" level=info msg="TearDown network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\" successfully" Sep 13 00:26:55.398352 containerd[1464]: time="2025-09-13T00:26:55.396836781Z" level=info msg="StopPodSandbox for \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\" returns successfully" Sep 13 00:26:55.399135 containerd[1464]: time="2025-09-13T00:26:55.398841244Z" level=info msg="RemovePodSandbox for \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\"" Sep 13 00:26:55.399135 containerd[1464]: time="2025-09-13T00:26:55.398880245Z" level=info msg="Forcibly stopping sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\"" Sep 13 00:26:55.424852 containerd[1464]: time="2025-09-13T00:26:55.424794477Z" level=info msg="CreateContainer within sandbox \"f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e0bea0b67b39448f0775e6ad7f60e4cddae66ff0cbdb7de7eadfc994972fec5e\"" Sep 13 00:26:55.428025 containerd[1464]: time="2025-09-13T00:26:55.427944697Z" level=info msg="StartContainer for \"e0bea0b67b39448f0775e6ad7f60e4cddae66ff0cbdb7de7eadfc994972fec5e\"" Sep 13 00:26:55.546041 systemd[1]: Started cri-containerd-e0bea0b67b39448f0775e6ad7f60e4cddae66ff0cbdb7de7eadfc994972fec5e.scope - libcontainer container e0bea0b67b39448f0775e6ad7f60e4cddae66ff0cbdb7de7eadfc994972fec5e. Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.620 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0", GenerateName:"calico-apiserver-54854fc87c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0ab9580-fead-4e9f-abaa-e8f12c4c312a", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54854fc87c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"cd6cf107dc812f52f71fbde5f1d64a62788105abd792f679fcb9c501a0995ace", Pod:"calico-apiserver-54854fc87c-64p78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c5353a232d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.620 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 
00:26:55.620 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" iface="eth0" netns="" Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.620 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.620 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.678 [INFO][5593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.678 [INFO][5593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.678 [INFO][5593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.700 [WARNING][5593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.700 [INFO][5593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" HandleID="k8s-pod-network.3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-calico--apiserver--54854fc87c--64p78-eth0" Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.703 [INFO][5593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:55.708758 containerd[1464]: 2025-09-13 00:26:55.706 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391" Sep 13 00:26:55.710324 containerd[1464]: time="2025-09-13T00:26:55.708882983Z" level=info msg="TearDown network for sandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\" successfully" Sep 13 00:26:55.716250 containerd[1464]: time="2025-09-13T00:26:55.715865037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:55.716250 containerd[1464]: time="2025-09-13T00:26:55.715992807Z" level=info msg="RemovePodSandbox \"3638ae431aea714b673f80f0e5e5fcbb3e228759a232468650321204071c0391\" returns successfully" Sep 13 00:26:55.717954 containerd[1464]: time="2025-09-13T00:26:55.717134638Z" level=info msg="StopPodSandbox for \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\"" Sep 13 00:26:55.853770 containerd[1464]: time="2025-09-13T00:26:55.852500449Z" level=info msg="StartContainer for \"e0bea0b67b39448f0775e6ad7f60e4cddae66ff0cbdb7de7eadfc994972fec5e\" returns successfully" Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.796 [WARNING][5615] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1340fb7a-e0db-4806-8c10-89545a7ba6fe", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d", 
Pod:"coredns-7c65d6cfc9-94fww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali175ba461d46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.797 [INFO][5615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.797 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" iface="eth0" netns="" Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.797 [INFO][5615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.797 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.857 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.857 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.857 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.875 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.875 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.879 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:55.883798 containerd[1464]: 2025-09-13 00:26:55.881 [INFO][5615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:55.884658 containerd[1464]: time="2025-09-13T00:26:55.883858065Z" level=info msg="TearDown network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\" successfully" Sep 13 00:26:55.884658 containerd[1464]: time="2025-09-13T00:26:55.883892651Z" level=info msg="StopPodSandbox for \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\" returns successfully" Sep 13 00:26:55.886781 containerd[1464]: time="2025-09-13T00:26:55.885003651Z" level=info msg="RemovePodSandbox for \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\"" Sep 13 00:26:55.886781 containerd[1464]: time="2025-09-13T00:26:55.885048893Z" level=info msg="Forcibly stopping sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\"" Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:55.963 [WARNING][5646] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1340fb7a-e0db-4806-8c10-89545a7ba6fe", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"14217628b635ba6f88279b3f4b9749f57f3671d655e66e3aa4fb1262cbee253d", Pod:"coredns-7c65d6cfc9-94fww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali175ba461d46", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:55.963 [INFO][5646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:55.963 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" iface="eth0" netns="" Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:55.963 [INFO][5646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:55.963 [INFO][5646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:56.095 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:56.096 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:56.096 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:56.111 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:56.111 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" HandleID="k8s-pod-network.2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-coredns--7c65d6cfc9--94fww-eth0" Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:56.115 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:56.123529 containerd[1464]: 2025-09-13 00:26:56.119 [INFO][5646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a" Sep 13 00:26:56.124475 containerd[1464]: time="2025-09-13T00:26:56.123596464Z" level=info msg="TearDown network for sandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\" successfully" Sep 13 00:26:56.132632 containerd[1464]: time="2025-09-13T00:26:56.132560946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:56.132841 containerd[1464]: time="2025-09-13T00:26:56.132668156Z" level=info msg="RemovePodSandbox \"2337ebe507bc6e3fb8f7d0c3b2b288cd9a9761cda9ec6824f70c329c483e0c1a\" returns successfully" Sep 13 00:26:56.133804 containerd[1464]: time="2025-09-13T00:26:56.133551458Z" level=info msg="StopPodSandbox for \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\"" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.197 [WARNING][5691] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.198 [INFO][5691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.198 [INFO][5691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" iface="eth0" netns="" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.198 [INFO][5691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.198 [INFO][5691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.258 [INFO][5698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.258 [INFO][5698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.258 [INFO][5698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.272 [WARNING][5698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.272 [INFO][5698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.274 [INFO][5698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:56.278846 containerd[1464]: 2025-09-13 00:26:56.277 [INFO][5691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:56.279933 containerd[1464]: time="2025-09-13T00:26:56.279695286Z" level=info msg="TearDown network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\" successfully" Sep 13 00:26:56.279933 containerd[1464]: time="2025-09-13T00:26:56.279792322Z" level=info msg="StopPodSandbox for \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\" returns successfully" Sep 13 00:26:56.280438 containerd[1464]: time="2025-09-13T00:26:56.280401678Z" level=info msg="RemovePodSandbox for \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\"" Sep 13 00:26:56.281063 containerd[1464]: time="2025-09-13T00:26:56.280788463Z" level=info msg="Forcibly stopping sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\"" Sep 13 00:26:56.324478 kubelet[2592]: I0913 00:26:56.323712 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/csi-node-driver-mnfjg" podStartSLOduration=24.757010435 podStartE2EDuration="42.323667916s" podCreationTimestamp="2025-09-13 00:26:14 +0000 UTC" firstStartedPulling="2025-09-13 00:26:37.826865279 +0000 UTC m=+45.363141955" lastFinishedPulling="2025-09-13 00:26:55.39352276 +0000 UTC m=+62.929799436" observedRunningTime="2025-09-13 00:26:56.318933364 +0000 UTC m=+63.855210050" watchObservedRunningTime="2025-09-13 00:26:56.323667916 +0000 UTC m=+63.859944602" Sep 13 00:26:56.359566 systemd[1]: run-containerd-runc-k8s.io-df9c4d60e12d6f2c24273d677aa6212e094a9acfeb0006f154650fe1a96d649c-runc.FPNKwP.mount: Deactivated successfully. Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.377 [WARNING][5712] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" WorkloadEndpoint="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.378 [INFO][5712] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.378 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" iface="eth0" netns="" Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.378 [INFO][5712] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.378 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.417 [INFO][5720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.417 [INFO][5720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.417 [INFO][5720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.427 [WARNING][5720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.427 [INFO][5720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" HandleID="k8s-pod-network.8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-whisker--c6f9fbd87--ffq8d-eth0" Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.429 [INFO][5720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:56.434654 containerd[1464]: 2025-09-13 00:26:56.431 [INFO][5712] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee" Sep 13 00:26:56.434654 containerd[1464]: time="2025-09-13T00:26:56.434360713Z" level=info msg="TearDown network for sandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\" successfully" Sep 13 00:26:56.440983 containerd[1464]: time="2025-09-13T00:26:56.440404573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:56.440983 containerd[1464]: time="2025-09-13T00:26:56.440499576Z" level=info msg="RemovePodSandbox \"8f5087c9f6532f3f17cb7c6a9b2edd7b8fa376f5266875c7b529734b49fc70ee\" returns successfully" Sep 13 00:26:56.441160 containerd[1464]: time="2025-09-13T00:26:56.441112134Z" level=info msg="StopPodSandbox for \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\"" Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.521 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f1027a4-b75f-4b5f-b382-64bdc48ceda4", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9", Pod:"csi-node-driver-mnfjg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.81.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03a0f67b447", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.523 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.523 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" iface="eth0" netns="" Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.523 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.524 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.590 [INFO][5742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.591 [INFO][5742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.591 [INFO][5742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.608 [WARNING][5742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.608 [INFO][5742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.611 [INFO][5742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:56.618015 containerd[1464]: 2025-09-13 00:26:56.614 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:56.618015 containerd[1464]: time="2025-09-13T00:26:56.617845004Z" level=info msg="TearDown network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\" successfully" Sep 13 00:26:56.618015 containerd[1464]: time="2025-09-13T00:26:56.617882405Z" level=info msg="StopPodSandbox for \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\" returns successfully" Sep 13 00:26:56.621477 containerd[1464]: time="2025-09-13T00:26:56.620348597Z" level=info msg="RemovePodSandbox for \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\"" Sep 13 00:26:56.621477 containerd[1464]: time="2025-09-13T00:26:56.620391194Z" level=info msg="Forcibly stopping sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\"" Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.717 [WARNING][5756] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f1027a4-b75f-4b5f-b382-64bdc48ceda4", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 26, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-nightly-20250912-2100-f36e1cb93cf8302b18cf", ContainerID:"f4753523e5aa355a442bcf9302361a6fff127b6c718a529dc7168f0133db34a9", Pod:"csi-node-driver-mnfjg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03a0f67b447", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.718 [INFO][5756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.718 
[INFO][5756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" iface="eth0" netns="" Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.718 [INFO][5756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.718 [INFO][5756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.762 [INFO][5763] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.763 [INFO][5763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.763 [INFO][5763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.773 [WARNING][5763] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.773 [INFO][5763] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" HandleID="k8s-pod-network.3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Workload="ci--4081--3--5--nightly--20250912--2100--f36e1cb93cf8302b18cf-k8s-csi--node--driver--mnfjg-eth0" Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.775 [INFO][5763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:26:56.781302 containerd[1464]: 2025-09-13 00:26:56.776 [INFO][5756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99" Sep 13 00:26:56.781302 containerd[1464]: time="2025-09-13T00:26:56.779169836Z" level=info msg="TearDown network for sandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\" successfully" Sep 13 00:26:56.787848 containerd[1464]: time="2025-09-13T00:26:56.787724481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:26:56.788777 containerd[1464]: time="2025-09-13T00:26:56.788052268Z" level=info msg="RemovePodSandbox \"3a9c77a2cea6a61f39ab335162920f334353312f6393c375c341e890ea09bc99\" returns successfully" Sep 13 00:26:56.793013 kubelet[2592]: I0913 00:26:56.792988 2592 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:26:56.793230 kubelet[2592]: I0913 00:26:56.793209 2592 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:27:02.425113 systemd[1]: run-containerd-runc-k8s.io-8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750-runc.RNJ6zG.mount: Deactivated successfully. Sep 13 00:27:05.543172 systemd[1]: Started sshd@9-10.128.0.49:22-147.75.109.163:51284.service - OpenSSH per-connection server daemon (147.75.109.163:51284). Sep 13 00:27:05.930850 sshd[5804]: Accepted publickey for core from 147.75.109.163 port 51284 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:05.935799 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:05.948370 systemd-logind[1446]: New session 10 of user core. Sep 13 00:27:05.954963 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:27:06.362345 sshd[5804]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:06.374040 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:27:06.375105 systemd[1]: sshd@9-10.128.0.49:22-147.75.109.163:51284.service: Deactivated successfully. Sep 13 00:27:06.380611 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:27:06.385107 systemd-logind[1446]: Removed session 10. 
Sep 13 00:27:11.438210 systemd[1]: Started sshd@10-10.128.0.49:22-147.75.109.163:54100.service - OpenSSH per-connection server daemon (147.75.109.163:54100). Sep 13 00:27:11.843894 sshd[5821]: Accepted publickey for core from 147.75.109.163 port 54100 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:11.844704 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:11.858674 systemd-logind[1446]: New session 11 of user core. Sep 13 00:27:11.863291 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:27:12.271122 sshd[5821]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:12.280078 systemd[1]: sshd@10-10.128.0.49:22-147.75.109.163:54100.service: Deactivated successfully. Sep 13 00:27:12.286411 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:27:12.288310 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:27:12.290234 systemd-logind[1446]: Removed session 11. Sep 13 00:27:12.641285 systemd[1]: run-containerd-runc-k8s.io-7b73bd4c846bc2d9f64a8d83eecf2443738ef5fb8029b6198dd6006050dad38f-runc.Jp21Ks.mount: Deactivated successfully. Sep 13 00:27:15.424993 kubelet[2592]: I0913 00:27:15.424367 2592 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:27:17.348869 systemd[1]: Started sshd@11-10.128.0.49:22-147.75.109.163:54102.service - OpenSSH per-connection server daemon (147.75.109.163:54102). Sep 13 00:27:17.752385 sshd[5858]: Accepted publickey for core from 147.75.109.163 port 54102 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:17.753368 sshd[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:17.764964 systemd-logind[1446]: New session 12 of user core. Sep 13 00:27:17.773995 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 13 00:27:18.182081 sshd[5858]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:18.194281 systemd[1]: sshd@11-10.128.0.49:22-147.75.109.163:54102.service: Deactivated successfully. Sep 13 00:27:18.199399 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:27:18.202939 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:27:18.206509 systemd-logind[1446]: Removed session 12. Sep 13 00:27:18.257108 systemd[1]: Started sshd@12-10.128.0.49:22-147.75.109.163:54114.service - OpenSSH per-connection server daemon (147.75.109.163:54114). Sep 13 00:27:18.645312 sshd[5872]: Accepted publickey for core from 147.75.109.163 port 54114 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:18.646224 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:18.655231 systemd-logind[1446]: New session 13 of user core. Sep 13 00:27:18.660959 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:27:19.129026 sshd[5872]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:19.136494 systemd[1]: sshd@12-10.128.0.49:22-147.75.109.163:54114.service: Deactivated successfully. Sep 13 00:27:19.140266 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:27:19.142200 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:27:19.146690 systemd-logind[1446]: Removed session 13. Sep 13 00:27:19.205221 systemd[1]: Started sshd@13-10.128.0.49:22-147.75.109.163:54122.service - OpenSSH per-connection server daemon (147.75.109.163:54122). Sep 13 00:27:19.602234 sshd[5883]: Accepted publickey for core from 147.75.109.163 port 54122 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:19.604476 sshd[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:19.610097 systemd-logind[1446]: New session 14 of user core. 
Sep 13 00:27:19.615102 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:27:19.967087 sshd[5883]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:19.972249 systemd[1]: sshd@13-10.128.0.49:22-147.75.109.163:54122.service: Deactivated successfully. Sep 13 00:27:19.978515 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:27:19.979947 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:27:19.982600 systemd-logind[1446]: Removed session 14. Sep 13 00:27:25.048201 systemd[1]: Started sshd@14-10.128.0.49:22-147.75.109.163:50484.service - OpenSSH per-connection server daemon (147.75.109.163:50484). Sep 13 00:27:25.462341 sshd[5927]: Accepted publickey for core from 147.75.109.163 port 50484 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:25.465433 sshd[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:25.475587 systemd-logind[1446]: New session 15 of user core. Sep 13 00:27:25.480984 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:27:25.903330 sshd[5927]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:25.911310 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:27:25.912321 systemd[1]: sshd@14-10.128.0.49:22-147.75.109.163:50484.service: Deactivated successfully. Sep 13 00:27:25.918371 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:27:25.925126 systemd-logind[1446]: Removed session 15. Sep 13 00:27:30.977889 systemd[1]: Started sshd@15-10.128.0.49:22-147.75.109.163:48874.service - OpenSSH per-connection server daemon (147.75.109.163:48874). 
Sep 13 00:27:31.374874 sshd[5964]: Accepted publickey for core from 147.75.109.163 port 48874 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:31.377361 sshd[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:31.390866 systemd-logind[1446]: New session 16 of user core. Sep 13 00:27:31.394958 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:27:31.809092 sshd[5964]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:31.817571 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:27:31.818358 systemd[1]: sshd@15-10.128.0.49:22-147.75.109.163:48874.service: Deactivated successfully. Sep 13 00:27:31.824967 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:27:31.830049 systemd-logind[1446]: Removed session 16. Sep 13 00:27:36.878149 systemd[1]: Started sshd@16-10.128.0.49:22-147.75.109.163:48878.service - OpenSSH per-connection server daemon (147.75.109.163:48878). Sep 13 00:27:37.250792 sshd[5976]: Accepted publickey for core from 147.75.109.163 port 48878 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:37.252535 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:37.259995 systemd-logind[1446]: New session 17 of user core. Sep 13 00:27:37.266013 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:27:37.603941 sshd[5976]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:37.609274 systemd[1]: sshd@16-10.128.0.49:22-147.75.109.163:48878.service: Deactivated successfully. Sep 13 00:27:37.613082 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:27:37.614261 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:27:37.616063 systemd-logind[1446]: Removed session 17. 
Sep 13 00:27:42.676143 systemd[1]: Started sshd@17-10.128.0.49:22-147.75.109.163:39558.service - OpenSSH per-connection server daemon (147.75.109.163:39558). Sep 13 00:27:43.073700 sshd[6009]: Accepted publickey for core from 147.75.109.163 port 39558 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:43.074593 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:43.089565 systemd-logind[1446]: New session 18 of user core. Sep 13 00:27:43.093152 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:27:43.519707 sshd[6009]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:43.528477 systemd[1]: sshd@17-10.128.0.49:22-147.75.109.163:39558.service: Deactivated successfully. Sep 13 00:27:43.533182 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:27:43.536833 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:27:43.542429 systemd-logind[1446]: Removed session 18. Sep 13 00:27:43.597921 systemd[1]: Started sshd@18-10.128.0.49:22-147.75.109.163:39568.service - OpenSSH per-connection server daemon (147.75.109.163:39568). Sep 13 00:27:44.010476 sshd[6023]: Accepted publickey for core from 147.75.109.163 port 39568 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc Sep 13 00:27:44.011661 sshd[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:27:44.022232 systemd-logind[1446]: New session 19 of user core. Sep 13 00:27:44.031952 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:27:44.572172 sshd[6023]: pam_unix(sshd:session): session closed for user core Sep 13 00:27:44.581153 systemd[1]: sshd@18-10.128.0.49:22-147.75.109.163:39568.service: Deactivated successfully. Sep 13 00:27:44.586314 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:27:44.588059 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. 
Sep 13 00:27:44.591041 systemd-logind[1446]: Removed session 19.
Sep 13 00:27:44.648269 systemd[1]: Started sshd@19-10.128.0.49:22-147.75.109.163:39570.service - OpenSSH per-connection server daemon (147.75.109.163:39570).
Sep 13 00:27:45.051380 sshd[6034]: Accepted publickey for core from 147.75.109.163 port 39570 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:27:45.055454 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:27:45.064879 systemd-logind[1446]: New session 20 of user core.
Sep 13 00:27:45.070949 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:27:48.694086 sshd[6034]: pam_unix(sshd:session): session closed for user core
Sep 13 00:27:48.702438 systemd[1]: sshd@19-10.128.0.49:22-147.75.109.163:39570.service: Deactivated successfully.
Sep 13 00:27:48.704815 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:27:48.709585 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:27:48.716247 systemd-logind[1446]: Removed session 20.
Sep 13 00:27:48.771160 systemd[1]: Started sshd@20-10.128.0.49:22-147.75.109.163:39580.service - OpenSSH per-connection server daemon (147.75.109.163:39580).
Sep 13 00:27:49.171248 sshd[6049]: Accepted publickey for core from 147.75.109.163 port 39580 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:27:49.173091 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:27:49.184812 systemd-logind[1446]: New session 21 of user core.
Sep 13 00:27:49.187970 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:27:49.763092 sshd[6049]: pam_unix(sshd:session): session closed for user core
Sep 13 00:27:49.768610 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:27:49.771503 systemd[1]: sshd@20-10.128.0.49:22-147.75.109.163:39580.service: Deactivated successfully.
Sep 13 00:27:49.776082 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:27:49.784357 systemd-logind[1446]: Removed session 21.
Sep 13 00:27:49.833201 systemd[1]: Started sshd@21-10.128.0.49:22-147.75.109.163:39596.service - OpenSSH per-connection server daemon (147.75.109.163:39596).
Sep 13 00:27:50.212866 sshd[6063]: Accepted publickey for core from 147.75.109.163 port 39596 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:27:50.214685 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:27:50.222297 systemd-logind[1446]: New session 22 of user core.
Sep 13 00:27:50.231004 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:27:50.658097 sshd[6063]: pam_unix(sshd:session): session closed for user core
Sep 13 00:27:50.672080 systemd[1]: sshd@21-10.128.0.49:22-147.75.109.163:39596.service: Deactivated successfully.
Sep 13 00:27:50.678842 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:27:50.685361 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:27:50.688102 systemd-logind[1446]: Removed session 22.
Sep 13 00:27:55.365384 systemd[1]: run-containerd-runc-k8s.io-df9c4d60e12d6f2c24273d677aa6212e094a9acfeb0006f154650fe1a96d649c-runc.21APuO.mount: Deactivated successfully.
Sep 13 00:27:55.739864 systemd[1]: Started sshd@22-10.128.0.49:22-147.75.109.163:54424.service - OpenSSH per-connection server daemon (147.75.109.163:54424).
Sep 13 00:27:56.137832 sshd[6138]: Accepted publickey for core from 147.75.109.163 port 54424 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:27:56.138883 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:27:56.148691 systemd-logind[1446]: New session 23 of user core.
Sep 13 00:27:56.157872 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:27:56.538115 sshd[6138]: pam_unix(sshd:session): session closed for user core
Sep 13 00:27:56.548202 systemd[1]: sshd@22-10.128.0.49:22-147.75.109.163:54424.service: Deactivated successfully.
Sep 13 00:27:56.551995 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:27:56.553457 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:27:56.556165 systemd-logind[1446]: Removed session 23.
Sep 13 00:28:01.611216 systemd[1]: Started sshd@23-10.128.0.49:22-147.75.109.163:48068.service - OpenSSH per-connection server daemon (147.75.109.163:48068).
Sep 13 00:28:02.001775 sshd[6164]: Accepted publickey for core from 147.75.109.163 port 48068 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:28:02.003278 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:28:02.012808 systemd-logind[1446]: New session 24 of user core.
Sep 13 00:28:02.022249 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:28:02.410080 sshd[6164]: pam_unix(sshd:session): session closed for user core
Sep 13 00:28:02.425948 systemd[1]: run-containerd-runc-k8s.io-8b61b79f251b4fd49f7a4ea72c3491c06475112cabb3c75a815ed22e4b98f750-runc.NxO06T.mount: Deactivated successfully.
Sep 13 00:28:02.428895 systemd[1]: sshd@23-10.128.0.49:22-147.75.109.163:48068.service: Deactivated successfully.
Sep 13 00:28:02.432717 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:28:02.435645 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:28:02.440361 systemd-logind[1446]: Removed session 24.
Sep 13 00:28:07.477252 systemd[1]: Started sshd@24-10.128.0.49:22-147.75.109.163:48074.service - OpenSSH per-connection server daemon (147.75.109.163:48074).
Sep 13 00:28:07.871947 sshd[6198]: Accepted publickey for core from 147.75.109.163 port 48074 ssh2: RSA SHA256:cpYSQwHRh7/Y0BPTCsQHf/D8GSQwXt/qeiwcToMKgNc
Sep 13 00:28:07.874845 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:28:07.883361 systemd-logind[1446]: New session 25 of user core.
Sep 13 00:28:07.890997 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:28:08.235817 sshd[6198]: pam_unix(sshd:session): session closed for user core
Sep 13 00:28:08.242287 systemd[1]: sshd@24-10.128.0.49:22-147.75.109.163:48074.service: Deactivated successfully.
Sep 13 00:28:08.245500 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:28:08.247253 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:28:08.248924 systemd-logind[1446]: Removed session 25.