Jul 15 23:57:02.180981 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025
Jul 15 23:57:02.181036 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:57:02.181055 kernel: BIOS-provided physical RAM map:
Jul 15 23:57:02.181068 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jul 15 23:57:02.181081 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jul 15 23:57:02.181095 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jul 15 23:57:02.181115 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jul 15 23:57:02.181130 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jul 15 23:57:02.181144 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable
Jul 15 23:57:02.181158 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data
Jul 15 23:57:02.181173 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable
Jul 15 23:57:02.181187 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Jul 15 23:57:02.181202 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jul 15 23:57:02.181217 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jul 15 23:57:02.181240 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jul 15 23:57:02.181256 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jul 15 23:57:02.181272 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jul 15 23:57:02.181288 kernel: NX (Execute Disable) protection: active
Jul 15 23:57:02.181304 kernel: APIC: Static calls initialized
Jul 15 23:57:02.181320 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:57:02.181337 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018
Jul 15 23:57:02.181354 kernel: random: crng init done
Jul 15 23:57:02.181374 kernel: secureboot: Secure boot disabled
Jul 15 23:57:02.181391 kernel: SMBIOS 2.4 present.
Jul 15 23:57:02.181408 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Jul 15 23:57:02.181424 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:57:02.181440 kernel: Hypervisor detected: KVM
Jul 15 23:57:02.181456 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 23:57:02.181472 kernel: kvm-clock: using sched offset of 14969683451 cycles
Jul 15 23:57:02.181490 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 23:57:02.181506 kernel: tsc: Detected 2299.998 MHz processor
Jul 15 23:57:02.181523 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 23:57:02.181545 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 23:57:02.181561 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jul 15 23:57:02.181577 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jul 15 23:57:02.181593 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 23:57:02.181608 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jul 15 23:57:02.181624 kernel: Using GB pages for direct mapping
Jul 15 23:57:02.181639 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:57:02.181654 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jul 15 23:57:02.181680 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jul 15 23:57:02.181697 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jul 15 23:57:02.181713 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jul 15 23:57:02.181730 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jul 15 23:57:02.181746 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212)
Jul 15 23:57:02.181763 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jul 15 23:57:02.181784 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jul 15 23:57:02.181802 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jul 15 23:57:02.181820 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jul 15 23:57:02.181838 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jul 15 23:57:02.181856 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jul 15 23:57:02.181899 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jul 15 23:57:02.181915 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jul 15 23:57:02.181943 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jul 15 23:57:02.181960 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jul 15 23:57:02.181980 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jul 15 23:57:02.182004 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jul 15 23:57:02.182021 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jul 15 23:57:02.182037 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jul 15 23:57:02.182054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 15 23:57:02.182070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jul 15 23:57:02.182087 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jul 15 23:57:02.182105 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Jul 15 23:57:02.182122 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Jul 15 23:57:02.182142 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Jul 15 23:57:02.182159 kernel: Zone ranges:
Jul 15 23:57:02.182176 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 23:57:02.182192 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jul 15 23:57:02.182209 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Jul 15 23:57:02.182225 kernel:   Device   empty
Jul 15 23:57:02.182241 kernel: Movable zone start for each node
Jul 15 23:57:02.182256 kernel: Early memory node ranges
Jul 15 23:57:02.182272 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Jul 15 23:57:02.182291 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Jul 15 23:57:02.182307 kernel:   node   0: [mem 0x0000000000100000-0x00000000bd32afff]
Jul 15 23:57:02.182323 kernel:   node   0: [mem 0x00000000bd333000-0x00000000bf8ecfff]
Jul 15 23:57:02.182345 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jul 15 23:57:02.182380 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Jul 15 23:57:02.182396 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jul 15 23:57:02.182425 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 23:57:02.182444 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jul 15 23:57:02.182458 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jul 15 23:57:02.182475 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Jul 15 23:57:02.182497 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 15 23:57:02.182512 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jul 15 23:57:02.182535 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 15 23:57:02.182550 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 23:57:02.182566 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 23:57:02.182583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 23:57:02.182600 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 23:57:02.182618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 23:57:02.182640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 23:57:02.182658 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 23:57:02.182676 kernel: CPU topo: Max. logical packages: 1
Jul 15 23:57:02.182693 kernel: CPU topo: Max. logical dies: 1
Jul 15 23:57:02.182710 kernel: CPU topo: Max. dies per package: 1
Jul 15 23:57:02.182727 kernel: CPU topo: Max. threads per core: 2
Jul 15 23:57:02.182745 kernel: CPU topo: Num. cores per package: 1
Jul 15 23:57:02.182761 kernel: CPU topo: Num. threads per package: 2
Jul 15 23:57:02.182778 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 15 23:57:02.182796 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 15 23:57:02.182817 kernel: Booting paravirtualized kernel on KVM
Jul 15 23:57:02.182833 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 23:57:02.182850 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 15 23:57:02.182867 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 15 23:57:02.183041 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 15 23:57:02.183059 kernel: pcpu-alloc: [0] 0 1
Jul 15 23:57:02.183075 kernel: kvm-guest: PV spinlocks enabled
Jul 15 23:57:02.183092 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 23:57:02.183112 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:57:02.183145 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:57:02.183162 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 15 23:57:02.183179 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:57:02.183196 kernel: Fallback order for Node 0: 0
Jul 15 23:57:02.183212 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138
Jul 15 23:57:02.183229 kernel: Policy zone: Normal
Jul 15 23:57:02.183246 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:57:02.183262 kernel: software IO TLB: area num 2.
Jul 15 23:57:02.183296 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 23:57:02.183314 kernel: Kernel/User page tables isolation: enabled
Jul 15 23:57:02.183332 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 15 23:57:02.183353 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 23:57:02.183378 kernel: Dynamic Preempt: voluntary
Jul 15 23:57:02.183396 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:57:02.183420 kernel: rcu: 	RCU event tracing is enabled.
Jul 15 23:57:02.183438 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 23:57:02.183457 kernel: 	Trampoline variant of Tasks RCU enabled.
Jul 15 23:57:02.183478 kernel: 	Rude variant of Tasks RCU enabled.
Jul 15 23:57:02.183496 kernel: 	Tracing variant of Tasks RCU enabled.
Jul 15 23:57:02.183514 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:57:02.183538 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 23:57:02.183556 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:57:02.183575 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:57:02.183602 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:57:02.183625 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 15 23:57:02.183644 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:57:02.183662 kernel: Console: colour dummy device 80x25
Jul 15 23:57:02.183680 kernel: printk: legacy console [ttyS0] enabled
Jul 15 23:57:02.183697 kernel: ACPI: Core revision 20240827
Jul 15 23:57:02.183716 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 23:57:02.183733 kernel: x2apic enabled
Jul 15 23:57:02.183751 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 23:57:02.183768 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jul 15 23:57:02.183786 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jul 15 23:57:02.183808 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jul 15 23:57:02.183827 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jul 15 23:57:02.183844 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jul 15 23:57:02.183886 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 23:57:02.183912 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jul 15 23:57:02.183929 kernel: Spectre V2 : Mitigation: IBRS
Jul 15 23:57:02.183947 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 23:57:02.183964 kernel: RETBleed: Mitigation: IBRS
Jul 15 23:57:02.183995 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 23:57:02.184014 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jul 15 23:57:02.184031 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 23:57:02.184050 kernel: MDS: Mitigation: Clear CPU buffers
Jul 15 23:57:02.184068 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 15 23:57:02.184095 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 15 23:57:02.184113 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 23:57:02.184131 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 23:57:02.184149 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 23:57:02.184170 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 23:57:02.184188 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 15 23:57:02.184205 kernel: Freeing SMP alternatives memory: 32K
Jul 15 23:57:02.184223 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:57:02.184240 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:57:02.184258 kernel: landlock: Up and running.
Jul 15 23:57:02.184276 kernel: SELinux:  Initializing.
Jul 15 23:57:02.184294 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 15 23:57:02.184320 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 15 23:57:02.184342 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jul 15 23:57:02.184360 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jul 15 23:57:02.184378 kernel: signal: max sigframe size: 1776
Jul 15 23:57:02.184395 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:57:02.184412 kernel: rcu: 	Max phase no-delay instances is 400.
Jul 15 23:57:02.184431 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:57:02.184449 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 15 23:57:02.184490 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:57:02.184508 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 23:57:02.184529 kernel: .... node  #0, CPUs:      #1
Jul 15 23:57:02.184548 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 15 23:57:02.184567 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 15 23:57:02.184585 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 23:57:02.184603 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jul 15 23:57:02.184621 kernel: Memory: 7564012K/7860552K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 290708K reserved, 0K cma-reserved)
Jul 15 23:57:02.184639 kernel: devtmpfs: initialized
Jul 15 23:57:02.184656 kernel: x86/mm: Memory block size: 128MB
Jul 15 23:57:02.184677 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jul 15 23:57:02.184695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:57:02.184713 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 23:57:02.184730 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:57:02.184748 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:57:02.184765 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:57:02.184783 kernel: audit: type=2000 audit(1752623817.350:1): state=initialized audit_enabled=0 res=1
Jul 15 23:57:02.184801 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:57:02.184818 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 23:57:02.184839 kernel: cpuidle: using governor menu
Jul 15 23:57:02.184857 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:57:02.184893 kernel: dca service started, version 1.12.1
Jul 15 23:57:02.184912 kernel: PCI: Using configuration type 1 for base access
Jul 15 23:57:02.184929 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 23:57:02.184947 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:57:02.184966 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:57:02.184983 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:57:02.185007 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:57:02.185029 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:57:02.185046 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:57:02.185064 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:57:02.185081 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 15 23:57:02.185098 kernel: ACPI: Interpreter enabled
Jul 15 23:57:02.185115 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 23:57:02.185132 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 23:57:02.185150 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 23:57:02.185176 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 15 23:57:02.185204 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jul 15 23:57:02.185222 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:57:02.185479 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:57:02.185669 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 15 23:57:02.185849 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 15 23:57:02.187019 kernel: PCI host bridge to bus 0000:00
Jul 15 23:57:02.187239 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 23:57:02.187416 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 23:57:02.187581 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 23:57:02.187745 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jul 15 23:57:02.187941 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:57:02.188163 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:57:02.188376 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Jul 15 23:57:02.188588 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 15 23:57:02.188779 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 15 23:57:02.189005 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Jul 15 23:57:02.189204 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jul 15 23:57:02.189409 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Jul 15 23:57:02.189621 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 23:57:02.189825 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f]
Jul 15 23:57:02.191682 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Jul 15 23:57:02.191927 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:57:02.192129 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f]
Jul 15 23:57:02.192314 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Jul 15 23:57:02.192338 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 23:57:02.192357 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 23:57:02.192376 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 23:57:02.192400 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 23:57:02.192419 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 15 23:57:02.192438 kernel: iommu: Default domain type: Translated
Jul 15 23:57:02.192457 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 23:57:02.192475 kernel: efivars: Registered efivars operations
Jul 15 23:57:02.192494 kernel: PCI: Using ACPI for IRQ routing
Jul 15 23:57:02.192512 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 23:57:02.192531 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jul 15 23:57:02.192549 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jul 15 23:57:02.192571 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff]
Jul 15 23:57:02.192587 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jul 15 23:57:02.192605 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jul 15 23:57:02.192622 kernel: vgaarb: loaded
Jul 15 23:57:02.192641 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 23:57:02.192660 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:57:02.192678 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:57:02.192696 kernel: pnp: PnP ACPI init
Jul 15 23:57:02.192715 kernel: pnp: PnP ACPI: found 7 devices
Jul 15 23:57:02.192733 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 23:57:02.192756 kernel: NET: Registered PF_INET protocol family
Jul 15 23:57:02.192775 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 15 23:57:02.192794 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 15 23:57:02.192812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:57:02.192831 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:57:02.192849 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 15 23:57:02.192868 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 15 23:57:02.193922 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 15 23:57:02.193943 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 15 23:57:02.193969 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:57:02.193997 kernel: NET: Registered PF_XDP protocol family
Jul 15 23:57:02.194208 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 23:57:02.194389 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 23:57:02.194555 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 23:57:02.194718 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jul 15 23:57:02.195947 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 15 23:57:02.195991 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:57:02.196012 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 15 23:57:02.196031 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jul 15 23:57:02.196051 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 15 23:57:02.196069 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jul 15 23:57:02.196088 kernel: clocksource: Switched to clocksource tsc
Jul 15 23:57:02.196107 kernel: Initialise system trusted keyrings
Jul 15 23:57:02.196125 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 15 23:57:02.196144 kernel: Key type asymmetric registered
Jul 15 23:57:02.196166 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:57:02.196185 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 23:57:02.196204 kernel: io scheduler mq-deadline registered
Jul 15 23:57:02.196223 kernel: io scheduler kyber registered
Jul 15 23:57:02.196242 kernel: io scheduler bfq registered
Jul 15 23:57:02.196260 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 23:57:02.196279 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 15 23:57:02.196483 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jul 15 23:57:02.196507 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 15 23:57:02.196698 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jul 15 23:57:02.196723 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 15 23:57:02.197389 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jul 15 23:57:02.197420 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:57:02.197440 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 23:57:02.197459 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 15 23:57:02.197476 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jul 15 23:57:02.197495 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jul 15 23:57:02.197711 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jul 15 23:57:02.197736 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 23:57:02.197754 kernel: i8042: Warning: Keylock active
Jul 15 23:57:02.197773 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 23:57:02.197791 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 23:57:02.198404 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 15 23:57:02.198590 kernel: rtc_cmos 00:00: registered as rtc0
Jul 15 23:57:02.198762 kernel: rtc_cmos 00:00: setting system clock to 2025-07-15T23:57:01 UTC (1752623821)
Jul 15 23:57:02.200942 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 15 23:57:02.200974 kernel: intel_pstate: CPU model not supported
Jul 15 23:57:02.201001 kernel: pstore: Using crash dump compression: deflate
Jul 15 23:57:02.201021 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 15 23:57:02.201040 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:57:02.201059 kernel: Segment Routing with IPv6
Jul 15 23:57:02.201077 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:57:02.201096 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:57:02.201121 kernel: Key type dns_resolver registered
Jul 15 23:57:02.201139 kernel: IPI shorthand broadcast: enabled
Jul 15 23:57:02.201157 kernel: sched_clock: Marking stable (4101003836, 146052529)->(4298019340, -50962975)
Jul 15 23:57:02.201176 kernel: registered taskstats version 1
Jul 15 23:57:02.201195 kernel: Loading compiled-in X.509 certificates
Jul 15 23:57:02.201214 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d'
Jul 15 23:57:02.201233 kernel: Demotion targets for Node 0: null
Jul 15 23:57:02.201252 kernel: Key type .fscrypt registered
Jul 15 23:57:02.201270 kernel: Key type fscrypt-provisioning registered
Jul 15 23:57:02.201293 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:57:02.201312 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 23:57:02.201331 kernel: ima: No architecture policies found
Jul 15 23:57:02.201349 kernel: clk: Disabling unused clocks
Jul 15 23:57:02.201367 kernel: Warning: unable to open an initial console.
Jul 15 23:57:02.201386 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jul 15 23:57:02.201405 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 23:57:02.201424 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 23:57:02.201443 kernel: Run /init as init process
Jul 15 23:57:02.201465 kernel:   with arguments:
Jul 15 23:57:02.201483 kernel:     /init
Jul 15 23:57:02.201501 kernel:   with environment:
Jul 15 23:57:02.201520 kernel:     HOME=/
Jul 15 23:57:02.201538 kernel:     TERM=linux
Jul 15 23:57:02.201557 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:57:02.201577 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:57:02.201601 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:57:02.201625 systemd[1]: Detected virtualization google.
Jul 15 23:57:02.201644 systemd[1]: Detected architecture x86-64.
Jul 15 23:57:02.201663 systemd[1]: Running in initrd.
Jul 15 23:57:02.201682 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:57:02.201702 systemd[1]: Hostname set to .
Jul 15 23:57:02.201722 systemd[1]: Initializing machine ID from random generator.
Jul 15 23:57:02.201741 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:57:02.201765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:57:02.201805 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:57:02.201830 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:57:02.201852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:57:02.201924 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:57:02.201948 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:57:02.201975 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:57:02.202004 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:57:02.202024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 23:57:02.202045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:57:02.202065 systemd[1]: Reached target paths.target - Path Units. Jul 15 23:57:02.202086 systemd[1]: Reached target slices.target - Slice Units. Jul 15 23:57:02.202106 systemd[1]: Reached target swap.target - Swaps. Jul 15 23:57:02.202130 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:57:02.202150 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:57:02.202171 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:57:02.202191 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 23:57:02.202212 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 23:57:02.202232 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:57:02.202252 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:57:02.202273 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:57:02.202294 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:57:02.202319 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 23:57:02.202340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:57:02.202361 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 23:57:02.202383 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 23:57:02.202403 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 23:57:02.202425 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jul 15 23:57:02.202446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:57:02.202466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:57:02.202491 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 23:57:02.202514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:57:02.202535 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 23:57:02.202555 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:57:02.202609 systemd-journald[207]: Collecting audit messages is disabled. Jul 15 23:57:02.202658 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:57:02.202680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 23:57:02.202705 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 23:57:02.202727 systemd-journald[207]: Journal started Jul 15 23:57:02.202766 systemd-journald[207]: Runtime Journal (/run/log/journal/c7541205825348f8865aea581c0e525d) is 8M, max 148.9M, 140.9M free. Jul 15 23:57:02.176517 systemd-modules-load[208]: Inserted module 'overlay' Jul 15 23:57:02.213910 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:57:02.225906 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 23:57:02.228895 kernel: Bridge firewalling registered Jul 15 23:57:02.229099 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:57:02.230916 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 15 23:57:02.233049 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 15 23:57:02.234023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:57:02.242000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:57:02.265841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:57:02.265853 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:57:02.269335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:57:02.275089 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:57:02.279676 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:57:02.284362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:57:02.292048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:57:02.310152 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:57:02.364331 systemd-resolved[246]: Positive Trust Anchors:
Jul 15 23:57:02.364707 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:57:02.364794 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:57:02.369750 systemd-resolved[246]: Defaulting to hostname 'linux'.
Jul 15 23:57:02.371460 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:57:02.383108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:57:02.433918 kernel: SCSI subsystem initialized
Jul 15 23:57:02.445907 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:57:02.457924 kernel: iscsi: registered transport (tcp)
Jul 15 23:57:02.483379 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:57:02.483450 kernel: QLogic iSCSI HBA Driver
Jul 15 23:57:02.506209 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:57:02.524452 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:57:02.532068 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:57:02.591284 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:57:02.594450 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:57:02.652922 kernel: raid6: avx2x4 gen() 18561 MB/s Jul 15 23:57:02.669912 kernel: raid6: avx2x2 gen() 18531 MB/s Jul 15 23:57:02.687299 kernel: raid6: avx2x1 gen() 14499 MB/s Jul 15 23:57:02.687354 kernel: raid6: using algorithm avx2x4 gen() 18561 MB/s Jul 15 23:57:02.705286 kernel: raid6: .... xor() 8003 MB/s, rmw enabled Jul 15 23:57:02.705342 kernel: raid6: using avx2x2 recovery algorithm Jul 15 23:57:02.727914 kernel: xor: automatically using best checksumming function avx Jul 15 23:57:02.909913 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 23:57:02.918082 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:57:02.920399 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:57:02.952768 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jul 15 23:57:02.961467 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:57:02.966062 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 23:57:02.996184 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Jul 15 23:57:03.026535 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:57:03.033358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:57:03.122633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:57:03.129799 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 15 23:57:03.239962 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 23:57:03.251897 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jul 15 23:57:03.255895 kernel: AES CTR mode by8 optimization enabled Jul 15 23:57:03.326532 kernel: scsi host0: Virtio SCSI HBA Jul 15 23:57:03.343119 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 15 23:57:03.353906 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 23:57:03.364375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:57:03.364582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:57:03.371057 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:57:03.374733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:57:03.387439 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:57:03.393346 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 15 23:57:03.393638 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 15 23:57:03.393856 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 15 23:57:03.394344 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 15 23:57:03.394578 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 15 23:57:03.409910 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 23:57:03.409967 kernel: GPT:17805311 != 25165823 Jul 15 23:57:03.409992 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 23:57:03.410015 kernel: GPT:17805311 != 25165823 Jul 15 23:57:03.410036 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 15 23:57:03.410058 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:57:03.412299 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 15 23:57:03.429569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:57:03.516334 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jul 15 23:57:03.516989 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 23:57:03.536685 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jul 15 23:57:03.548055 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jul 15 23:57:03.548298 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jul 15 23:57:03.567221 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 15 23:57:03.567519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:57:03.572380 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:57:03.577291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:57:03.583431 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:57:03.603577 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:57:03.616381 disk-uuid[607]: Primary Header is updated. Jul 15 23:57:03.616381 disk-uuid[607]: Secondary Entries is updated. Jul 15 23:57:03.616381 disk-uuid[607]: Secondary Header is updated. Jul 15 23:57:03.632132 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 15 23:57:03.638540 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:57:03.656916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:57:04.672914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:57:04.675921 disk-uuid[608]: The operation has completed successfully. Jul 15 23:57:04.760167 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:57:04.760311 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:57:04.816224 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:57:04.834919 sh[629]: Success Jul 15 23:57:04.857900 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:57:04.857965 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:57:04.857992 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:57:04.870900 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 15 23:57:04.957797 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:57:04.962135 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 23:57:04.980871 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 23:57:05.002484 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:57:05.002564 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (641) Jul 15 23:57:05.005899 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e Jul 15 23:57:05.005959 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:57:05.005984 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:57:05.037323 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jul 15 23:57:05.038103 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:57:05.041457 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 23:57:05.043484 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:57:05.053501 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 23:57:05.086905 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (674) Jul 15 23:57:05.091446 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:57:05.091491 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:57:05.091515 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:57:05.102898 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:57:05.104062 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:57:05.110785 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 23:57:05.193771 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:57:05.197280 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:57:05.269745 systemd-networkd[810]: lo: Link UP Jul 15 23:57:05.269764 systemd-networkd[810]: lo: Gained carrier Jul 15 23:57:05.272023 systemd-networkd[810]: Enumeration completed Jul 15 23:57:05.272154 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:57:05.272647 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:57:05.272654 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 23:57:05.274243 systemd-networkd[810]: eth0: Link UP Jul 15 23:57:05.274250 systemd-networkd[810]: eth0: Gained carrier Jul 15 23:57:05.274265 systemd-networkd[810]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:57:05.291115 systemd-networkd[810]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce.c.flatcar-212911.internal' to 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:57:05.291133 systemd-networkd[810]: eth0: DHCPv4 address 10.128.0.36/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 15 23:57:05.295599 systemd[1]: Reached target network.target - Network. Jul 15 23:57:05.370128 ignition[733]: Ignition 2.21.0 Jul 15 23:57:05.370147 ignition[733]: Stage: fetch-offline Jul 15 23:57:05.373213 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:57:05.370204 ignition[733]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:57:05.378527 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 15 23:57:05.370218 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:57:05.370378 ignition[733]: parsed url from cmdline: "" Jul 15 23:57:05.370385 ignition[733]: no config URL provided Jul 15 23:57:05.370396 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:57:05.370420 ignition[733]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:57:05.370430 ignition[733]: failed to fetch config: resource requires networking Jul 15 23:57:05.370937 ignition[733]: Ignition finished successfully Jul 15 23:57:05.418516 ignition[819]: Ignition 2.21.0 Jul 15 23:57:05.418532 ignition[819]: Stage: fetch Jul 15 23:57:05.418785 ignition[819]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:57:05.418803 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:57:05.418986 ignition[819]: parsed url from cmdline: "" Jul 15 23:57:05.418993 ignition[819]: no config URL provided Jul 15 23:57:05.419003 ignition[819]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:57:05.419017 ignition[819]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:57:05.433127 unknown[819]: fetched base config from "system" Jul 15 23:57:05.419063 ignition[819]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 15 23:57:05.433140 unknown[819]: fetched base config from "system" Jul 15 23:57:05.422325 ignition[819]: GET result: OK Jul 15 23:57:05.433150 unknown[819]: fetched user config from "gcp" Jul 15 23:57:05.422493 ignition[819]: parsing config with SHA512: 10024573b5e3d692edd0c435818a59226220ddf1eebc08b39049e94ec5c47e4b6c89c117b33b0c56b3a1cf9f300c0322c0a4db7fa967f955417ca304f8210e6e Jul 15 23:57:05.436922 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 23:57:05.433653 ignition[819]: fetch: fetch complete Jul 15 23:57:05.442117 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 15 23:57:05.433659 ignition[819]: fetch: fetch passed Jul 15 23:57:05.433719 ignition[819]: Ignition finished successfully Jul 15 23:57:05.487956 ignition[826]: Ignition 2.21.0 Jul 15 23:57:05.487974 ignition[826]: Stage: kargs Jul 15 23:57:05.488188 ignition[826]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:57:05.492429 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:57:05.488205 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:57:05.497485 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 23:57:05.490326 ignition[826]: kargs: kargs passed Jul 15 23:57:05.490430 ignition[826]: Ignition finished successfully Jul 15 23:57:05.528406 ignition[833]: Ignition 2.21.0 Jul 15 23:57:05.528424 ignition[833]: Stage: disks Jul 15 23:57:05.531824 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:57:05.528675 ignition[833]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:57:05.535287 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:57:05.528692 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:57:05.541015 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:57:05.529965 ignition[833]: disks: disks passed Jul 15 23:57:05.545026 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:57:05.530018 ignition[833]: Ignition finished successfully Jul 15 23:57:05.549006 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:57:05.553003 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:57:05.558267 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 23:57:05.605725 systemd-fsck[842]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 15 23:57:05.618334 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jul 15 23:57:05.624092 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 23:57:05.797913 kernel: EXT4-fs (sda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none. Jul 15 23:57:05.798639 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:57:05.802100 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:57:05.804321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:57:05.819978 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:57:05.827248 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 23:57:05.827308 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:57:05.827367 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:57:05.842268 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (850) Jul 15 23:57:05.842311 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:57:05.842338 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:57:05.842370 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:57:05.839328 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:57:05.846998 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:57:05.849268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 23:57:05.975094 initrd-setup-root[874]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:57:05.983661 initrd-setup-root[881]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:57:05.991911 initrd-setup-root[888]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:57:05.998127 initrd-setup-root[895]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:57:06.158773 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 23:57:06.160790 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 23:57:06.175379 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 23:57:06.187953 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 23:57:06.190993 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:57:06.231226 ignition[962]: INFO : Ignition 2.21.0 Jul 15 23:57:06.231226 ignition[962]: INFO : Stage: mount Jul 15 23:57:06.231226 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:57:06.231226 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:57:06.232550 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 23:57:06.252108 ignition[962]: INFO : mount: mount passed Jul 15 23:57:06.252108 ignition[962]: INFO : Ignition finished successfully Jul 15 23:57:06.239490 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 23:57:06.244471 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 23:57:06.268971 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 15 23:57:06.296909 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (975)
Jul 15 23:57:06.300074 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:57:06.300135 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:57:06.300160 kernel: BTRFS info (device sda6): using free-space-tree
Jul 15 23:57:06.308618 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:57:06.347404 ignition[992]: INFO : Ignition 2.21.0
Jul 15 23:57:06.347404 ignition[992]: INFO : Stage: files
Jul 15 23:57:06.352002 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:57:06.352002 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jul 15 23:57:06.352002 ignition[992]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 23:57:06.362037 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 23:57:06.362037 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 23:57:06.362037 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 23:57:06.362037 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 23:57:06.362037 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 23:57:06.358679 unknown[992]: wrote ssh authorized keys file for user: core
Jul 15 23:57:06.380050 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 15 23:57:06.380050 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 15 23:57:06.511406 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 23:57:06.770163 systemd-networkd[810]: eth0: Gained IPv6LL
Jul 15 23:57:07.125809 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 15 23:57:07.125809 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 23:57:07.136076 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 15 23:57:07.562935 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 15 23:57:07.934245 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 23:57:07.934245 ignition[992]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 15 23:57:07.942005 ignition[992]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:57:07.942005 ignition[992]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:57:07.942005 ignition[992]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 15 23:57:07.942005 ignition[992]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 23:57:07.942005 ignition[992]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 23:57:07.942005 ignition[992]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:57:07.942005 ignition[992]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:57:07.942005 ignition[992]: INFO : files: files passed
Jul 15 23:57:07.942005 ignition[992]: INFO : Ignition finished successfully
Jul 15 23:57:07.943764 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 23:57:07.947770 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 23:57:07.969632 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 23:57:07.977467 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 23:57:07.977653 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 23:57:08.004943 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:57:08.009008 initrd-setup-root-after-ignition[1022]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:57:08.007632 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:57:08.019690 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:57:08.009788 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 23:57:08.019562 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 23:57:08.087392 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 23:57:08.087551 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 23:57:08.093545 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 23:57:08.095324 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 23:57:08.099403 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 23:57:08.101540 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 23:57:08.130038 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:57:08.135893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:57:08.161764 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:57:08.162171 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:57:08.166369 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 23:57:08.171690 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 23:57:08.172139 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:57:08.180358 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 23:57:08.183482 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 23:57:08.187459 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 23:57:08.191485 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:57:08.195502 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 23:57:08.200465 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:57:08.204437 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 23:57:08.208416 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:57:08.212785 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 23:57:08.217544 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 23:57:08.221462 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 23:57:08.225415 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 23:57:08.225908 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:57:08.236177 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:57:08.236790 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:57:08.241371 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 23:57:08.241516 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:57:08.245339 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 23:57:08.245558 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:57:08.258064 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 23:57:08.258445 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:57:08.262401 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 23:57:08.262610 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 23:57:08.269542 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 23:57:08.275138 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:57:08.275360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:57:08.290163 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 23:57:08.294125 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 23:57:08.295170 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:57:08.297824 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 23:57:08.298452 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:57:08.314572 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 23:57:08.317115 ignition[1046]: INFO : Ignition 2.21.0
Jul 15 23:57:08.317115 ignition[1046]: INFO : Stage: umount
Jul 15 23:57:08.317115 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:57:08.317115 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jul 15 23:57:08.316968 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 23:57:08.338002 ignition[1046]: INFO : umount: umount passed
Jul 15 23:57:08.338002 ignition[1046]: INFO : Ignition finished successfully
Jul 15 23:57:08.321181 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 23:57:08.321330 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 23:57:08.336174 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 23:57:08.340325 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 23:57:08.340414 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 23:57:08.342231 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 23:57:08.342407 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 23:57:08.349191 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 23:57:08.349264 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 23:57:08.353246 systemd[1]: Stopped target network.target - Network.
Jul 15 23:57:08.357246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 23:57:08.357306 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:57:08.361288 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 23:57:08.365192 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 23:57:08.366960 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:57:08.369220 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 23:57:08.373425 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 23:57:08.377281 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 23:57:08.377451 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:57:08.381253 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 23:57:08.381408 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:57:08.385224 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 23:57:08.385402 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 23:57:08.389439 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 23:57:08.389531 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 23:57:08.393677 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 23:57:08.402108 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 23:57:08.407453 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 23:57:08.407608 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 23:57:08.415707 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 23:57:08.416026 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 23:57:08.416160 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 23:57:08.423679 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 23:57:08.424065 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 23:57:08.424183 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 23:57:08.431560 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 23:57:08.436120 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 23:57:08.436211 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:57:08.442201 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 23:57:08.442279 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 23:57:08.447273 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 23:57:08.455091 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 23:57:08.455331 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:57:08.464110 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:57:08.464201 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:57:08.468242 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:57:08.468330 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:57:08.472217 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:57:08.472301 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:57:08.479489 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:57:08.487746 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:57:08.487855 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:57:08.495528 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:57:08.495792 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:57:08.508727 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:57:08.508890 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:57:08.516444 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:57:08.516554 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:57:08.522186 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:57:08.522237 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:57:08.525331 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:57:08.525421 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:57:08.534995 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:57:08.535086 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:57:08.539149 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:57:08.539224 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:57:08.546678 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:57:08.561001 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:57:08.561101 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:57:08.566162 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:57:08.566235 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:57:08.569466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:57:08.569683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:57:08.580425 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 23:57:08.580490 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 23:57:08.653034 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:57:08.580536 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:57:08.581411 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:57:08.581527 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:57:08.585346 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:57:08.589549 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:57:08.614080 systemd[1]: Switching root.
Jul 15 23:57:08.671010 systemd-journald[207]: Journal stopped
Jul 15 23:57:10.695525 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:57:10.695585 kernel: SELinux: policy capability open_perms=1
Jul 15 23:57:10.695609 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:57:10.695629 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:57:10.695649 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:57:10.695668 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:57:10.695695 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:57:10.695715 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:57:10.695736 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:57:10.695756 kernel: audit: type=1403 audit(1752623829.219:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:57:10.695781 systemd[1]: Successfully loaded SELinux policy in 51.773ms.
Jul 15 23:57:10.695806 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.861ms.
Jul 15 23:57:10.695831 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:57:10.695859 systemd[1]: Detected virtualization google.
Jul 15 23:57:10.696009 systemd[1]: Detected architecture x86-64.
Jul 15 23:57:10.696037 systemd[1]: Detected first boot.
Jul 15 23:57:10.696059 systemd[1]: Initializing machine ID from random generator.
Jul 15 23:57:10.696080 zram_generator::config[1090]: No configuration found.
Jul 15 23:57:10.696120 kernel: Guest personality initialized and is inactive
Jul 15 23:57:10.696142 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 23:57:10.696162 kernel: Initialized host personality
Jul 15 23:57:10.696183 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:57:10.696205 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:57:10.696229 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:57:10.696250 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:57:10.696276 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:57:10.696297 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:57:10.696319 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:57:10.696352 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:57:10.696377 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:57:10.696401 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:57:10.696425 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:57:10.696451 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:57:10.696474 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:57:10.696495 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:57:10.696516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:57:10.696547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:57:10.696572 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:57:10.696596 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:57:10.696620 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:57:10.696651 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:57:10.696677 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 23:57:10.696700 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:57:10.696722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:57:10.696744 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:57:10.696766 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:57:10.696799 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:57:10.696823 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:57:10.696852 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:57:10.696894 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:57:10.696918 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:57:10.696948 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:57:10.696970 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:57:10.696993 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:57:10.697024 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:57:10.697055 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:57:10.697080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:57:10.697105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:57:10.697128 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:57:10.697151 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:57:10.697173 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:57:10.697199 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:57:10.697222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:57:10.697255 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:57:10.697281 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:57:10.697305 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:57:10.697329 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:57:10.697352 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:57:10.697375 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:57:10.697402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:57:10.697449 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:57:10.697477 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:57:10.697502 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:57:10.697527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:57:10.697552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:57:10.697577 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:57:10.697601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:57:10.697627 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:57:10.697656 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:57:10.697681 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:57:10.697704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:57:10.697729 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:57:10.697756 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:57:10.697780 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:57:10.697805 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:57:10.697831 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:57:10.697862 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:57:10.697911 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:57:10.697945 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:57:10.697969 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:57:10.697994 systemd[1]: Stopped verity-setup.service.
Jul 15 23:57:10.698020 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:57:10.698044 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:57:10.698069 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:57:10.698100 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:57:10.698125 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:57:10.698150 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:57:10.698174 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:57:10.698199 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:57:10.698224 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:57:10.698248 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:57:10.698273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:57:10.698298 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:57:10.698327 kernel: fuse: init (API version 7.41)
Jul 15 23:57:10.698350 kernel: ACPI: bus type drm_connector registered
Jul 15 23:57:10.698373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:57:10.698399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:57:10.698422 kernel: loop: module loaded
Jul 15 23:57:10.698445 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:57:10.698469 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:57:10.698494 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:57:10.698523 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:57:10.698548 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:57:10.698573 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:57:10.698597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:57:10.698622 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:57:10.698647 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:57:10.698672 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:57:10.698712 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:57:10.698737 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:57:10.698805 systemd-journald[1161]: Collecting audit messages is disabled.
Jul 15 23:57:10.698861 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:57:10.699948 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:57:10.699980 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:57:10.700004 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:57:10.700030 systemd-journald[1161]: Journal started
Jul 15 23:57:10.700073 systemd-journald[1161]: Runtime Journal (/run/log/journal/e2c2a6f26e294796a3b4fbd274ec7cfd) is 8M, max 148.9M, 140.9M free.
Jul 15 23:57:10.112798 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:57:10.138632 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 15 23:57:10.139239 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:57:10.706355 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:57:10.711684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:57:10.716977 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:57:10.722689 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:57:10.730322 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:57:10.735022 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:57:10.744904 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:57:10.755793 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:57:10.762005 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:57:10.769695 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:57:10.775750 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:57:10.780286 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:57:10.784968 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:57:10.836754 kernel: loop0: detected capacity change from 0 to 52072
Jul 15 23:57:10.839796 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:57:10.847706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:57:10.856098 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:57:10.861060 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:57:10.864582 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:57:10.901411 systemd-journald[1161]: Time spent on flushing to /var/log/journal/e2c2a6f26e294796a3b4fbd274ec7cfd is 60.958ms for 958 entries.
Jul 15 23:57:10.901411 systemd-journald[1161]: System Journal (/var/log/journal/e2c2a6f26e294796a3b4fbd274ec7cfd) is 8M, max 584.8M, 576.8M free.
Jul 15 23:57:10.987921 systemd-journald[1161]: Received client request to flush runtime journal.
Jul 15 23:57:10.988019 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:57:10.988064 kernel: loop1: detected capacity change from 0 to 221472
Jul 15 23:57:10.908682 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:57:10.938398 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:57:10.991492 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:57:11.020363 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:57:11.025398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:57:11.082968 kernel: loop2: detected capacity change from 0 to 146240
Jul 15 23:57:11.087737 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Jul 15 23:57:11.087821 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Jul 15 23:57:11.103910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:57:11.148863 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:57:11.178906 kernel: loop3: detected capacity change from 0 to 113872
Jul 15 23:57:11.254902 kernel: loop4: detected capacity change from 0 to 52072
Jul 15 23:57:11.286921 kernel: loop5: detected capacity change from 0 to 221472
Jul 15 23:57:11.324920 kernel: loop6: detected capacity change from 0 to 146240
Jul 15 23:57:11.373909 kernel: loop7: detected capacity change from 0 to 113872
Jul 15 23:57:11.419602 (sd-merge)[1236]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jul 15 23:57:11.420598 (sd-merge)[1236]: Merged extensions into '/usr'.
Jul 15 23:57:11.430169 systemd[1]: Reload requested from client PID 1193 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:57:11.430365 systemd[1]: Reloading...
Jul 15 23:57:11.586148 zram_generator::config[1258]: No configuration found.
Jul 15 23:57:11.856427 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:57:11.896903 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:57:12.065542 systemd[1]: Reloading finished in 634 ms.
Jul 15 23:57:12.081574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:57:12.085774 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:57:12.104096 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:57:12.110057 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:57:12.143100 systemd[1]: Reload requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:57:12.143129 systemd[1]: Reloading...
Jul 15 23:57:12.181091 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:57:12.183501 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:57:12.185926 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:57:12.186610 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:57:12.190636 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:57:12.191765 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Jul 15 23:57:12.192031 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Jul 15 23:57:12.201411 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:57:12.201435 systemd-tmpfiles[1303]: Skipping /boot
Jul 15 23:57:12.259953 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:57:12.260151 systemd-tmpfiles[1303]: Skipping /boot
Jul 15 23:57:12.307986 zram_generator::config[1330]: No configuration found.
Jul 15 23:57:12.437302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:57:12.550384 systemd[1]: Reloading finished in 406 ms.
Jul 15 23:57:12.573759 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 23:57:12.592340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:57:12.604418 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:57:12.610182 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 23:57:12.615089 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 23:57:12.625572 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:57:12.632455 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:57:12.636908 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 23:57:12.643956 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:57:12.644320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:57:12.650278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:57:12.655429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:57:12.661398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:57:12.664175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:57:12.664432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:57:12.664741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:57:12.670500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:57:12.671205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:57:12.671479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:57:12.671642 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:57:12.671803 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:57:12.684459 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 23:57:12.705151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:57:12.705450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:57:12.709792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:57:12.711241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:57:12.721461 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:57:12.723359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:57:12.730318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:57:12.735193 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 15 23:57:12.738573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:57:12.738791 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:57:12.739048 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:57:12.739310 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:57:12.742152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:57:12.746982 systemd[1]: Finished ensure-sysext.service.
Jul 15 23:57:12.751607 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:57:12.752029 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:57:12.765982 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 23:57:12.777817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:57:12.784499 systemd-udevd[1377]: Using default interface naming scheme 'v255'.
Jul 15 23:57:12.798034 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:57:12.799222 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:57:12.835681 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 23:57:12.842621 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 23:57:12.846429 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 15 23:57:12.855371 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Jul 15 23:57:12.888788 augenrules[1415]: No rules
Jul 15 23:57:12.892434 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 23:57:12.895252 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 23:57:12.899528 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:57:12.899934 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:57:12.913751 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 23:57:12.918131 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 23:57:12.937939 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:57:12.947622 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:57:12.961579 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Jul 15 23:57:13.100992 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Jul 15 23:57:13.102035 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Jul 15 23:57:13.160661 systemd-resolved[1375]: Positive Trust Anchors:
Jul 15 23:57:13.161385 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 23:57:13.161612 systemd-resolved[1375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:57:13.161694 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:57:13.177024 systemd-resolved[1375]: Defaulting to hostname 'linux'.
Jul 15 23:57:13.180846 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:57:13.185688 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:57:13.191022 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:57:13.194309 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 23:57:13.198928 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 23:57:13.202009 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 15 23:57:13.213291 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 23:57:13.222292 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 23:57:13.234079 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 23:57:13.244090 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 23:57:13.244386 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:57:13.252002 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:57:13.263356 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 23:57:13.277708 systemd-networkd[1440]: lo: Link UP
Jul 15 23:57:13.278367 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 23:57:13.280007 systemd-networkd[1440]: lo: Gained carrier
Jul 15 23:57:13.288446 systemd-networkd[1440]: Enumeration completed
Jul 15 23:57:13.292475 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:57:13.292932 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:57:13.294588 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 23:57:13.295089 systemd-networkd[1440]: eth0: Link UP
Jul 15 23:57:13.295325 systemd-networkd[1440]: eth0: Gained carrier
Jul 15 23:57:13.295352 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:57:13.305235 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 23:57:13.306951 systemd-networkd[1440]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce.c.flatcar-212911.internal' to 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce'
Jul 15 23:57:13.306976 systemd-networkd[1440]: eth0: DHCPv4 address 10.128.0.36/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jul 15 23:57:13.315016 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 23:57:13.333003 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 23:57:13.342740 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 23:57:13.354118 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:57:13.363498 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 23:57:13.383013 systemd[1]: Reached target network.target - Network.
Jul 15 23:57:13.390003 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:57:13.399013 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:57:13.408082 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:57:13.408282 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:57:13.411226 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 23:57:13.430895 kernel: mousedev: PS/2 mouse device common for all mice
Jul 15 23:57:13.423626 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 15 23:57:13.445980 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 23:57:13.455132 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 23:57:13.470205 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 23:57:13.486711 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 23:57:13.498975 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 23:57:13.502731 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 15 23:57:13.515330 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 23:57:13.528840 systemd[1]: Started ntpd.service - Network Time Service.
Jul 15 23:57:13.544903 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 15 23:57:13.545542 jq[1483]: false
Jul 15 23:57:13.564901 kernel: ACPI: button: Power Button [PWRF]
Jul 15 23:57:13.564990 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jul 15 23:57:13.565026 kernel: ACPI: button: Sleep Button [SLPF]
Jul 15 23:57:13.553076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 23:57:13.571976 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 23:57:13.614818 extend-filesystems[1484]: Found /dev/sda6
Jul 15 23:57:13.625657 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 23:57:13.637982 extend-filesystems[1484]: Found /dev/sda9
Jul 15 23:57:13.655049 extend-filesystems[1484]: Checking size of /dev/sda9
Jul 15 23:57:13.655683 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 23:57:13.668625 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Refreshing passwd entry cache
Jul 15 23:57:13.672949 oslogin_cache_refresh[1485]: Refreshing passwd entry cache
Jul 15 23:57:13.685009 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 23:57:13.693361 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Failure getting users, quitting
Jul 15 23:57:13.693361 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 23:57:13.693361 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Refreshing group entry cache
Jul 15 23:57:13.692253 oslogin_cache_refresh[1485]: Failure getting users, quitting
Jul 15 23:57:13.692282 oslogin_cache_refresh[1485]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 23:57:13.692350 oslogin_cache_refresh[1485]: Refreshing group entry cache
Jul 15 23:57:13.707646 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jul 15 23:57:13.705056 oslogin_cache_refresh[1485]: Failure getting groups, quitting
Jul 15 23:57:13.708151 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Failure getting groups, quitting
Jul 15 23:57:13.708151 google_oslogin_nss_cache[1485]: oslogin_cache_refresh[1485]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 23:57:13.705079 oslogin_cache_refresh[1485]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 23:57:13.721444 extend-filesystems[1484]: Resized partition /dev/sda9
Jul 15 23:57:13.719194 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 23:57:13.739662 extend-filesystems[1513]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 23:57:13.730469 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Jul 15 23:57:13.733213 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 23:57:13.743462 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 23:57:13.757205 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Jul 15 23:57:13.771769 coreos-metadata[1480]: Jul 15 23:57:13.770 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Jul 15 23:57:13.771769 coreos-metadata[1480]: Jul 15 23:57:13.771 INFO Fetch successful
Jul 15 23:57:13.771769 coreos-metadata[1480]: Jul 15 23:57:13.771 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Jul 15 23:57:13.771769 coreos-metadata[1480]: Jul 15 23:57:13.771 INFO Fetch successful
Jul 15 23:57:13.772274 coreos-metadata[1480]: Jul 15 23:57:13.771 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Jul 15 23:57:13.775939 coreos-metadata[1480]: Jul 15 23:57:13.772 INFO Fetch successful
Jul 15 23:57:13.775939 coreos-metadata[1480]: Jul 15 23:57:13.772 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Jul 15 23:57:13.775939 coreos-metadata[1480]: Jul 15 23:57:13.773 INFO Fetch successful
Jul 15 23:57:13.773784 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 23:57:13.799324 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Jul 15 23:57:13.809436 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 23:57:13.840737 ntpd[1489]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting
Jul 15 23:57:13.821031 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: ----------------------------------------------------
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: ntp-4 is maintained by Network Time Foundation,
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: corporation. Support and training for ntp-4 are
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: available at https://www.nwtime.org/support
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: ----------------------------------------------------
Jul 15 23:57:13.859341 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: proto: precision = 0.110 usec (-23)
Jul 15 23:57:13.840769 ntpd[1489]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 15 23:57:13.821364 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 23:57:13.840784 ntpd[1489]: ----------------------------------------------------
Jul 15 23:57:13.821866 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 15 23:57:13.840798 ntpd[1489]: ntp-4 is maintained by Network Time Foundation,
Jul 15 23:57:13.871219 extend-filesystems[1513]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 15 23:57:13.871219 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 2
Jul 15 23:57:13.871219 extend-filesystems[1513]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Jul 15 23:57:13.915344 update_engine[1514]: I20250715 23:57:13.861923 1514 main.cc:92] Flatcar Update Engine starting
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: basedate set to 2025-07-03
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: gps base set to 2025-07-06 (week 2374)
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: Listen and drop on 0 v6wildcard [::]:123
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: Listen normally on 2 lo 127.0.0.1:123
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: Listen normally on 3 eth0 10.128.0.36:123
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: Listen normally on 4 lo [::1]:123
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: bind(21) AF_INET6 fe80::4001:aff:fe80:24%2#123 flags 0x11 failed: Cannot assign requested address
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:24%2#123
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: failed to init interface for address fe80::4001:aff:fe80:24%2
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: Listening on routing socket on fd #21 for interface updates
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:57:13.915704 ntpd[1489]: 15 Jul 23:57:13 ntpd[1489]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:57:13.822230 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 15 23:57:13.840813 ntpd[1489]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 15 23:57:13.916649 extend-filesystems[1484]: Resized filesystem in /dev/sda9
Jul 15 23:57:13.924140 jq[1519]: true
Jul 15 23:57:13.831510 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 23:57:13.840827 ntpd[1489]: corporation. Support and training for ntp-4 are
Jul 15 23:57:13.831838 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 23:57:13.840840 ntpd[1489]: available at https://www.nwtime.org/support
Jul 15 23:57:13.849799 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 23:57:13.840854 ntpd[1489]: ----------------------------------------------------
Jul 15 23:57:13.855536 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 23:57:13.854638 ntpd[1489]: proto: precision = 0.110 usec (-23)
Jul 15 23:57:13.866707 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 23:57:13.860796 ntpd[1489]: basedate set to 2025-07-03
Jul 15 23:57:13.868040 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 23:57:13.860820 ntpd[1489]: gps base set to 2025-07-06 (week 2374)
Jul 15 23:57:13.903565 systemd-logind[1507]: New seat seat0.
Jul 15 23:57:13.879138 ntpd[1489]: Listen and drop on 0 v6wildcard [::]:123
Jul 15 23:57:13.905738 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 23:57:13.879202 ntpd[1489]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 15 23:57:13.880253 ntpd[1489]: Listen normally on 2 lo 127.0.0.1:123
Jul 15 23:57:13.880313 ntpd[1489]: Listen normally on 3 eth0 10.128.0.36:123
Jul 15 23:57:13.880375 ntpd[1489]: Listen normally on 4 lo [::1]:123
Jul 15 23:57:13.880441 ntpd[1489]: bind(21) AF_INET6 fe80::4001:aff:fe80:24%2#123 flags 0x11 failed: Cannot assign requested address
Jul 15 23:57:13.880471 ntpd[1489]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:24%2#123
Jul 15 23:57:13.930481 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 23:57:13.880493 ntpd[1489]: failed to init interface for address fe80::4001:aff:fe80:24%2
Jul 15 23:57:13.880540 ntpd[1489]: Listening on routing socket on fd #21 for interface updates
Jul 15 23:57:13.906430 ntpd[1489]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:57:13.906476 ntpd[1489]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:57:13.987141 jq[1543]: true
Jul 15 23:57:13.990620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jul 15 23:57:14.057204 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 23:57:14.068862 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 15 23:57:14.083238 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 23:57:14.141970 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 23:57:14.184940 tar[1529]: linux-amd64/helm
Jul 15 23:57:14.257540 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 23:57:14.282786 bash[1573]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:57:14.285737 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 23:57:14.314364 systemd[1]: Starting sshkeys.service...
Jul 15 23:57:14.341703 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 23:57:14.356237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:57:14.391124 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 15 23:57:14.410041 kernel: EDAC MC: Ver: 3.0.0
Jul 15 23:57:14.412000 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 15 23:57:14.461538 systemd-logind[1507]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 15 23:57:14.492038 dbus-daemon[1481]: [system] SELinux support is enabled
Jul 15 23:57:14.493156 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 23:57:14.501980 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 23:57:14.502331 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 23:57:14.502890 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 23:57:14.503119 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 23:57:14.511504 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 15 23:57:14.512591 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1440 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 15 23:57:14.530014 update_engine[1514]: I20250715 23:57:14.523695 1514 update_check_scheduler.cc:74] Next update check in 2m33s
Jul 15 23:57:14.524583 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 15 23:57:14.525077 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 23:57:14.558290 coreos-metadata[1580]: Jul 15 23:57:14.557 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Jul 15 23:57:14.559745 coreos-metadata[1580]: Jul 15 23:57:14.558 INFO Fetch failed with 404: resource not found
Jul 15 23:57:14.559745 coreos-metadata[1580]: Jul 15 23:57:14.558 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Jul 15 23:57:14.559745 coreos-metadata[1580]: Jul 15 23:57:14.559 INFO Fetch successful
Jul 15 23:57:14.559745 coreos-metadata[1580]: Jul 15 23:57:14.559 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Jul 15 23:57:14.559745 coreos-metadata[1580]: Jul 15 23:57:14.559 INFO Fetch failed with 404: resource not found
Jul 15 23:57:14.559745 coreos-metadata[1580]: Jul 15 23:57:14.559 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Jul 15 23:57:14.558689 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 23:57:14.560426 coreos-metadata[1580]: Jul 15 23:57:14.560 INFO Fetch failed with 404: resource not found
Jul 15 23:57:14.560426 coreos-metadata[1580]: Jul 15 23:57:14.560 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Jul 15 23:57:14.563102 coreos-metadata[1580]: Jul 15 23:57:14.560 INFO Fetch successful
Jul 15 23:57:14.571604 unknown[1580]: wrote ssh authorized keys file for user: core
Jul 15 23:57:14.605034 systemd-logind[1507]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jul 15 23:57:14.653940 update-ssh-keys[1594]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:57:14.652222 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 15 23:57:14.672497 systemd[1]: Finished sshkeys.service.
Jul 15 23:57:14.766038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:57:14.852507 ntpd[1489]: bind(24) AF_INET6 fe80::4001:aff:fe80:24%2#123 flags 0x11 failed: Cannot assign requested address
Jul 15 23:57:14.854092 ntpd[1489]: 15 Jul 23:57:14 ntpd[1489]: bind(24) AF_INET6 fe80::4001:aff:fe80:24%2#123 flags 0x11 failed: Cannot assign requested address
Jul 15 23:57:14.854092 ntpd[1489]: 15 Jul 23:57:14 ntpd[1489]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:24%2#123
Jul 15 23:57:14.854092 ntpd[1489]: 15 Jul 23:57:14 ntpd[1489]: failed to init interface for address fe80::4001:aff:fe80:24%2
Jul 15 23:57:14.852555 ntpd[1489]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:24%2#123
Jul 15 23:57:14.852575 ntpd[1489]: failed to init interface for address fe80::4001:aff:fe80:24%2
Jul 15 23:57:15.072963 sshd_keygen[1524]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 23:57:15.149626 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 23:57:15.154454 systemd-networkd[1440]: eth0: Gained IPv6LL
Jul 15 23:57:15.158907 containerd[1533]: time="2025-07-15T23:57:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 23:57:15.161385 containerd[1533]: time="2025-07-15T23:57:15.160660180Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 15 23:57:15.168283 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 23:57:15.180788 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 23:57:15.194077 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 23:57:15.207262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:15.213745 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 15 23:57:15.220957 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1591 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 15 23:57:15.221330 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 23:57:15.229049 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 23:57:15.233461 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Jul 15 23:57:15.243166 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 15 23:57:15.246406 containerd[1533]: time="2025-07-15T23:57:15.246358854Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.85µs"
Jul 15 23:57:15.246898 containerd[1533]: time="2025-07-15T23:57:15.246506907Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 23:57:15.246898 containerd[1533]: time="2025-07-15T23:57:15.246539616Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 23:57:15.246898 containerd[1533]: time="2025-07-15T23:57:15.246735765Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 23:57:15.246898 containerd[1533]: time="2025-07-15T23:57:15.246760628Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 23:57:15.246898 containerd[1533]: time="2025-07-15T23:57:15.246798003Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:57:15.247186 containerd[1533]: time="2025-07-15T23:57:15.247158247Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:57:15.247261 containerd[1533]: time="2025-07-15T23:57:15.247244655Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:57:15.247811 containerd[1533]: time="2025-07-15T23:57:15.247778803Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:57:15.247946 containerd[1533]: time="2025-07-15T23:57:15.247925962Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:57:15.248040 containerd[1533]: time="2025-07-15T23:57:15.248020149Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:57:15.248640 containerd[1533]: time="2025-07-15T23:57:15.248110892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 23:57:15.248640 containerd[1533]: time="2025-07-15T23:57:15.248243783Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 23:57:15.248640 containerd[1533]: time="2025-07-15T23:57:15.248518480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:57:15.248640 containerd[1533]: time="2025-07-15T23:57:15.248576208Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:57:15.248640 containerd[1533]: time="2025-07-15T23:57:15.248595625Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 23:57:15.249147 containerd[1533]: time="2025-07-15T23:57:15.249079812Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 23:57:15.249720 containerd[1533]: time="2025-07-15T23:57:15.249666229Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 23:57:15.249953 containerd[1533]: time="2025-07-15T23:57:15.249929749Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 23:57:15.253174 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 23:57:15.254121 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 23:57:15.256703 containerd[1533]: time="2025-07-15T23:57:15.256664973Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 23:57:15.256792 containerd[1533]: time="2025-07-15T23:57:15.256747028Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 23:57:15.256848 containerd[1533]: time="2025-07-15T23:57:15.256773773Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 23:57:15.256931 containerd[1533]: time="2025-07-15T23:57:15.256846246Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 23:57:15.256931 containerd[1533]: time="2025-07-15T23:57:15.256868821Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 23:57:15.256931 containerd[1533]: time="2025-07-15T23:57:15.256915545Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 23:57:15.257196 containerd[1533]: time="2025-07-15T23:57:15.256937892Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 23:57:15.257196 containerd[1533]: time="2025-07-15T23:57:15.256958409Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 23:57:15.257196 containerd[1533]: time="2025-07-15T23:57:15.256977933Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 23:57:15.257196 containerd[1533]: time="2025-07-15T23:57:15.256995038Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 23:57:15.257196 containerd[1533]: time="2025-07-15T23:57:15.257011233Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 23:57:15.257196 containerd[1533]: time="2025-07-15T23:57:15.257031913Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 23:57:15.257196 containerd[1533]: time="2025-07-15T23:57:15.257191430Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257221253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257246416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257266893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257287243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257315943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257340516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257358491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257378856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257397541Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 23:57:15.257466 containerd[1533]: time="2025-07-15T23:57:15.257414779Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 23:57:15.257860 containerd[1533]: time="2025-07-15T23:57:15.257515888Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 23:57:15.257860 containerd[1533]: time="2025-07-15T23:57:15.257537234Z" level=info msg="Start snapshots syncer"
Jul 15 23:57:15.257860 containerd[1533]: time="2025-07-15T23:57:15.257598265Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 23:57:15.259059 containerd[1533]: time="2025-07-15T23:57:15.258115966Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 23:57:15.259059 containerd[1533]: time="2025-07-15T23:57:15.258198275Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258460085Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258688756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258725195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258744020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258761737Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258783921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258801710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258819410Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258850717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258866231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258901368Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258950291Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258974312Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:57:15.259281 containerd[1533]: time="2025-07-15T23:57:15.258989670Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259006261Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259020332Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259038402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259055337Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259079313Z" level=info msg="runtime interface created"
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259089174Z" level=info msg="created NRI interface"
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259102692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259120683Z" level=info msg="Connect containerd service"
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.259162798Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 23:57:15.261638 containerd[1533]: time="2025-07-15T23:57:15.260277617Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:57:15.293849 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 15 23:57:15.295745 init.sh[1623]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Jul 15 23:57:15.297030 init.sh[1623]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Jul 15 23:57:15.299257 init.sh[1623]: + /usr/bin/google_instance_setup
Jul 15 23:57:15.305937 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 23:57:15.415870 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 23:57:15.425432 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 23:57:15.442776 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 23:57:15.456384 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 15 23:57:15.467266 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 23:57:15.683494 polkitd[1629]: Started polkitd version 126
Jul 15 23:57:15.709142 polkitd[1629]: Loading rules from directory /etc/polkit-1/rules.d
Jul 15 23:57:15.709766 polkitd[1629]: Loading rules from directory /run/polkit-1/rules.d
Jul 15 23:57:15.709838 polkitd[1629]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 23:57:15.711846 polkitd[1629]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jul 15 23:57:15.714474 polkitd[1629]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 23:57:15.714552 polkitd[1629]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 15 23:57:15.716314 polkitd[1629]: Finished loading, compiling and executing 2 rules
Jul 15 23:57:15.716755 systemd[1]: Started polkit.service - Authorization Manager.
Jul 15 23:57:15.717745 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 15 23:57:15.722030 polkitd[1629]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 15 23:57:15.762888 containerd[1533]: time="2025-07-15T23:57:15.762604593Z" level=info msg="Start subscribing containerd event"
Jul 15 23:57:15.762888 containerd[1533]: time="2025-07-15T23:57:15.762678701Z" level=info msg="Start recovering state"
Jul 15 23:57:15.762888 containerd[1533]: time="2025-07-15T23:57:15.762833266Z" level=info msg="Start event monitor"
Jul 15 23:57:15.762888 containerd[1533]: time="2025-07-15T23:57:15.762850780Z" level=info msg="Start cni network conf syncer for default"
Jul 15 23:57:15.762888 containerd[1533]: time="2025-07-15T23:57:15.762864119Z" level=info msg="Start streaming server"
Jul 15 23:57:15.763164 containerd[1533]: time="2025-07-15T23:57:15.762911969Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 23:57:15.763164 containerd[1533]: time="2025-07-15T23:57:15.762923971Z" level=info msg="runtime interface starting up..."
Jul 15 23:57:15.763164 containerd[1533]: time="2025-07-15T23:57:15.762932327Z" level=info msg="starting plugins..."
Jul 15 23:57:15.763164 containerd[1533]: time="2025-07-15T23:57:15.762952924Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 23:57:15.765291 containerd[1533]: time="2025-07-15T23:57:15.764850396Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 23:57:15.767961 containerd[1533]: time="2025-07-15T23:57:15.767927540Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 23:57:15.768941 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 23:57:15.769213 containerd[1533]: time="2025-07-15T23:57:15.769096469Z" level=info msg="containerd successfully booted in 0.611956s"
Jul 15 23:57:15.786996 systemd-hostnamed[1591]: Hostname set to (transient)
Jul 15 23:57:15.788896 systemd-resolved[1375]: System hostname changed to 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce'.
Jul 15 23:57:16.083924 tar[1529]: linux-amd64/LICENSE
Jul 15 23:57:16.083924 tar[1529]: linux-amd64/README.md
Jul 15 23:57:16.108513 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 23:57:16.229741 instance-setup[1631]: INFO Running google_set_multiqueue.
Jul 15 23:57:16.247821 instance-setup[1631]: INFO Set channels for eth0 to 2.
Jul 15 23:57:16.252120 instance-setup[1631]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1.
Jul 15 23:57:16.253863 instance-setup[1631]: INFO /proc/irq/27/smp_affinity_list: real affinity 0
Jul 15 23:57:16.254211 instance-setup[1631]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1.
Jul 15 23:57:16.255988 instance-setup[1631]: INFO /proc/irq/28/smp_affinity_list: real affinity 0
Jul 15 23:57:16.256718 instance-setup[1631]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1.
Jul 15 23:57:16.258554 instance-setup[1631]: INFO /proc/irq/29/smp_affinity_list: real affinity 1
Jul 15 23:57:16.259292 instance-setup[1631]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1.
Jul 15 23:57:16.263556 instance-setup[1631]: INFO /proc/irq/30/smp_affinity_list: real affinity 1
Jul 15 23:57:16.272818 instance-setup[1631]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jul 15 23:57:16.277094 instance-setup[1631]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jul 15 23:57:16.279401 instance-setup[1631]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Jul 15 23:57:16.279466 instance-setup[1631]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Jul 15 23:57:16.301608 init.sh[1623]: + /usr/bin/google_metadata_script_runner --script-type startup
Jul 15 23:57:16.435435 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 23:57:16.450213 systemd[1]: Started sshd@0-10.128.0.36:22-139.178.89.65:41932.service - OpenSSH per-connection server daemon (139.178.89.65:41932).
Jul 15 23:57:16.481401 startup-script[1694]: INFO Starting startup scripts.
Jul 15 23:57:16.488525 startup-script[1694]: INFO No startup scripts found in metadata.
Jul 15 23:57:16.488757 startup-script[1694]: INFO Finished running startup scripts.
Jul 15 23:57:16.514919 init.sh[1623]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Jul 15 23:57:16.514919 init.sh[1623]: + daemon_pids=()
Jul 15 23:57:16.514919 init.sh[1623]: + for d in accounts clock_skew network
Jul 15 23:57:16.514919 init.sh[1623]: + daemon_pids+=($!)
Jul 15 23:57:16.519402 init.sh[1623]: + for d in accounts clock_skew network
Jul 15 23:57:16.519402 init.sh[1623]: + daemon_pids+=($!)
Jul 15 23:57:16.519502 init.sh[1700]: + /usr/bin/google_clock_skew_daemon
Jul 15 23:57:16.519834 init.sh[1623]: + for d in accounts clock_skew network
Jul 15 23:57:16.519834 init.sh[1623]: + daemon_pids+=($!)
Jul 15 23:57:16.519834 init.sh[1623]: + NOTIFY_SOCKET=/run/systemd/notify
Jul 15 23:57:16.519834 init.sh[1623]: + /usr/bin/systemd-notify --ready
Jul 15 23:57:16.521000 init.sh[1701]: + /usr/bin/google_network_daemon
Jul 15 23:57:16.521262 init.sh[1699]: + /usr/bin/google_accounts_daemon
Jul 15 23:57:16.560839 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Jul 15 23:57:16.570700 init.sh[1623]: + wait -n 1699 1700 1701
Jul 15 23:57:16.873512 sshd[1698]: Accepted publickey for core from 139.178.89.65 port 41932 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:57:16.884718 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:57:16.906492 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 23:57:16.920234 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 23:57:16.956266 systemd-logind[1507]: New session 1 of user core.
Jul 15 23:57:16.970406 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 23:57:16.992651 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 23:57:16.999065 google-clock-skew[1700]: INFO Starting Google Clock Skew daemon.
Jul 15 23:57:17.013925 google-clock-skew[1700]: INFO Clock drift token has changed: 0.
Jul 15 23:57:17.015614 google-networking[1701]: INFO Starting Google Networking daemon.
Jul 15 23:57:17.041791 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 23:57:17.047700 systemd-logind[1507]: New session c1 of user core.
Jul 15 23:57:17.065390 groupadd[1713]: group added to /etc/group: name=google-sudoers, GID=1000
Jul 15 23:57:17.071455 groupadd[1713]: group added to /etc/gshadow: name=google-sudoers
Jul 15 23:57:17.150563 groupadd[1713]: new group: name=google-sudoers, GID=1000
Jul 15 23:57:17.191134 google-accounts[1699]: INFO Starting Google Accounts daemon.
Jul 15 23:57:17.216197 google-accounts[1699]: WARNING OS Login not installed.
Jul 15 23:57:17.219275 google-accounts[1699]: INFO Creating a new user account for 0.
Jul 15 23:57:17.230045 init.sh[1728]: useradd: invalid user name '0': use --badname to ignore
Jul 15 23:57:17.231250 google-accounts[1699]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Jul 15 23:57:17.000758 systemd-resolved[1375]: Clock change detected. Flushing caches.
Jul 15 23:57:17.023130 systemd-journald[1161]: Time jumped backwards, rotating.
Jul 15 23:57:17.008075 google-clock-skew[1700]: INFO Synced system time with hardware clock.
Jul 15 23:57:17.063917 systemd[1712]: Queued start job for default target default.target.
Jul 15 23:57:17.070778 systemd[1712]: Created slice app.slice - User Application Slice.
Jul 15 23:57:17.070969 systemd[1712]: Reached target paths.target - Paths.
Jul 15 23:57:17.071054 systemd[1712]: Reached target timers.target - Timers.
Jul 15 23:57:17.073329 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 23:57:17.107436 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 23:57:17.107620 systemd[1712]: Reached target sockets.target - Sockets.
Jul 15 23:57:17.107697 systemd[1712]: Reached target basic.target - Basic System.
Jul 15 23:57:17.107776 systemd[1712]: Reached target default.target - Main User Target.
Jul 15 23:57:17.107833 systemd[1712]: Startup finished in 330ms.
Jul 15 23:57:17.107855 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 23:57:17.131861 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 23:57:17.145057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:17.159001 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 23:57:17.161498 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:57:17.168357 systemd[1]: Startup finished in 4.315s (kernel) + 7.397s (initrd) + 8.280s (userspace) = 19.993s.
Jul 15 23:57:17.393232 systemd[1]: Started sshd@1-10.128.0.36:22-139.178.89.65:41934.service - OpenSSH per-connection server daemon (139.178.89.65:41934).
Jul 15 23:57:17.565487 ntpd[1489]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:24%2]:123
Jul 15 23:57:17.566031 ntpd[1489]: 15 Jul 23:57:17 ntpd[1489]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:24%2]:123
Jul 15 23:57:17.705052 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 41934 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:57:17.706373 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:57:17.714226 systemd-logind[1507]: New session 2 of user core.
Jul 15 23:57:17.722059 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 23:57:17.918732 sshd[1752]: Connection closed by 139.178.89.65 port 41934
Jul 15 23:57:17.919525 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Jul 15 23:57:17.926517 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit.
Jul 15 23:57:17.927638 systemd[1]: sshd@1-10.128.0.36:22-139.178.89.65:41934.service: Deactivated successfully.
Jul 15 23:57:17.930387 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 23:57:17.932735 systemd-logind[1507]: Removed session 2.
Jul 15 23:57:17.974845 systemd[1]: Started sshd@2-10.128.0.36:22-139.178.89.65:41948.service - OpenSSH per-connection server daemon (139.178.89.65:41948).
Jul 15 23:57:18.048057 kubelet[1737]: E0715 23:57:18.047979 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:57:18.051252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:57:18.051610 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:57:18.052391 systemd[1]: kubelet.service: Consumed 1.305s CPU time, 264.1M memory peak.
Jul 15 23:57:18.289661 sshd[1759]: Accepted publickey for core from 139.178.89.65 port 41948 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:57:18.291205 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:57:18.299596 systemd-logind[1507]: New session 3 of user core.
Jul 15 23:57:18.306236 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 23:57:18.498586 sshd[1762]: Connection closed by 139.178.89.65 port 41948
Jul 15 23:57:18.499586 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Jul 15 23:57:18.505083 systemd[1]: sshd@2-10.128.0.36:22-139.178.89.65:41948.service: Deactivated successfully.
Jul 15 23:57:18.507802 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 23:57:18.510983 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit.
Jul 15 23:57:18.512567 systemd-logind[1507]: Removed session 3.
Jul 15 23:57:18.563394 systemd[1]: Started sshd@3-10.128.0.36:22-139.178.89.65:41960.service - OpenSSH per-connection server daemon (139.178.89.65:41960).
Jul 15 23:57:18.870567 sshd[1768]: Accepted publickey for core from 139.178.89.65 port 41960 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:57:18.872208 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:57:18.879729 systemd-logind[1507]: New session 4 of user core.
Jul 15 23:57:18.886171 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 23:57:19.081092 sshd[1770]: Connection closed by 139.178.89.65 port 41960
Jul 15 23:57:19.082096 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
Jul 15 23:57:19.087955 systemd[1]: sshd@3-10.128.0.36:22-139.178.89.65:41960.service: Deactivated successfully.
Jul 15 23:57:19.090233 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 23:57:19.091504 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit.
Jul 15 23:57:19.093573 systemd-logind[1507]: Removed session 4.
Jul 15 23:57:19.139730 systemd[1]: Started sshd@4-10.128.0.36:22-139.178.89.65:44498.service - OpenSSH per-connection server daemon (139.178.89.65:44498).
Jul 15 23:57:19.445584 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 44498 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:57:19.447231 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:57:19.454632 systemd-logind[1507]: New session 5 of user core.
Jul 15 23:57:19.460131 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 23:57:19.639728 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 23:57:19.640219 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:57:19.654672 sudo[1779]: pam_unix(sudo:session): session closed for user root
Jul 15 23:57:19.697371 sshd[1778]: Connection closed by 139.178.89.65 port 44498
Jul 15 23:57:19.699190 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Jul 15 23:57:19.704947 systemd[1]: sshd@4-10.128.0.36:22-139.178.89.65:44498.service: Deactivated successfully.
Jul 15 23:57:19.707254 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 23:57:19.708391 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit.
Jul 15 23:57:19.710311 systemd-logind[1507]: Removed session 5.
Jul 15 23:57:19.761284 systemd[1]: Started sshd@5-10.128.0.36:22-139.178.89.65:44508.service - OpenSSH per-connection server daemon (139.178.89.65:44508).
Jul 15 23:57:20.057954 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 44508 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:57:20.059384 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:57:20.066675 systemd-logind[1507]: New session 6 of user core.
Jul 15 23:57:20.073101 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 23:57:20.235148 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 23:57:20.235605 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:57:20.242377 sudo[1789]: pam_unix(sudo:session): session closed for user root
Jul 15 23:57:20.257572 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 23:57:20.258082 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:57:20.270745 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:57:20.323836 augenrules[1811]: No rules
Jul 15 23:57:20.325860 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:57:20.326205 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:57:20.327841 sudo[1788]: pam_unix(sudo:session): session closed for user root
Jul 15 23:57:20.370680 sshd[1787]: Connection closed by 139.178.89.65 port 44508
Jul 15 23:57:20.371490 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Jul 15 23:57:20.377846 systemd[1]: sshd@5-10.128.0.36:22-139.178.89.65:44508.service: Deactivated successfully.
Jul 15 23:57:20.380259 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 23:57:20.381446 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit.
Jul 15 23:57:20.383400 systemd-logind[1507]: Removed session 6.
Jul 15 23:57:20.424955 systemd[1]: Started sshd@6-10.128.0.36:22-139.178.89.65:44510.service - OpenSSH per-connection server daemon (139.178.89.65:44510).
Jul 15 23:57:20.727418 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 44510 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:57:20.729440 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:57:20.737954 systemd-logind[1507]: New session 7 of user core.
Jul 15 23:57:20.747129 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 23:57:20.907081 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 23:57:20.907545 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:57:21.392866 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 23:57:21.415552 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 23:57:21.759981 dockerd[1840]: time="2025-07-15T23:57:21.759799326Z" level=info msg="Starting up"
Jul 15 23:57:21.761851 dockerd[1840]: time="2025-07-15T23:57:21.761794785Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 23:57:21.947137 dockerd[1840]: time="2025-07-15T23:57:21.946862572Z" level=info msg="Loading containers: start."
Jul 15 23:57:21.965936 kernel: Initializing XFRM netlink socket
Jul 15 23:57:22.295162 systemd-networkd[1440]: docker0: Link UP
Jul 15 23:57:22.301153 dockerd[1840]: time="2025-07-15T23:57:22.301067260Z" level=info msg="Loading containers: done."
Jul 15 23:57:22.319720 dockerd[1840]: time="2025-07-15T23:57:22.319539547Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 23:57:22.319720 dockerd[1840]: time="2025-07-15T23:57:22.319640860Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 15 23:57:22.320110 dockerd[1840]: time="2025-07-15T23:57:22.319783580Z" level=info msg="Initializing buildkit"
Jul 15 23:57:22.351437 dockerd[1840]: time="2025-07-15T23:57:22.351382748Z" level=info msg="Completed buildkit initialization"
Jul 15 23:57:22.361307 dockerd[1840]: time="2025-07-15T23:57:22.361201087Z" level=info msg="Daemon has completed initialization"
Jul 15 23:57:22.361425 dockerd[1840]: time="2025-07-15T23:57:22.361324702Z" level=info msg="API listen on /run/docker.sock"
Jul 15 23:57:22.361672 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 23:57:23.267685 containerd[1533]: time="2025-07-15T23:57:23.267619723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Jul 15 23:57:23.790155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531872805.mount: Deactivated successfully.
Jul 15 23:57:25.443179 containerd[1533]: time="2025-07-15T23:57:25.443104146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:25.444532 containerd[1533]: time="2025-07-15T23:57:25.444482186Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28084387"
Jul 15 23:57:25.445853 containerd[1533]: time="2025-07-15T23:57:25.445783888Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:25.448995 containerd[1533]: time="2025-07-15T23:57:25.448919446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:25.450609 containerd[1533]: time="2025-07-15T23:57:25.450189698Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.1825156s"
Jul 15 23:57:25.450609 containerd[1533]: time="2025-07-15T23:57:25.450234679Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Jul 15 23:57:25.451360 containerd[1533]: time="2025-07-15T23:57:25.451320383Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Jul 15 23:57:27.009507 containerd[1533]: time="2025-07-15T23:57:27.009439103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:27.010769 containerd[1533]: time="2025-07-15T23:57:27.010719196Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24715179"
Jul 15 23:57:27.012147 containerd[1533]: time="2025-07-15T23:57:27.012073045Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:27.014997 containerd[1533]: time="2025-07-15T23:57:27.014940543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:27.016621 containerd[1533]: time="2025-07-15T23:57:27.016157639Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.564799468s"
Jul 15 23:57:27.016621 containerd[1533]: time="2025-07-15T23:57:27.016200859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Jul 15 23:57:27.017091 containerd[1533]: time="2025-07-15T23:57:27.017060053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Jul 15 23:57:28.130959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:57:28.135117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:28.442614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:28.454417 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:57:28.485588 containerd[1533]: time="2025-07-15T23:57:28.485529475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:28.488072 containerd[1533]: time="2025-07-15T23:57:28.488033988Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18785616"
Jul 15 23:57:28.490058 containerd[1533]: time="2025-07-15T23:57:28.489512783Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:28.494547 containerd[1533]: time="2025-07-15T23:57:28.494513482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:28.496105 containerd[1533]: time="2025-07-15T23:57:28.496066392Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.478964489s"
Jul 15 23:57:28.496672 containerd[1533]: time="2025-07-15T23:57:28.496111638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Jul 15 23:57:28.496939 containerd[1533]: time="2025-07-15T23:57:28.496902626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Jul 15 23:57:28.524655 kubelet[2114]: E0715 23:57:28.524595 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:57:28.532118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:57:28.532376 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:57:28.532894 systemd[1]: kubelet.service: Consumed 235ms CPU time, 109.4M memory peak.
Jul 15 23:57:29.717302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309158946.mount: Deactivated successfully.
Jul 15 23:57:30.353062 containerd[1533]: time="2025-07-15T23:57:30.352999499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:30.354343 containerd[1533]: time="2025-07-15T23:57:30.354287764Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30385507"
Jul 15 23:57:30.355417 containerd[1533]: time="2025-07-15T23:57:30.355353032Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:30.357685 containerd[1533]: time="2025-07-15T23:57:30.357626753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:30.358690 containerd[1533]: time="2025-07-15T23:57:30.358465368Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.861522477s"
Jul 15 23:57:30.358690 containerd[1533]: time="2025-07-15T23:57:30.358518932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Jul 15 23:57:30.359156 containerd[1533]: time="2025-07-15T23:57:30.359127213Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 23:57:30.808835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857476481.mount: Deactivated successfully.
Jul 15 23:57:31.907240 containerd[1533]: time="2025-07-15T23:57:31.907165979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:31.908628 containerd[1533]: time="2025-07-15T23:57:31.908575845Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Jul 15 23:57:31.909992 containerd[1533]: time="2025-07-15T23:57:31.909927311Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:31.914260 containerd[1533]: time="2025-07-15T23:57:31.914178961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:31.916073 containerd[1533]: time="2025-07-15T23:57:31.915619119Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.556282453s"
Jul 15 23:57:31.916073 containerd[1533]: time="2025-07-15T23:57:31.915664853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 15 23:57:31.916362 containerd[1533]: time="2025-07-15T23:57:31.916328235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 23:57:32.375633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110041703.mount: Deactivated successfully.
Jul 15 23:57:32.381377 containerd[1533]: time="2025-07-15T23:57:32.381250890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:57:32.382642 containerd[1533]: time="2025-07-15T23:57:32.382599788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Jul 15 23:57:32.383932 containerd[1533]: time="2025-07-15T23:57:32.383845120Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:57:32.386740 containerd[1533]: time="2025-07-15T23:57:32.386640724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:57:32.388432 containerd[1533]: time="2025-07-15T23:57:32.387747010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 471.371334ms"
Jul 15 23:57:32.388432 containerd[1533]: time="2025-07-15T23:57:32.387800224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 15 23:57:32.389066 containerd[1533]: time="2025-07-15T23:57:32.388978979Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 15 23:57:32.841091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955676287.mount: Deactivated successfully.
Jul 15 23:57:35.023488 containerd[1533]: time="2025-07-15T23:57:35.023418255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:35.024932 containerd[1533]: time="2025-07-15T23:57:35.024898446Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786577"
Jul 15 23:57:35.026330 containerd[1533]: time="2025-07-15T23:57:35.026269642Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:35.029654 containerd[1533]: time="2025-07-15T23:57:35.029598699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:35.031196 containerd[1533]: time="2025-07-15T23:57:35.031021212Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.641995536s"
Jul 15 23:57:35.031196 containerd[1533]: time="2025-07-15T23:57:35.031071548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 15 23:57:38.630756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 23:57:38.635128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:39.009778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:39.021661 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:57:39.090064 kubelet[2268]: E0715 23:57:39.090010 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:57:39.094436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:57:39.094800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:57:39.095404 systemd[1]: kubelet.service: Consumed 236ms CPU time, 107.5M memory peak.
Jul 15 23:57:39.698419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:39.698709 systemd[1]: kubelet.service: Consumed 236ms CPU time, 107.5M memory peak.
Jul 15 23:57:39.701978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:39.740769 systemd[1]: Reload requested from client PID 2282 ('systemctl') (unit session-7.scope)...
Jul 15 23:57:39.740797 systemd[1]: Reloading...
Jul 15 23:57:39.912954 zram_generator::config[2327]: No configuration found.
Jul 15 23:57:40.056438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:57:40.221320 systemd[1]: Reloading finished in 479 ms.
Jul 15 23:57:40.300898 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 23:57:40.301351 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 23:57:40.301736 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:40.301807 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.3M memory peak.
Jul 15 23:57:40.304917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:40.604891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:40.618523 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:57:40.673914 kubelet[2378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:57:40.673914 kubelet[2378]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:57:40.673914 kubelet[2378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:57:40.674466 kubelet[2378]: I0715 23:57:40.673993 2378 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:57:41.378587 kubelet[2378]: I0715 23:57:41.378526 2378 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 23:57:41.378587 kubelet[2378]: I0715 23:57:41.378566 2378 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:57:41.379007 kubelet[2378]: I0715 23:57:41.378973 2378 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 23:57:41.428993 kubelet[2378]: E0715 23:57:41.428869 2378 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:57:41.430336 kubelet[2378]: I0715 23:57:41.430058 2378 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:57:41.440113 kubelet[2378]: I0715 23:57:41.440074 2378 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:57:41.445757 kubelet[2378]: I0715 23:57:41.445720 2378 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:57:41.447042 kubelet[2378]: I0715 23:57:41.446999 2378 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 23:57:41.447314 kubelet[2378]: I0715 23:57:41.447258 2378 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:57:41.447555 kubelet[2378]: I0715 23:57:41.447297 2378 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:57:41.447742 kubelet[2378]: I0715 23:57:41.447567 2378 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:57:41.447742 kubelet[2378]: I0715 23:57:41.447583 2378 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 23:57:41.447742 kubelet[2378]: I0715 23:57:41.447717 2378 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:57:41.452625 kubelet[2378]: I0715 23:57:41.452569 2378 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 23:57:41.452625 kubelet[2378]: I0715 23:57:41.452614 2378 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:57:41.452792 kubelet[2378]: I0715 23:57:41.452666 2378 kubelet.go:314] "Adding apiserver pod source"
Jul 15 23:57:41.452792 kubelet[2378]: I0715 23:57:41.452692 2378 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:57:41.459135 kubelet[2378]: W0715 23:57:41.458683 2378 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce&limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused
Jul 15 23:57:41.459135 kubelet[2378]: E0715 23:57:41.458786 2378 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce&limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:57:41.461273 kubelet[2378]: W0715 23:57:41.461192 2378 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused
Jul 15 23:57:41.461398 kubelet[2378]: E0715 23:57:41.461270 2378 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:57:41.461398 kubelet[2378]: I0715 23:57:41.461381 2378 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:57:41.461932 kubelet[2378]: I0715 23:57:41.461900 2378 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:57:41.462956 kubelet[2378]: W0715 23:57:41.462928 2378 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 23:57:41.467359 kubelet[2378]: I0715 23:57:41.467306 2378 server.go:1274] "Started kubelet"
Jul 15 23:57:41.471207 kubelet[2378]: I0715 23:57:41.471181 2378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:57:41.472889 kubelet[2378]: I0715 23:57:41.472834 2378 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:57:41.476071 kubelet[2378]: I0715 23:57:41.474915 2378 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 23:57:41.483003 kubelet[2378]: I0715 23:57:41.482955 2378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:57:41.485759 kubelet[2378]: I0715 23:57:41.485727 2378 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:57:41.485831 kubelet[2378]: I0715 23:57:41.484167 2378 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 23:57:41.488733 kubelet[2378]: I0715 23:57:41.483290 2378 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:57:41.488819 kubelet[2378]: I0715 23:57:41.484588 2378 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 23:57:41.488913 kubelet[2378]: I0715 23:57:41.488868 2378 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:57:41.488913 kubelet[2378]: E0715 23:57:41.484823 2378 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" not found"
Jul 15 23:57:41.489709 kubelet[2378]: W0715 23:57:41.489651 2378 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused
Jul 15 23:57:41.489799 kubelet[2378]: E0715 23:57:41.489726 2378 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:57:41.489854 kubelet[2378]: E0715 23:57:41.489816 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce?timeout=10s\": dial tcp 10.128.0.36:6443: connect: connection refused" interval="200ms"
Jul 15 23:57:41.493815 kubelet[2378]: I0715 23:57:41.493783 2378 factory.go:221] Registration of the systemd container factory successfully
Jul 15 23:57:41.493947 kubelet[2378]: I0715 23:57:41.493896 2378 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:57:41.497551 kubelet[2378]: E0715 23:57:41.495494 2378 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce.1852921dbfa154c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,UID:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,},FirstTimestamp:2025-07-15 23:57:41.467272393 +0000 UTC m=+0.843414875,LastTimestamp:2025-07-15 23:57:41.467272393 +0000 UTC m=+0.843414875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,}"
Jul 15 23:57:41.498061 kubelet[2378]: I0715 23:57:41.498039 2378 factory.go:221] Registration of the containerd container factory successfully
Jul 15 23:57:41.511360 kubelet[2378]: I0715 23:57:41.511173 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:57:41.513747 kubelet[2378]: I0715 23:57:41.513361 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:57:41.513747 kubelet[2378]: I0715 23:57:41.513391 2378 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 23:57:41.513747 kubelet[2378]: I0715 23:57:41.513418 2378 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 23:57:41.513747 kubelet[2378]: E0715 23:57:41.513477 2378 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:57:41.523117 kubelet[2378]: W0715 23:57:41.523050 2378 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.36:6443: connect: connection refused
Jul 15 23:57:41.523289 kubelet[2378]: E0715 23:57:41.523260 2378 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.36:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:57:41.523771 kubelet[2378]: E0715 23:57:41.523721 2378 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:57:41.539850 kubelet[2378]: I0715 23:57:41.539731 2378 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 23:57:41.539850 kubelet[2378]: I0715 23:57:41.539752 2378 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 23:57:41.539850 kubelet[2378]: I0715 23:57:41.539776 2378 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:57:41.542133 kubelet[2378]: I0715 23:57:41.542087 2378 policy_none.go:49] "None policy: Start"
Jul 15 23:57:41.542989 kubelet[2378]: I0715 23:57:41.542936 2378 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 23:57:41.542989 kubelet[2378]: I0715 23:57:41.542966 2378 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:57:41.551457 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 15 23:57:41.572518 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 15 23:57:41.578266 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 15 23:57:41.589336 kubelet[2378]: E0715 23:57:41.589272 2378 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" not found"
Jul 15 23:57:41.591574 kubelet[2378]: I0715 23:57:41.591297 2378 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:57:41.592419 kubelet[2378]: I0715 23:57:41.592159 2378 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:57:41.592419 kubelet[2378]: I0715 23:57:41.592193 2378 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:57:41.592685 kubelet[2378]: I0715 23:57:41.592659 2378 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:57:41.596386 kubelet[2378]: E0715 23:57:41.596356 2378 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" not found"
Jul 15 23:57:41.649977 systemd[1]: Created slice kubepods-burstable-podfcb3406524e8b65211e18c931d5f12a0.slice - libcontainer container kubepods-burstable-podfcb3406524e8b65211e18c931d5f12a0.slice.
Jul 15 23:57:41.672256 systemd[1]: Created slice kubepods-burstable-pod9516aed1509c78ae7717f261218e75c6.slice - libcontainer container kubepods-burstable-pod9516aed1509c78ae7717f261218e75c6.slice.
Jul 15 23:57:41.687197 systemd[1]: Created slice kubepods-burstable-pod4588734620bb9b10bd3f3b874ff75bba.slice - libcontainer container kubepods-burstable-pod4588734620bb9b10bd3f3b874ff75bba.slice.
Jul 15 23:57:41.690670 kubelet[2378]: E0715 23:57:41.690280 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce?timeout=10s\": dial tcp 10.128.0.36:6443: connect: connection refused" interval="400ms"
Jul 15 23:57:41.697106 kubelet[2378]: I0715 23:57:41.697081 2378 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.697522 kubelet[2378]: E0715 23:57:41.697474 2378 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.36:6443/api/v1/nodes\": dial tcp 10.128.0.36:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790385 kubelet[2378]: I0715 23:57:41.790294 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4588734620bb9b10bd3f3b874ff75bba-kubeconfig\") pod \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"4588734620bb9b10bd3f3b874ff75bba\") " pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790385 kubelet[2378]: I0715 23:57:41.790367 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcb3406524e8b65211e18c931d5f12a0-ca-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"fcb3406524e8b65211e18c931d5f12a0\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790385 kubelet[2378]: I0715 23:57:41.790405 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcb3406524e8b65211e18c931d5f12a0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"fcb3406524e8b65211e18c931d5f12a0\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790717 kubelet[2378]: I0715 23:57:41.790435 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790717 kubelet[2378]: I0715 23:57:41.790462 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcb3406524e8b65211e18c931d5f12a0-k8s-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"fcb3406524e8b65211e18c931d5f12a0\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790717 kubelet[2378]: I0715 23:57:41.790488 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-ca-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790717 kubelet[2378]: I0715 23:57:41.790514 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790843 kubelet[2378]: I0715 23:57:41.790541 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.790843 kubelet[2378]: I0715 23:57:41.790572 2378 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.902981 kubelet[2378]: I0715 23:57:41.902843 2378 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.903303 kubelet[2378]: E0715 23:57:41.903242 2378 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.36:6443/api/v1/nodes\": dial tcp 10.128.0.36:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:41.969112 containerd[1533]: time="2025-07-15T23:57:41.969041085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,Uid:fcb3406524e8b65211e18c931d5f12a0,Namespace:kube-system,Attempt:0,}"
Jul 15 23:57:41.989348 containerd[1533]: time="2025-07-15T23:57:41.988543002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,Uid:9516aed1509c78ae7717f261218e75c6,Namespace:kube-system,Attempt:0,}"
Jul 15 23:57:42.001795 containerd[1533]: time="2025-07-15T23:57:42.001742892Z" level=info msg="connecting to shim b6ae7ffb82d2b051c222852302bb59517454abab7e17e051b55b7245a5cdedb5" address="unix:///run/containerd/s/67502e5820ad042fd9ef3996534a3a1c47c6f68a2661f6c77abcc540ab4af05c" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:57:42.003946 containerd[1533]: time="2025-07-15T23:57:42.003847351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,Uid:4588734620bb9b10bd3f3b874ff75bba,Namespace:kube-system,Attempt:0,}"
Jul 15 23:57:42.055835 containerd[1533]: time="2025-07-15T23:57:42.055784608Z" level=info msg="connecting to shim e2525fd730f6eb5574fe49b71e6bc68716ab1c9cbd98d47e635360d471c81682" address="unix:///run/containerd/s/38c6b80ba5bb333b4c786c35147c0ff99a558d9b7f77ee540c26b8f18653f6d6" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:57:42.066471 systemd[1]: Started cri-containerd-b6ae7ffb82d2b051c222852302bb59517454abab7e17e051b55b7245a5cdedb5.scope - libcontainer container b6ae7ffb82d2b051c222852302bb59517454abab7e17e051b55b7245a5cdedb5.
Jul 15 23:57:42.087898 containerd[1533]: time="2025-07-15T23:57:42.087811829Z" level=info msg="connecting to shim 7c271aab5f3c39663cf03f8c05d266ac213112032571b6e1e1ab6d3829154331" address="unix:///run/containerd/s/5709e07ba33318a4c38d3e7fcdf1d83603eb11879b08ba2839e92f66ecb394b2" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:57:42.094788 kubelet[2378]: E0715 23:57:42.094729 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce?timeout=10s\": dial tcp 10.128.0.36:6443: connect: connection refused" interval="800ms"
Jul 15 23:57:42.127165 systemd[1]: Started cri-containerd-e2525fd730f6eb5574fe49b71e6bc68716ab1c9cbd98d47e635360d471c81682.scope - libcontainer container e2525fd730f6eb5574fe49b71e6bc68716ab1c9cbd98d47e635360d471c81682.
Jul 15 23:57:42.154062 systemd[1]: Started cri-containerd-7c271aab5f3c39663cf03f8c05d266ac213112032571b6e1e1ab6d3829154331.scope - libcontainer container 7c271aab5f3c39663cf03f8c05d266ac213112032571b6e1e1ab6d3829154331.
Jul 15 23:57:42.245323 containerd[1533]: time="2025-07-15T23:57:42.244859634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,Uid:fcb3406524e8b65211e18c931d5f12a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6ae7ffb82d2b051c222852302bb59517454abab7e17e051b55b7245a5cdedb5\""
Jul 15 23:57:42.247990 kubelet[2378]: E0715 23:57:42.247944 2378 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c8"
Jul 15 23:57:42.252234 containerd[1533]: time="2025-07-15T23:57:42.252121293Z" level=info msg="CreateContainer within sandbox \"b6ae7ffb82d2b051c222852302bb59517454abab7e17e051b55b7245a5cdedb5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 15 23:57:42.268168 containerd[1533]: time="2025-07-15T23:57:42.268138118Z" level=info msg="Container 2cc19d334e2e085a5845463e2b8f35f61ed5664d0174c3ef29b4d9690b720cb6: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:57:42.272675 containerd[1533]: time="2025-07-15T23:57:42.272556425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,Uid:9516aed1509c78ae7717f261218e75c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2525fd730f6eb5574fe49b71e6bc68716ab1c9cbd98d47e635360d471c81682\""
Jul 15 23:57:42.273667 containerd[1533]: time="2025-07-15T23:57:42.273593348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,Uid:4588734620bb9b10bd3f3b874ff75bba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c271aab5f3c39663cf03f8c05d266ac213112032571b6e1e1ab6d3829154331\""
Jul 15 23:57:42.275552 kubelet[2378]: E0715 23:57:42.275520 2378 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d551"
Jul 15 23:57:42.275786 kubelet[2378]: E0715 23:57:42.275562 2378 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c8"
Jul 15 23:57:42.277378 containerd[1533]: time="2025-07-15T23:57:42.277264358Z" level=info msg="CreateContainer within sandbox \"e2525fd730f6eb5574fe49b71e6bc68716ab1c9cbd98d47e635360d471c81682\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 15 23:57:42.278976 containerd[1533]: time="2025-07-15T23:57:42.278940207Z" level=info msg="CreateContainer within sandbox \"7c271aab5f3c39663cf03f8c05d266ac213112032571b6e1e1ab6d3829154331\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 15 23:57:42.288797 containerd[1533]: time="2025-07-15T23:57:42.288748344Z" level=info msg="Container fde3f33bf5eed4e4e46c5ae8d8307676146544fbdcf1645c5cd19e488427a7f5: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:57:42.297204 containerd[1533]: time="2025-07-15T23:57:42.297167066Z" level=info msg="CreateContainer within sandbox \"b6ae7ffb82d2b051c222852302bb59517454abab7e17e051b55b7245a5cdedb5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2cc19d334e2e085a5845463e2b8f35f61ed5664d0174c3ef29b4d9690b720cb6\""
Jul 15 23:57:42.298038 containerd[1533]: time="2025-07-15T23:57:42.297966956Z" level=info msg="StartContainer for \"2cc19d334e2e085a5845463e2b8f35f61ed5664d0174c3ef29b4d9690b720cb6\""
Jul 15 23:57:42.299805 containerd[1533]: time="2025-07-15T23:57:42.299758793Z" level=info msg="connecting to shim 2cc19d334e2e085a5845463e2b8f35f61ed5664d0174c3ef29b4d9690b720cb6" address="unix:///run/containerd/s/67502e5820ad042fd9ef3996534a3a1c47c6f68a2661f6c77abcc540ab4af05c" protocol=ttrpc version=3
Jul 15 23:57:42.306421 containerd[1533]: time="2025-07-15T23:57:42.306386825Z" level=info msg="CreateContainer within sandbox \"7c271aab5f3c39663cf03f8c05d266ac213112032571b6e1e1ab6d3829154331\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fde3f33bf5eed4e4e46c5ae8d8307676146544fbdcf1645c5cd19e488427a7f5\""
Jul 15 23:57:42.306924 containerd[1533]: time="2025-07-15T23:57:42.306573739Z" level=info msg="Container 212ab1a7db95cc13b67d1d42624b6e41e06c054f73acbee8bf669aaa5c903c86: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:57:42.307488 containerd[1533]: time="2025-07-15T23:57:42.307456564Z" level=info msg="StartContainer for \"fde3f33bf5eed4e4e46c5ae8d8307676146544fbdcf1645c5cd19e488427a7f5\""
Jul 15 23:57:42.310465 kubelet[2378]: I0715 23:57:42.310027 2378 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:42.311039 kubelet[2378]: E0715 23:57:42.311003 2378 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.36:6443/api/v1/nodes\": dial tcp 10.128.0.36:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:42.311369 containerd[1533]: time="2025-07-15T23:57:42.311334209Z" level=info msg="connecting to shim fde3f33bf5eed4e4e46c5ae8d8307676146544fbdcf1645c5cd19e488427a7f5" address="unix:///run/containerd/s/5709e07ba33318a4c38d3e7fcdf1d83603eb11879b08ba2839e92f66ecb394b2" protocol=ttrpc version=3
Jul 15 23:57:42.320004 containerd[1533]: time="2025-07-15T23:57:42.319904932Z" level=info msg="CreateContainer within sandbox \"e2525fd730f6eb5574fe49b71e6bc68716ab1c9cbd98d47e635360d471c81682\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"212ab1a7db95cc13b67d1d42624b6e41e06c054f73acbee8bf669aaa5c903c86\""
Jul 15 23:57:42.320901 containerd[1533]: time="2025-07-15T23:57:42.320400651Z" level=info msg="StartContainer for \"212ab1a7db95cc13b67d1d42624b6e41e06c054f73acbee8bf669aaa5c903c86\""
Jul 15 23:57:42.325542 containerd[1533]: time="2025-07-15T23:57:42.325505734Z" level=info msg="connecting to shim 212ab1a7db95cc13b67d1d42624b6e41e06c054f73acbee8bf669aaa5c903c86" address="unix:///run/containerd/s/38c6b80ba5bb333b4c786c35147c0ff99a558d9b7f77ee540c26b8f18653f6d6" protocol=ttrpc version=3
Jul 15 23:57:42.335127 systemd[1]: Started cri-containerd-2cc19d334e2e085a5845463e2b8f35f61ed5664d0174c3ef29b4d9690b720cb6.scope - libcontainer container 2cc19d334e2e085a5845463e2b8f35f61ed5664d0174c3ef29b4d9690b720cb6.
Jul 15 23:57:42.358477 systemd[1]: Started cri-containerd-fde3f33bf5eed4e4e46c5ae8d8307676146544fbdcf1645c5cd19e488427a7f5.scope - libcontainer container fde3f33bf5eed4e4e46c5ae8d8307676146544fbdcf1645c5cd19e488427a7f5.
Jul 15 23:57:42.377639 systemd[1]: Started cri-containerd-212ab1a7db95cc13b67d1d42624b6e41e06c054f73acbee8bf669aaa5c903c86.scope - libcontainer container 212ab1a7db95cc13b67d1d42624b6e41e06c054f73acbee8bf669aaa5c903c86.
Jul 15 23:57:42.460822 containerd[1533]: time="2025-07-15T23:57:42.460667655Z" level=info msg="StartContainer for \"2cc19d334e2e085a5845463e2b8f35f61ed5664d0174c3ef29b4d9690b720cb6\" returns successfully"
Jul 15 23:57:42.541118 containerd[1533]: time="2025-07-15T23:57:42.540965880Z" level=info msg="StartContainer for \"212ab1a7db95cc13b67d1d42624b6e41e06c054f73acbee8bf669aaa5c903c86\" returns successfully"
Jul 15 23:57:42.543591 containerd[1533]: time="2025-07-15T23:57:42.543028639Z" level=info msg="StartContainer for \"fde3f33bf5eed4e4e46c5ae8d8307676146544fbdcf1645c5cd19e488427a7f5\" returns successfully"
Jul 15 23:57:42.554261 kubelet[2378]: E0715 23:57:42.553042 2378 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce.1852921dbfa154c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,UID:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,},FirstTimestamp:2025-07-15 23:57:41.467272393 +0000 UTC m=+0.843414875,LastTimestamp:2025-07-15 23:57:41.467272393 +0000 UTC m=+0.843414875,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce,}"
Jul 15 23:57:43.119559 kubelet[2378]: I0715 23:57:43.119490 2378 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:45.541476 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 15 23:57:46.083300 kubelet[2378]: E0715 23:57:46.083204 2378 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" not found" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:46.164336 kubelet[2378]: I0715 23:57:46.164082 2378 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:46.464159 kubelet[2378]: I0715 23:57:46.464110 2378 apiserver.go:52] "Watching apiserver"
Jul 15 23:57:46.489908 kubelet[2378]: I0715 23:57:46.489819 2378 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 15 23:57:48.088687 systemd[1]: Reload requested from client PID 2645 ('systemctl') (unit session-7.scope)...
Jul 15 23:57:48.088708 systemd[1]: Reloading...
Jul 15 23:57:48.240949 zram_generator::config[2689]: No configuration found.
Jul 15 23:57:48.362515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:57:48.623571 systemd[1]: Reloading finished in 534 ms.
Jul 15 23:57:48.658706 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:48.670111 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 23:57:48.670523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:48.670603 systemd[1]: kubelet.service: Consumed 1.342s CPU time, 130.7M memory peak.
Jul 15 23:57:48.673558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:57:49.055725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:57:49.070509 (kubelet)[2737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:57:49.147175 kubelet[2737]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:57:49.147767 kubelet[2737]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:57:49.147916 kubelet[2737]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:57:49.148191 kubelet[2737]: I0715 23:57:49.148153 2737 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:57:49.161634 kubelet[2737]: I0715 23:57:49.161584 2737 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 23:57:49.161809 kubelet[2737]: I0715 23:57:49.161792 2737 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:57:49.162303 kubelet[2737]: I0715 23:57:49.162278 2737 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 23:57:49.164916 kubelet[2737]: I0715 23:57:49.164888 2737 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 15 23:57:49.169823 kubelet[2737]: I0715 23:57:49.169066 2737 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:57:49.176753 kubelet[2737]: I0715 23:57:49.176696 2737 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:57:49.189366 kubelet[2737]: I0715 23:57:49.187047 2737 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:57:49.189366 kubelet[2737]: I0715 23:57:49.187231 2737 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 23:57:49.189366 kubelet[2737]: I0715 23:57:49.187425 2737 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:57:49.190741 kubelet[2737]: I0715 23:57:49.187483 2737 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:57:49.191535 kubelet[2737]: I0715 23:57:49.190962 2737 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:57:49.191535 kubelet[2737]: I0715 23:57:49.191008 2737 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 23:57:49.191535 kubelet[2737]: I0715 23:57:49.191053 2737 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:57:49.191535 kubelet[2737]: I0715 23:57:49.191234 2737 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 23:57:49.193862 kubelet[2737]: I0715 23:57:49.191256 2737 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:57:49.193862 kubelet[2737]: I0715 23:57:49.191842 2737 kubelet.go:314] "Adding apiserver pod source"
Jul 15 23:57:49.193862 kubelet[2737]: I0715 23:57:49.191863 2737 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:57:49.195759 kubelet[2737]: I0715 23:57:49.195318 2737 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:57:49.196009 kubelet[2737]: I0715 23:57:49.195984 2737 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:57:49.196569 kubelet[2737]: I0715 23:57:49.196541 2737 server.go:1274] "Started kubelet"
Jul 15 23:57:49.205838 kubelet[2737]: I0715 23:57:49.205674 2737 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:57:49.220911 kubelet[2737]: I0715 23:57:49.219757 2737 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:57:49.222906 kubelet[2737]: I0715 23:57:49.221340 2737 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 23:57:49.223013 kubelet[2737]: I0715 23:57:49.222968 2737 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:57:49.223249 kubelet[2737]: I0715 23:57:49.223225 2737 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:57:49.223632 kubelet[2737]: I0715 23:57:49.223605 2737 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:57:49.231467 kubelet[2737]: I0715 23:57:49.229485 2737 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 23:57:49.231467 kubelet[2737]: E0715 23:57:49.229771 2737 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" not found"
Jul 15 23:57:49.240193 kubelet[2737]: I0715 23:57:49.240171 2737 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 23:57:49.240519 kubelet[2737]: I0715 23:57:49.240502 2737 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:57:49.246704 kubelet[2737]: I0715 23:57:49.246310 2737 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:57:49.251286 kubelet[2737]: I0715 23:57:49.251261 2737 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:57:49.251426 kubelet[2737]: I0715 23:57:49.251410 2737 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 23:57:49.251563 kubelet[2737]: I0715 23:57:49.251548 2737 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 23:57:49.251731 kubelet[2737]: E0715 23:57:49.251698 2737 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:57:49.265438 kubelet[2737]: I0715 23:57:49.265381 2737 factory.go:221] Registration of the systemd container factory successfully
Jul 15 23:57:49.265774 kubelet[2737]: I0715 23:57:49.265737 2737 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:57:49.270984 kubelet[2737]: I0715 23:57:49.270694 2737 factory.go:221] Registration of the containerd container factory successfully
Jul 15 23:57:49.285033 kubelet[2737]: E0715 23:57:49.284711 2737 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.350595 2737 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.350619 2737 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.350645 2737 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.350869 2737 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.350907 2737 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.350937 2737 policy_none.go:49] "None policy: Start"
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.351700 2737 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.351726 2737 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:57:49.351980 kubelet[2737]: I0715 23:57:49.351950 2737 state_mem.go:75] "Updated machine memory state"
Jul 15 23:57:49.353064 kubelet[2737]: E0715 23:57:49.352939 2737 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 23:57:49.369912 kubelet[2737]: I0715 23:57:49.369605 2737 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:57:49.369912 kubelet[2737]: I0715 23:57:49.369836 2737 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:57:49.373714 kubelet[2737]: I0715 23:57:49.369855 2737 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:57:49.377992 kubelet[2737]: I0715 23:57:49.377958 2737 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:57:49.510015 kubelet[2737]: I0715 23:57:49.509950 2737 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.520712 kubelet[2737]: I0715 23:57:49.520516 2737 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.521071 kubelet[2737]: I0715 23:57:49.520998 2737 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.566422 kubelet[2737]: W0715 23:57:49.565670 2737 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jul 15 23:57:49.568686 kubelet[2737]: W0715 23:57:49.568388 2737 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jul 15 23:57:49.568686 kubelet[2737]: W0715 23:57:49.568564 2737 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jul 15 23:57:49.641797 kubelet[2737]: I0715 23:57:49.641524 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.641797 kubelet[2737]: I0715 23:57:49.641616 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.641797 kubelet[2737]: I0715 23:57:49.641656 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.641797 kubelet[2737]: I0715 23:57:49.641699 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4588734620bb9b10bd3f3b874ff75bba-kubeconfig\") pod \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"4588734620bb9b10bd3f3b874ff75bba\") " pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.643186 kubelet[2737]: I0715 23:57:49.641754 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcb3406524e8b65211e18c931d5f12a0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"fcb3406524e8b65211e18c931d5f12a0\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.643186 kubelet[2737]: I0715 23:57:49.641830 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcb3406524e8b65211e18c931d5f12a0-k8s-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"fcb3406524e8b65211e18c931d5f12a0\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.643186 kubelet[2737]: I0715 23:57:49.641864 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-ca-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.643186 kubelet[2737]: I0715 23:57:49.641936 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9516aed1509c78ae7717f261218e75c6-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"9516aed1509c78ae7717f261218e75c6\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:49.643389 kubelet[2737]: I0715 23:57:49.641985 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcb3406524e8b65211e18c931d5f12a0-ca-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" (UID: \"fcb3406524e8b65211e18c931d5f12a0\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce"
Jul 15 23:57:50.192864 kubelet[2737]: I0715 23:57:50.192804 2737 apiserver.go:52] "Watching apiserver"
Jul 15 23:57:50.241495 kubelet[2737]: I0715 23:57:50.241437 2737 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 15 23:57:50.330096 kubelet[2737]: W0715 23:57:50.330044 2737 warnings.go:70]
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jul 15 23:57:50.330347 kubelet[2737]: E0715 23:57:50.330126 2737 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" already exists" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:57:50.361120 kubelet[2737]: I0715 23:57:50.361040 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" podStartSLOduration=1.36101974 podStartE2EDuration="1.36101974s" podCreationTimestamp="2025-07-15 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:57:50.360463792 +0000 UTC m=+1.282110696" watchObservedRunningTime="2025-07-15 23:57:50.36101974 +0000 UTC m=+1.282666668" Jul 15 23:57:50.393401 kubelet[2737]: I0715 23:57:50.393209 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" podStartSLOduration=1.393181788 podStartE2EDuration="1.393181788s" podCreationTimestamp="2025-07-15 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:57:50.37628748 +0000 UTC m=+1.297934376" watchObservedRunningTime="2025-07-15 23:57:50.393181788 +0000 UTC m=+1.314828687" Jul 15 23:57:50.408714 kubelet[2737]: I0715 23:57:50.408111 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" podStartSLOduration=1.4080871959999999 podStartE2EDuration="1.408087196s" 
podCreationTimestamp="2025-07-15 23:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:57:50.395675284 +0000 UTC m=+1.317322187" watchObservedRunningTime="2025-07-15 23:57:50.408087196 +0000 UTC m=+1.329734095" Jul 15 23:57:52.995134 kubelet[2737]: I0715 23:57:52.995086 2737 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:57:52.996585 containerd[1533]: time="2025-07-15T23:57:52.996524410Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:57:52.998307 kubelet[2737]: I0715 23:57:52.997682 2737 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:57:53.867339 kubelet[2737]: I0715 23:57:53.867130 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8l7\" (UniqueName: \"kubernetes.io/projected/df524804-985e-4702-aba3-91dc5e805af3-kube-api-access-8t8l7\") pod \"kube-proxy-x8dfp\" (UID: \"df524804-985e-4702-aba3-91dc5e805af3\") " pod="kube-system/kube-proxy-x8dfp" Jul 15 23:57:53.867339 kubelet[2737]: I0715 23:57:53.867190 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df524804-985e-4702-aba3-91dc5e805af3-kube-proxy\") pod \"kube-proxy-x8dfp\" (UID: \"df524804-985e-4702-aba3-91dc5e805af3\") " pod="kube-system/kube-proxy-x8dfp" Jul 15 23:57:53.867339 kubelet[2737]: I0715 23:57:53.867222 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df524804-985e-4702-aba3-91dc5e805af3-xtables-lock\") pod \"kube-proxy-x8dfp\" (UID: \"df524804-985e-4702-aba3-91dc5e805af3\") " pod="kube-system/kube-proxy-x8dfp" Jul 15 23:57:53.867339 
kubelet[2737]: I0715 23:57:53.867250 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df524804-985e-4702-aba3-91dc5e805af3-lib-modules\") pod \"kube-proxy-x8dfp\" (UID: \"df524804-985e-4702-aba3-91dc5e805af3\") " pod="kube-system/kube-proxy-x8dfp" Jul 15 23:57:53.874058 systemd[1]: Created slice kubepods-besteffort-poddf524804_985e_4702_aba3_91dc5e805af3.slice - libcontainer container kubepods-besteffort-poddf524804_985e_4702_aba3_91dc5e805af3.slice. Jul 15 23:57:54.161334 systemd[1]: Created slice kubepods-besteffort-pode26e85db_07f6_4540_82f7_26cebf4dbc93.slice - libcontainer container kubepods-besteffort-pode26e85db_07f6_4540_82f7_26cebf4dbc93.slice. Jul 15 23:57:54.169206 kubelet[2737]: I0715 23:57:54.169162 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e26e85db-07f6-4540-82f7-26cebf4dbc93-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-qfn57\" (UID: \"e26e85db-07f6-4540-82f7-26cebf4dbc93\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-qfn57" Jul 15 23:57:54.169206 kubelet[2737]: I0715 23:57:54.169217 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnbpf\" (UniqueName: \"kubernetes.io/projected/e26e85db-07f6-4540-82f7-26cebf4dbc93-kube-api-access-cnbpf\") pod \"tigera-operator-5bf8dfcb4-qfn57\" (UID: \"e26e85db-07f6-4540-82f7-26cebf4dbc93\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-qfn57" Jul 15 23:57:54.182784 containerd[1533]: time="2025-07-15T23:57:54.182737427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8dfp,Uid:df524804-985e-4702-aba3-91dc5e805af3,Namespace:kube-system,Attempt:0,}" Jul 15 23:57:54.211615 containerd[1533]: time="2025-07-15T23:57:54.211505538Z" level=info msg="connecting to shim 
39d9609e65a1571a05b2be63acb0d6df8cc88e90aae869abb7511517ad8ad078" address="unix:///run/containerd/s/10d9d00ec73834f5cc466affcc2d2e94f54cac78081bfc63ee6e05d0a70a9f5b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:57:54.253095 systemd[1]: Started cri-containerd-39d9609e65a1571a05b2be63acb0d6df8cc88e90aae869abb7511517ad8ad078.scope - libcontainer container 39d9609e65a1571a05b2be63acb0d6df8cc88e90aae869abb7511517ad8ad078. Jul 15 23:57:54.305570 containerd[1533]: time="2025-07-15T23:57:54.305522883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8dfp,Uid:df524804-985e-4702-aba3-91dc5e805af3,Namespace:kube-system,Attempt:0,} returns sandbox id \"39d9609e65a1571a05b2be63acb0d6df8cc88e90aae869abb7511517ad8ad078\"" Jul 15 23:57:54.311737 containerd[1533]: time="2025-07-15T23:57:54.311459284Z" level=info msg="CreateContainer within sandbox \"39d9609e65a1571a05b2be63acb0d6df8cc88e90aae869abb7511517ad8ad078\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:57:54.324328 containerd[1533]: time="2025-07-15T23:57:54.324282807Z" level=info msg="Container eb2cd4fa868bdc745bc4e0012f94465ba336b924446cb84f39024eb95d0aedff: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:57:54.343188 containerd[1533]: time="2025-07-15T23:57:54.343092335Z" level=info msg="CreateContainer within sandbox \"39d9609e65a1571a05b2be63acb0d6df8cc88e90aae869abb7511517ad8ad078\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb2cd4fa868bdc745bc4e0012f94465ba336b924446cb84f39024eb95d0aedff\"" Jul 15 23:57:54.343818 containerd[1533]: time="2025-07-15T23:57:54.343752809Z" level=info msg="StartContainer for \"eb2cd4fa868bdc745bc4e0012f94465ba336b924446cb84f39024eb95d0aedff\"" Jul 15 23:57:54.347176 containerd[1533]: time="2025-07-15T23:57:54.347123524Z" level=info msg="connecting to shim eb2cd4fa868bdc745bc4e0012f94465ba336b924446cb84f39024eb95d0aedff" 
address="unix:///run/containerd/s/10d9d00ec73834f5cc466affcc2d2e94f54cac78081bfc63ee6e05d0a70a9f5b" protocol=ttrpc version=3 Jul 15 23:57:54.375160 systemd[1]: Started cri-containerd-eb2cd4fa868bdc745bc4e0012f94465ba336b924446cb84f39024eb95d0aedff.scope - libcontainer container eb2cd4fa868bdc745bc4e0012f94465ba336b924446cb84f39024eb95d0aedff. Jul 15 23:57:54.437462 containerd[1533]: time="2025-07-15T23:57:54.436537711Z" level=info msg="StartContainer for \"eb2cd4fa868bdc745bc4e0012f94465ba336b924446cb84f39024eb95d0aedff\" returns successfully" Jul 15 23:57:54.468444 containerd[1533]: time="2025-07-15T23:57:54.468105722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-qfn57,Uid:e26e85db-07f6-4540-82f7-26cebf4dbc93,Namespace:tigera-operator,Attempt:0,}" Jul 15 23:57:54.500925 containerd[1533]: time="2025-07-15T23:57:54.499757112Z" level=info msg="connecting to shim fab94f176c248dbd738468c43bdcee63fbba61a7b71978075133cc5eebcafc0e" address="unix:///run/containerd/s/5d5ba3637e6f65a34d46553a02c12c7eb5012808cb36f6ed3fb68a31d75d2167" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:57:54.540141 systemd[1]: Started cri-containerd-fab94f176c248dbd738468c43bdcee63fbba61a7b71978075133cc5eebcafc0e.scope - libcontainer container fab94f176c248dbd738468c43bdcee63fbba61a7b71978075133cc5eebcafc0e. 
Jul 15 23:57:54.644470 containerd[1533]: time="2025-07-15T23:57:54.644376686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-qfn57,Uid:e26e85db-07f6-4540-82f7-26cebf4dbc93,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fab94f176c248dbd738468c43bdcee63fbba61a7b71978075133cc5eebcafc0e\""
Jul 15 23:57:54.648699 containerd[1533]: time="2025-07-15T23:57:54.648666098Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 15 23:57:55.359077 kubelet[2737]: I0715 23:57:55.358937 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x8dfp" podStartSLOduration=2.356772053 podStartE2EDuration="2.356772053s" podCreationTimestamp="2025-07-15 23:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:57:55.354124646 +0000 UTC m=+6.275771552" watchObservedRunningTime="2025-07-15 23:57:55.356772053 +0000 UTC m=+6.278418957"
Jul 15 23:57:55.789075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366601330.mount: Deactivated successfully.
Jul 15 23:57:57.586496 containerd[1533]: time="2025-07-15T23:57:57.586403965Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:57.587906 containerd[1533]: time="2025-07-15T23:57:57.587806228Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 15 23:57:57.589495 containerd[1533]: time="2025-07-15T23:57:57.589365476Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:57.592360 containerd[1533]: time="2025-07-15T23:57:57.592290961Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:57:57.594400 containerd[1533]: time="2025-07-15T23:57:57.593422431Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.944557518s"
Jul 15 23:57:57.594400 containerd[1533]: time="2025-07-15T23:57:57.593467921Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 15 23:57:57.598396 containerd[1533]: time="2025-07-15T23:57:57.598220400Z" level=info msg="CreateContainer within sandbox \"fab94f176c248dbd738468c43bdcee63fbba61a7b71978075133cc5eebcafc0e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 15 23:57:57.611912 containerd[1533]: time="2025-07-15T23:57:57.608502607Z" level=info msg="Container ec7958a7ff2ec87e98c0c9e58b08a3711a223034b6d00b2c070c8d39d470945d: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:57:57.619772 containerd[1533]: time="2025-07-15T23:57:57.619724998Z" level=info msg="CreateContainer within sandbox \"fab94f176c248dbd738468c43bdcee63fbba61a7b71978075133cc5eebcafc0e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ec7958a7ff2ec87e98c0c9e58b08a3711a223034b6d00b2c070c8d39d470945d\""
Jul 15 23:57:57.620547 containerd[1533]: time="2025-07-15T23:57:57.620452377Z" level=info msg="StartContainer for \"ec7958a7ff2ec87e98c0c9e58b08a3711a223034b6d00b2c070c8d39d470945d\""
Jul 15 23:57:57.622150 containerd[1533]: time="2025-07-15T23:57:57.622114007Z" level=info msg="connecting to shim ec7958a7ff2ec87e98c0c9e58b08a3711a223034b6d00b2c070c8d39d470945d" address="unix:///run/containerd/s/5d5ba3637e6f65a34d46553a02c12c7eb5012808cb36f6ed3fb68a31d75d2167" protocol=ttrpc version=3
Jul 15 23:57:57.655107 systemd[1]: Started cri-containerd-ec7958a7ff2ec87e98c0c9e58b08a3711a223034b6d00b2c070c8d39d470945d.scope - libcontainer container ec7958a7ff2ec87e98c0c9e58b08a3711a223034b6d00b2c070c8d39d470945d.
Jul 15 23:57:57.696859 containerd[1533]: time="2025-07-15T23:57:57.696813898Z" level=info msg="StartContainer for \"ec7958a7ff2ec87e98c0c9e58b08a3711a223034b6d00b2c070c8d39d470945d\" returns successfully"
Jul 15 23:57:58.368481 kubelet[2737]: I0715 23:57:58.368325 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-qfn57" podStartSLOduration=1.42063029 podStartE2EDuration="4.368299507s" podCreationTimestamp="2025-07-15 23:57:54 +0000 UTC" firstStartedPulling="2025-07-15 23:57:54.647368623 +0000 UTC m=+5.569015514" lastFinishedPulling="2025-07-15 23:57:57.595037851 +0000 UTC m=+8.516684731" observedRunningTime="2025-07-15 23:57:58.367480007 +0000 UTC m=+9.289126985" watchObservedRunningTime="2025-07-15 23:57:58.368299507 +0000 UTC m=+9.289946409"
Jul 15 23:57:59.176718 update_engine[1514]: I20250715 23:57:59.175933 1514 update_attempter.cc:509] Updating boot flags...
Jul 15 23:58:04.968115 sudo[1823]: pam_unix(sudo:session): session closed for user root
Jul 15 23:58:05.013415 sshd[1822]: Connection closed by 139.178.89.65 port 44510
Jul 15 23:58:05.016552 sshd-session[1820]: pam_unix(sshd:session): session closed for user core
Jul 15 23:58:05.024260 systemd[1]: sshd@6-10.128.0.36:22-139.178.89.65:44510.service: Deactivated successfully.
Jul 15 23:58:05.030452 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 23:58:05.031358 systemd[1]: session-7.scope: Consumed 7.594s CPU time, 225.1M memory peak.
Jul 15 23:58:05.037113 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit.
Jul 15 23:58:05.042223 systemd-logind[1507]: Removed session 7.
Jul 15 23:58:10.845540 systemd[1]: Created slice kubepods-besteffort-pod368f5f40_16ff_4e11_ab4b_6d6603a27bee.slice - libcontainer container kubepods-besteffort-pod368f5f40_16ff_4e11_ab4b_6d6603a27bee.slice.
Jul 15 23:58:10.975055 kubelet[2737]: I0715 23:58:10.974795 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/368f5f40-16ff-4e11-ab4b-6d6603a27bee-typha-certs\") pod \"calico-typha-7c88b878c9-fjqg7\" (UID: \"368f5f40-16ff-4e11-ab4b-6d6603a27bee\") " pod="calico-system/calico-typha-7c88b878c9-fjqg7"
Jul 15 23:58:10.975055 kubelet[2737]: I0715 23:58:10.974857 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/368f5f40-16ff-4e11-ab4b-6d6603a27bee-tigera-ca-bundle\") pod \"calico-typha-7c88b878c9-fjqg7\" (UID: \"368f5f40-16ff-4e11-ab4b-6d6603a27bee\") " pod="calico-system/calico-typha-7c88b878c9-fjqg7"
Jul 15 23:58:10.975055 kubelet[2737]: I0715 23:58:10.974916 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7mgt\" (UniqueName: \"kubernetes.io/projected/368f5f40-16ff-4e11-ab4b-6d6603a27bee-kube-api-access-x7mgt\") pod \"calico-typha-7c88b878c9-fjqg7\" (UID: \"368f5f40-16ff-4e11-ab4b-6d6603a27bee\") " pod="calico-system/calico-typha-7c88b878c9-fjqg7"
Jul 15 23:58:11.156268 containerd[1533]: time="2025-07-15T23:58:11.156182595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c88b878c9-fjqg7,Uid:368f5f40-16ff-4e11-ab4b-6d6603a27bee,Namespace:calico-system,Attempt:0,}"
Jul 15 23:58:11.192955 containerd[1533]: time="2025-07-15T23:58:11.192583966Z" level=info msg="connecting to shim 82904045ae2802836d761634a8f09104749673ceff667a61b452029c31aaba5b" address="unix:///run/containerd/s/ee32e2a3109c02a7b97469d4431dbb610351977e3bddf57da884037df4786780" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:58:11.241148 systemd[1]: Started cri-containerd-82904045ae2802836d761634a8f09104749673ceff667a61b452029c31aaba5b.scope - libcontainer container 82904045ae2802836d761634a8f09104749673ceff667a61b452029c31aaba5b.
Jul 15 23:58:11.277319 kubelet[2737]: I0715 23:58:11.277252 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-xtables-lock\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.277319 kubelet[2737]: I0715 23:58:11.277306 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-cni-log-dir\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.277523 kubelet[2737]: I0715 23:58:11.277334 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-lib-modules\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.277523 kubelet[2737]: I0715 23:58:11.277357 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-cni-net-dir\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.277523 kubelet[2737]: I0715 23:58:11.277378 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-policysync\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.277523 kubelet[2737]: I0715 23:58:11.277406 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-var-lib-calico\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.277523 kubelet[2737]: I0715 23:58:11.277433 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-flexvol-driver-host\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.279631 kubelet[2737]: I0715 23:58:11.277459 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a9ea76da-a654-43a2-bd32-5980edb02f52-node-certs\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.279631 kubelet[2737]: I0715 23:58:11.277489 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-cni-bin-dir\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.279631 kubelet[2737]: I0715 23:58:11.277517 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9ea76da-a654-43a2-bd32-5980edb02f52-tigera-ca-bundle\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.279631 kubelet[2737]: I0715 23:58:11.277542 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a9ea76da-a654-43a2-bd32-5980edb02f52-var-run-calico\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.279631 kubelet[2737]: I0715 23:58:11.277572 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h674j\" (UniqueName: \"kubernetes.io/projected/a9ea76da-a654-43a2-bd32-5980edb02f52-kube-api-access-h674j\") pod \"calico-node-9w7x7\" (UID: \"a9ea76da-a654-43a2-bd32-5980edb02f52\") " pod="calico-system/calico-node-9w7x7"
Jul 15 23:58:11.294967 systemd[1]: Created slice kubepods-besteffort-poda9ea76da_a654_43a2_bd32_5980edb02f52.slice - libcontainer container kubepods-besteffort-poda9ea76da_a654_43a2_bd32_5980edb02f52.slice.
Jul 15 23:58:11.396906 kubelet[2737]: E0715 23:58:11.395990 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.396906 kubelet[2737]: W0715 23:58:11.396024 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.396906 kubelet[2737]: E0715 23:58:11.396066 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.422387 kubelet[2737]: E0715 23:58:11.421283 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.422387 kubelet[2737]: W0715 23:58:11.422312 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.423202 kubelet[2737]: E0715 23:58:11.422347 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.428642 containerd[1533]: time="2025-07-15T23:58:11.428600388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c88b878c9-fjqg7,Uid:368f5f40-16ff-4e11-ab4b-6d6603a27bee,Namespace:calico-system,Attempt:0,} returns sandbox id \"82904045ae2802836d761634a8f09104749673ceff667a61b452029c31aaba5b\""
Jul 15 23:58:11.434035 containerd[1533]: time="2025-07-15T23:58:11.433998345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 15 23:58:11.537156 kubelet[2737]: E0715 23:58:11.536960 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h292n" podUID="dbc77e5e-b79b-4e32-be42-a7d09e567054"
Jul 15 23:58:11.580345 kubelet[2737]: E0715 23:58:11.580102 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.580345 kubelet[2737]: W0715 23:58:11.580132 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.580345 kubelet[2737]: E0715 23:58:11.580162 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.582175 kubelet[2737]: E0715 23:58:11.582108 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.582175 kubelet[2737]: W0715 23:58:11.582133 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.582517 kubelet[2737]: E0715 23:58:11.582157 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.582903 kubelet[2737]: E0715 23:58:11.582796 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.582903 kubelet[2737]: W0715 23:58:11.582822 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.582903 kubelet[2737]: E0715 23:58:11.582843 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.583569 kubelet[2737]: E0715 23:58:11.583509 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.583569 kubelet[2737]: W0715 23:58:11.583529 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.583740 kubelet[2737]: E0715 23:58:11.583549 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.584826 kubelet[2737]: E0715 23:58:11.584792 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.584990 kubelet[2737]: W0715 23:58:11.584969 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.585202 kubelet[2737]: E0715 23:58:11.585151 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.585444 kubelet[2737]: I0715 23:58:11.585408 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbc77e5e-b79b-4e32-be42-a7d09e567054-kubelet-dir\") pod \"csi-node-driver-h292n\" (UID: \"dbc77e5e-b79b-4e32-be42-a7d09e567054\") " pod="calico-system/csi-node-driver-h292n"
Jul 15 23:58:11.585759 kubelet[2737]: E0715 23:58:11.585720 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.585759 kubelet[2737]: W0715 23:58:11.585739 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.587113 kubelet[2737]: E0715 23:58:11.587093 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.587461 kubelet[2737]: E0715 23:58:11.587410 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.587461 kubelet[2737]: W0715 23:58:11.587429 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.587666 kubelet[2737]: E0715 23:58:11.587612 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.588132 kubelet[2737]: E0715 23:58:11.588093 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.588132 kubelet[2737]: W0715 23:58:11.588110 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.588366 kubelet[2737]: E0715 23:58:11.588305 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.588799 kubelet[2737]: E0715 23:58:11.588756 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.589101 kubelet[2737]: W0715 23:58:11.588954 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.589101 kubelet[2737]: E0715 23:58:11.588980 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.590224 kubelet[2737]: E0715 23:58:11.590201 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.590224 kubelet[2737]: W0715 23:58:11.590223 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.590366 kubelet[2737]: E0715 23:58:11.590242 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:11.590543 kubelet[2737]: E0715 23:58:11.590526 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:11.590543 kubelet[2737]: W0715 23:58:11.590542 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:11.590679 kubelet[2737]: E0715 23:58:11.590558 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 15 23:58:11.590845 kubelet[2737]: E0715 23:58:11.590827 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.590963 kubelet[2737]: W0715 23:58:11.590845 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.590963 kubelet[2737]: E0715 23:58:11.590860 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.592021 kubelet[2737]: E0715 23:58:11.591991 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.592021 kubelet[2737]: W0715 23:58:11.592011 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.592160 kubelet[2737]: E0715 23:58:11.592029 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.592309 kubelet[2737]: E0715 23:58:11.592283 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.592309 kubelet[2737]: W0715 23:58:11.592302 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.592825 kubelet[2737]: E0715 23:58:11.592318 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.592825 kubelet[2737]: E0715 23:58:11.592563 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.592825 kubelet[2737]: W0715 23:58:11.592575 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.592825 kubelet[2737]: E0715 23:58:11.592589 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.593118 kubelet[2737]: E0715 23:58:11.592831 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.593118 kubelet[2737]: W0715 23:58:11.592848 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.593118 kubelet[2737]: E0715 23:58:11.592864 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.593589 kubelet[2737]: E0715 23:58:11.593243 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.593589 kubelet[2737]: W0715 23:58:11.593266 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.593589 kubelet[2737]: E0715 23:58:11.593282 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.595174 kubelet[2737]: E0715 23:58:11.595153 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.595174 kubelet[2737]: W0715 23:58:11.595173 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.595313 kubelet[2737]: E0715 23:58:11.595190 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.596053 kubelet[2737]: E0715 23:58:11.595458 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.596053 kubelet[2737]: W0715 23:58:11.595470 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.596053 kubelet[2737]: E0715 23:58:11.595485 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.596053 kubelet[2737]: E0715 23:58:11.595723 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.596053 kubelet[2737]: W0715 23:58:11.595734 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.596053 kubelet[2737]: E0715 23:58:11.595748 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.596053 kubelet[2737]: E0715 23:58:11.595995 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.596053 kubelet[2737]: W0715 23:58:11.596010 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.596053 kubelet[2737]: E0715 23:58:11.596025 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.596540 kubelet[2737]: E0715 23:58:11.596287 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.596540 kubelet[2737]: W0715 23:58:11.596300 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.596540 kubelet[2737]: E0715 23:58:11.596315 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.596696 kubelet[2737]: E0715 23:58:11.596562 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.596696 kubelet[2737]: W0715 23:58:11.596574 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.596696 kubelet[2737]: E0715 23:58:11.596589 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.603207 containerd[1533]: time="2025-07-15T23:58:11.603135341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9w7x7,Uid:a9ea76da-a654-43a2-bd32-5980edb02f52,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:11.638284 containerd[1533]: time="2025-07-15T23:58:11.638203304Z" level=info msg="connecting to shim 4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a" address="unix:///run/containerd/s/d6964cb4372d4166673e06d2956a594ee0c16017f99d6c1e542212c47000cffb" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:11.685317 systemd[1]: Started cri-containerd-4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a.scope - libcontainer container 4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a. Jul 15 23:58:11.689980 kubelet[2737]: E0715 23:58:11.689846 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.689980 kubelet[2737]: W0715 23:58:11.689906 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.689980 kubelet[2737]: E0715 23:58:11.689936 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.690925 kubelet[2737]: I0715 23:58:11.690627 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dbc77e5e-b79b-4e32-be42-a7d09e567054-socket-dir\") pod \"csi-node-driver-h292n\" (UID: \"dbc77e5e-b79b-4e32-be42-a7d09e567054\") " pod="calico-system/csi-node-driver-h292n" Jul 15 23:58:11.693490 kubelet[2737]: E0715 23:58:11.693296 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.693490 kubelet[2737]: W0715 23:58:11.693317 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.694233 kubelet[2737]: E0715 23:58:11.693709 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.696276 kubelet[2737]: E0715 23:58:11.696171 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.696276 kubelet[2737]: W0715 23:58:11.696217 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.697296 kubelet[2737]: E0715 23:58:11.697222 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.701738 kubelet[2737]: E0715 23:58:11.699144 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.702324 kubelet[2737]: W0715 23:58:11.701953 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.702324 kubelet[2737]: E0715 23:58:11.702001 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.702324 kubelet[2737]: I0715 23:58:11.702050 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dbc77e5e-b79b-4e32-be42-a7d09e567054-varrun\") pod \"csi-node-driver-h292n\" (UID: \"dbc77e5e-b79b-4e32-be42-a7d09e567054\") " pod="calico-system/csi-node-driver-h292n" Jul 15 23:58:11.702679 kubelet[2737]: E0715 23:58:11.702617 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.702679 kubelet[2737]: W0715 23:58:11.702651 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.703335 kubelet[2737]: E0715 23:58:11.702928 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.703335 kubelet[2737]: I0715 23:58:11.703261 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q8zk\" (UniqueName: \"kubernetes.io/projected/dbc77e5e-b79b-4e32-be42-a7d09e567054-kube-api-access-7q8zk\") pod \"csi-node-driver-h292n\" (UID: \"dbc77e5e-b79b-4e32-be42-a7d09e567054\") " pod="calico-system/csi-node-driver-h292n" Jul 15 23:58:11.704979 kubelet[2737]: E0715 23:58:11.704944 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.705174 kubelet[2737]: W0715 23:58:11.705101 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.705174 kubelet[2737]: E0715 23:58:11.705138 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.705370 kubelet[2737]: I0715 23:58:11.705306 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dbc77e5e-b79b-4e32-be42-a7d09e567054-registration-dir\") pod \"csi-node-driver-h292n\" (UID: \"dbc77e5e-b79b-4e32-be42-a7d09e567054\") " pod="calico-system/csi-node-driver-h292n" Jul 15 23:58:11.707112 kubelet[2737]: E0715 23:58:11.707087 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.707112 kubelet[2737]: W0715 23:58:11.707111 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.708043 kubelet[2737]: E0715 23:58:11.707133 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.711371 kubelet[2737]: E0715 23:58:11.711223 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.711371 kubelet[2737]: W0715 23:58:11.711247 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.711371 kubelet[2737]: E0715 23:58:11.711279 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.713012 kubelet[2737]: E0715 23:58:11.712974 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.713012 kubelet[2737]: W0715 23:58:11.713001 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.713183 kubelet[2737]: E0715 23:58:11.713021 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.713804 kubelet[2737]: E0715 23:58:11.713775 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.713804 kubelet[2737]: W0715 23:58:11.713801 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.713970 kubelet[2737]: E0715 23:58:11.713821 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.716412 kubelet[2737]: E0715 23:58:11.716385 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.716412 kubelet[2737]: W0715 23:58:11.716412 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.716579 kubelet[2737]: E0715 23:58:11.716431 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.717051 kubelet[2737]: E0715 23:58:11.717026 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.717051 kubelet[2737]: W0715 23:58:11.717050 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.717186 kubelet[2737]: E0715 23:58:11.717081 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.719991 kubelet[2737]: E0715 23:58:11.719957 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.720092 kubelet[2737]: W0715 23:58:11.719986 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.720092 kubelet[2737]: E0715 23:58:11.720013 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.722274 kubelet[2737]: E0715 23:58:11.722243 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.722570 kubelet[2737]: W0715 23:58:11.722276 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.722570 kubelet[2737]: E0715 23:58:11.722296 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.723092 kubelet[2737]: E0715 23:58:11.723067 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.723092 kubelet[2737]: W0715 23:58:11.723089 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.723355 kubelet[2737]: E0715 23:58:11.723108 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.724498 kubelet[2737]: E0715 23:58:11.724467 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.724498 kubelet[2737]: W0715 23:58:11.724494 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.725071 kubelet[2737]: E0715 23:58:11.724523 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.725259 kubelet[2737]: E0715 23:58:11.725149 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.725259 kubelet[2737]: W0715 23:58:11.725169 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.725259 kubelet[2737]: E0715 23:58:11.725196 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.809284 kubelet[2737]: E0715 23:58:11.808957 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.809284 kubelet[2737]: W0715 23:58:11.808987 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.809284 kubelet[2737]: E0715 23:58:11.809017 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.809824 kubelet[2737]: E0715 23:58:11.809578 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.809824 kubelet[2737]: W0715 23:58:11.809597 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.809824 kubelet[2737]: E0715 23:58:11.809624 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.811111 kubelet[2737]: E0715 23:58:11.810328 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.811111 kubelet[2737]: W0715 23:58:11.810344 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.811111 kubelet[2737]: E0715 23:58:11.810512 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.811372 containerd[1533]: time="2025-07-15T23:58:11.811328392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9w7x7,Uid:a9ea76da-a654-43a2-bd32-5980edb02f52,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a\"" Jul 15 23:58:11.811686 kubelet[2737]: E0715 23:58:11.811582 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.811686 kubelet[2737]: W0715 23:58:11.811604 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.811686 kubelet[2737]: E0715 23:58:11.811626 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.812642 kubelet[2737]: E0715 23:58:11.812585 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.812642 kubelet[2737]: W0715 23:58:11.812604 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.812642 kubelet[2737]: E0715 23:58:11.812624 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.813411 kubelet[2737]: E0715 23:58:11.813302 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.813411 kubelet[2737]: W0715 23:58:11.813349 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.814153 kubelet[2737]: E0715 23:58:11.813992 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.814153 kubelet[2737]: W0715 23:58:11.814011 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.814457 kubelet[2737]: E0715 23:58:11.814435 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.815174 kubelet[2737]: E0715 23:58:11.814743 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.815484 kubelet[2737]: W0715 23:58:11.815377 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.816591 kubelet[2737]: E0715 23:58:11.816412 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.816591 kubelet[2737]: W0715 23:58:11.816431 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.816926 kubelet[2737]: E0715 23:58:11.816899 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.817124 kubelet[2737]: E0715 23:58:11.814755 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.817290 kubelet[2737]: E0715 23:58:11.817272 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.817684 kubelet[2737]: E0715 23:58:11.817408 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.817684 kubelet[2737]: W0715 23:58:11.817536 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.817684 kubelet[2737]: E0715 23:58:11.817554 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.818222 kubelet[2737]: E0715 23:58:11.818184 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.818222 kubelet[2737]: W0715 23:58:11.818202 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.818510 kubelet[2737]: E0715 23:58:11.818399 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.818870 kubelet[2737]: E0715 23:58:11.818828 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.818870 kubelet[2737]: W0715 23:58:11.818848 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.819212 kubelet[2737]: E0715 23:58:11.819109 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.819759 kubelet[2737]: E0715 23:58:11.819535 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.819913 kubelet[2737]: W0715 23:58:11.819856 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.820109 kubelet[2737]: E0715 23:58:11.820090 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.820486 kubelet[2737]: E0715 23:58:11.820444 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.820486 kubelet[2737]: W0715 23:58:11.820463 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.820716 kubelet[2737]: E0715 23:58:11.820697 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.821166 kubelet[2737]: E0715 23:58:11.821119 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.821166 kubelet[2737]: W0715 23:58:11.821139 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.821623 kubelet[2737]: E0715 23:58:11.821386 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.822023 kubelet[2737]: E0715 23:58:11.822005 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.822139 kubelet[2737]: W0715 23:58:11.822123 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.822321 kubelet[2737]: E0715 23:58:11.822281 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.822702 kubelet[2737]: E0715 23:58:11.822664 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.822702 kubelet[2737]: W0715 23:58:11.822681 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.822973 kubelet[2737]: E0715 23:58:11.822841 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.823328 kubelet[2737]: E0715 23:58:11.823289 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.823328 kubelet[2737]: W0715 23:58:11.823307 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.823786 kubelet[2737]: E0715 23:58:11.823487 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.824175 kubelet[2737]: E0715 23:58:11.824156 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.824310 kubelet[2737]: W0715 23:58:11.824281 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.824675 kubelet[2737]: E0715 23:58:11.824402 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:11.825121 kubelet[2737]: E0715 23:58:11.825101 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.825290 kubelet[2737]: W0715 23:58:11.825223 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.825290 kubelet[2737]: E0715 23:58:11.825248 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:11.835536 kubelet[2737]: E0715 23:58:11.835472 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:11.835536 kubelet[2737]: W0715 23:58:11.835488 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:11.835536 kubelet[2737]: E0715 23:58:11.835504 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:12.376283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678717690.mount: Deactivated successfully. 
Jul 15 23:58:13.253062 kubelet[2737]: E0715 23:58:13.253011 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h292n" podUID="dbc77e5e-b79b-4e32-be42-a7d09e567054" Jul 15 23:58:13.528848 containerd[1533]: time="2025-07-15T23:58:13.528192392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:13.532184 containerd[1533]: time="2025-07-15T23:58:13.532131514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 23:58:13.533944 containerd[1533]: time="2025-07-15T23:58:13.533824092Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:13.539345 containerd[1533]: time="2025-07-15T23:58:13.538388945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:13.543746 containerd[1533]: time="2025-07-15T23:58:13.542704438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.108665984s" Jul 15 23:58:13.543746 containerd[1533]: time="2025-07-15T23:58:13.542753383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 23:58:13.549025 containerd[1533]: time="2025-07-15T23:58:13.548947326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 23:58:13.571142 containerd[1533]: time="2025-07-15T23:58:13.571095848Z" level=info msg="CreateContainer within sandbox \"82904045ae2802836d761634a8f09104749673ceff667a61b452029c31aaba5b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 23:58:13.583920 containerd[1533]: time="2025-07-15T23:58:13.583644017Z" level=info msg="Container f9463401867b843e83ebc6bf885c2e59f6f2412cb7249092ae62564b89742cdb: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:13.599543 containerd[1533]: time="2025-07-15T23:58:13.599501637Z" level=info msg="CreateContainer within sandbox \"82904045ae2802836d761634a8f09104749673ceff667a61b452029c31aaba5b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f9463401867b843e83ebc6bf885c2e59f6f2412cb7249092ae62564b89742cdb\"" Jul 15 23:58:13.601407 containerd[1533]: time="2025-07-15T23:58:13.601255798Z" level=info msg="StartContainer for \"f9463401867b843e83ebc6bf885c2e59f6f2412cb7249092ae62564b89742cdb\"" Jul 15 23:58:13.603995 containerd[1533]: time="2025-07-15T23:58:13.603863699Z" level=info msg="connecting to shim f9463401867b843e83ebc6bf885c2e59f6f2412cb7249092ae62564b89742cdb" address="unix:///run/containerd/s/ee32e2a3109c02a7b97469d4431dbb610351977e3bddf57da884037df4786780" protocol=ttrpc version=3 Jul 15 23:58:13.648283 systemd[1]: Started cri-containerd-f9463401867b843e83ebc6bf885c2e59f6f2412cb7249092ae62564b89742cdb.scope - libcontainer container f9463401867b843e83ebc6bf885c2e59f6f2412cb7249092ae62564b89742cdb. 
Jul 15 23:58:13.723487 containerd[1533]: time="2025-07-15T23:58:13.723428643Z" level=info msg="StartContainer for \"f9463401867b843e83ebc6bf885c2e59f6f2412cb7249092ae62564b89742cdb\" returns successfully" Jul 15 23:58:14.463708 kubelet[2737]: I0715 23:58:14.463323 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c88b878c9-fjqg7" podStartSLOduration=2.349035888 podStartE2EDuration="4.463301527s" podCreationTimestamp="2025-07-15 23:58:10 +0000 UTC" firstStartedPulling="2025-07-15 23:58:11.432789487 +0000 UTC m=+22.354436366" lastFinishedPulling="2025-07-15 23:58:13.547055113 +0000 UTC m=+24.468702005" observedRunningTime="2025-07-15 23:58:14.462925011 +0000 UTC m=+25.384571937" watchObservedRunningTime="2025-07-15 23:58:14.463301527 +0000 UTC m=+25.384948429" Jul 15 23:58:14.469208 containerd[1533]: time="2025-07-15T23:58:14.469161925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:14.471533 containerd[1533]: time="2025-07-15T23:58:14.471471568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 23:58:14.474965 containerd[1533]: time="2025-07-15T23:58:14.474928203Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:14.477240 containerd[1533]: time="2025-07-15T23:58:14.477202292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:14.479898 containerd[1533]: time="2025-07-15T23:58:14.479724422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id 
\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 930.723936ms" Jul 15 23:58:14.480019 containerd[1533]: time="2025-07-15T23:58:14.479914445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 23:58:14.485699 containerd[1533]: time="2025-07-15T23:58:14.485227399Z" level=info msg="CreateContainer within sandbox \"4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 23:58:14.496208 containerd[1533]: time="2025-07-15T23:58:14.496177524Z" level=info msg="Container 592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:14.517456 containerd[1533]: time="2025-07-15T23:58:14.517400966Z" level=info msg="CreateContainer within sandbox \"4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6\"" Jul 15 23:58:14.518160 containerd[1533]: time="2025-07-15T23:58:14.518103232Z" level=info msg="StartContainer for \"592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6\"" Jul 15 23:58:14.520925 containerd[1533]: time="2025-07-15T23:58:14.520845075Z" level=info msg="connecting to shim 592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6" address="unix:///run/containerd/s/d6964cb4372d4166673e06d2956a594ee0c16017f99d6c1e542212c47000cffb" protocol=ttrpc version=3 Jul 15 23:58:14.521892 kubelet[2737]: E0715 23:58:14.521828 2737 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.522153 kubelet[2737]: W0715 23:58:14.522025 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.522153 kubelet[2737]: E0715 23:58:14.522062 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.523506 kubelet[2737]: E0715 23:58:14.523487 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.523778 kubelet[2737]: W0715 23:58:14.523652 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.523778 kubelet[2737]: E0715 23:58:14.523681 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.524689 kubelet[2737]: E0715 23:58:14.524634 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.525092 kubelet[2737]: W0715 23:58:14.524780 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.525092 kubelet[2737]: E0715 23:58:14.524808 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.526112 kubelet[2737]: E0715 23:58:14.526092 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.526112 kubelet[2737]: W0715 23:58:14.526158 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.526112 kubelet[2737]: E0715 23:58:14.526180 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.527006 kubelet[2737]: E0715 23:58:14.526961 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.527216 kubelet[2737]: W0715 23:58:14.527086 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.527216 kubelet[2737]: E0715 23:58:14.527110 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.527856 kubelet[2737]: E0715 23:58:14.527821 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.528214 kubelet[2737]: W0715 23:58:14.527919 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.528214 kubelet[2737]: E0715 23:58:14.527949 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.528824 kubelet[2737]: E0715 23:58:14.528775 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.529045 kubelet[2737]: W0715 23:58:14.528927 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.529388 kubelet[2737]: E0715 23:58:14.528964 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.529857 kubelet[2737]: E0715 23:58:14.529807 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.530229 kubelet[2737]: W0715 23:58:14.529942 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.530229 kubelet[2737]: E0715 23:58:14.529964 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.531040 kubelet[2737]: E0715 23:58:14.531025 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.531256 kubelet[2737]: W0715 23:58:14.531174 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.531256 kubelet[2737]: E0715 23:58:14.531196 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.531821 kubelet[2737]: E0715 23:58:14.531799 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.532128 kubelet[2737]: W0715 23:58:14.531862 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.532128 kubelet[2737]: E0715 23:58:14.532019 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.532779 kubelet[2737]: E0715 23:58:14.532738 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.533323 kubelet[2737]: W0715 23:58:14.532913 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.533601 kubelet[2737]: E0715 23:58:14.533513 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.534277 kubelet[2737]: E0715 23:58:14.534259 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.534458 kubelet[2737]: W0715 23:58:14.534383 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.534458 kubelet[2737]: E0715 23:58:14.534406 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.535040 kubelet[2737]: E0715 23:58:14.535001 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.535240 kubelet[2737]: W0715 23:58:14.535147 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.535240 kubelet[2737]: E0715 23:58:14.535171 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.535730 kubelet[2737]: E0715 23:58:14.535706 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.535935 kubelet[2737]: W0715 23:58:14.535838 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.536151 kubelet[2737]: E0715 23:58:14.536017 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.536794 kubelet[2737]: E0715 23:58:14.536713 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.536794 kubelet[2737]: W0715 23:58:14.536733 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.536794 kubelet[2737]: E0715 23:58:14.536753 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.538086 kubelet[2737]: E0715 23:58:14.537922 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.538086 kubelet[2737]: W0715 23:58:14.537944 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.538086 kubelet[2737]: E0715 23:58:14.537960 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.540318 kubelet[2737]: E0715 23:58:14.540203 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.540542 kubelet[2737]: W0715 23:58:14.540439 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.540542 kubelet[2737]: E0715 23:58:14.540468 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.541641 kubelet[2737]: E0715 23:58:14.541591 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.541850 kubelet[2737]: W0715 23:58:14.541731 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.542369 kubelet[2737]: E0715 23:58:14.542343 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.542596 kubelet[2737]: E0715 23:58:14.542443 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.543086 kubelet[2737]: W0715 23:58:14.542822 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.543086 kubelet[2737]: E0715 23:58:14.542954 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.543606 kubelet[2737]: E0715 23:58:14.543580 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.543707 kubelet[2737]: W0715 23:58:14.543640 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.543707 kubelet[2737]: E0715 23:58:14.543669 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.544524 kubelet[2737]: E0715 23:58:14.544498 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.544524 kubelet[2737]: W0715 23:58:14.544525 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.544743 kubelet[2737]: E0715 23:58:14.544594 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:58:14.545354 kubelet[2737]: E0715 23:58:14.545318 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.545630 kubelet[2737]: W0715 23:58:14.545587 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.546068 kubelet[2737]: E0715 23:58:14.546047 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:58:14.547945 kubelet[2737]: E0715 23:58:14.547842 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:58:14.547945 kubelet[2737]: W0715 23:58:14.547912 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:58:14.548392 kubelet[2737]: E0715 23:58:14.548147 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 15 23:58:14.548636 kubelet[2737]: E0715 23:58:14.548560 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.548636 kubelet[2737]: W0715 23:58:14.548575 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.549051 kubelet[2737]: E0715 23:58:14.548867 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.549474 kubelet[2737]: E0715 23:58:14.549451 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.549767 kubelet[2737]: W0715 23:58:14.549604 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.549767 kubelet[2737]: E0715 23:58:14.549688 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.550995 kubelet[2737]: E0715 23:58:14.550422 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.550995 kubelet[2737]: W0715 23:58:14.550441 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.550995 kubelet[2737]: E0715 23:58:14.550483 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.550995 kubelet[2737]: E0715 23:58:14.550777 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.550995 kubelet[2737]: W0715 23:58:14.550793 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.550995 kubelet[2737]: E0715 23:58:14.550820 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.551624 kubelet[2737]: E0715 23:58:14.551594 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.552296 kubelet[2737]: W0715 23:58:14.551906 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.552296 kubelet[2737]: E0715 23:58:14.551942 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.552555 kubelet[2737]: E0715 23:58:14.552436 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.552639 kubelet[2737]: W0715 23:58:14.552583 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.552639 kubelet[2737]: E0715 23:58:14.552614 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.553017 kubelet[2737]: E0715 23:58:14.552995 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.553017 kubelet[2737]: W0715 23:58:14.553019 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.553144 kubelet[2737]: E0715 23:58:14.553086 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.556961 kubelet[2737]: E0715 23:58:14.556907 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.556961 kubelet[2737]: W0715 23:58:14.556949 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.557117 kubelet[2737]: E0715 23:58:14.556988 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.557852 kubelet[2737]: E0715 23:58:14.557791 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.557852 kubelet[2737]: W0715 23:58:14.557814 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.557852 kubelet[2737]: E0715 23:58:14.557841 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.561827 kubelet[2737]: E0715 23:58:14.560338 2737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:58:14.561827 kubelet[2737]: W0715 23:58:14.560358 2737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:58:14.561827 kubelet[2737]: E0715 23:58:14.560376 2737 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:58:14.581088 systemd[1]: Started cri-containerd-592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6.scope - libcontainer container 592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6.
Jul 15 23:58:14.646590 containerd[1533]: time="2025-07-15T23:58:14.646549845Z" level=info msg="StartContainer for \"592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6\" returns successfully"
Jul 15 23:58:14.660375 systemd[1]: cri-containerd-592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6.scope: Deactivated successfully.
Jul 15 23:58:14.664369 containerd[1533]: time="2025-07-15T23:58:14.664330939Z" level=info msg="received exit event container_id:\"592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6\" id:\"592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6\" pid:3418 exited_at:{seconds:1752623894 nanos:663844963}"
Jul 15 23:58:14.664662 containerd[1533]: time="2025-07-15T23:58:14.664634362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6\" id:\"592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6\" pid:3418 exited_at:{seconds:1752623894 nanos:663844963}"
Jul 15 23:58:14.702081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-592ae88334be72d74b5046e0e6112e48eb3e1dafe8cb6c87818aa6607ef51ae6-rootfs.mount: Deactivated successfully.
Jul 15 23:58:15.254078 kubelet[2737]: E0715 23:58:15.252537 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h292n" podUID="dbc77e5e-b79b-4e32-be42-a7d09e567054"
Jul 15 23:58:15.448477 kubelet[2737]: I0715 23:58:15.447894 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:58:16.457232 containerd[1533]: time="2025-07-15T23:58:16.457100300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 15 23:58:17.254299 kubelet[2737]: E0715 23:58:17.254220 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h292n" podUID="dbc77e5e-b79b-4e32-be42-a7d09e567054"
Jul 15 23:58:19.253260 kubelet[2737]: E0715 23:58:19.253198 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h292n" podUID="dbc77e5e-b79b-4e32-be42-a7d09e567054"
Jul 15 23:58:19.473271 containerd[1533]: time="2025-07-15T23:58:19.473207206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:58:19.474497 containerd[1533]: time="2025-07-15T23:58:19.474268447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 15 23:58:19.475443 containerd[1533]: time="2025-07-15T23:58:19.475402124Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:58:19.478369 containerd[1533]: time="2025-07-15T23:58:19.478312367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:58:19.480038 containerd[1533]: time="2025-07-15T23:58:19.479991388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.022646062s"
Jul 15 23:58:19.480231 containerd[1533]: time="2025-07-15T23:58:19.480165129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 15 23:58:19.484549 containerd[1533]: time="2025-07-15T23:58:19.484513269Z" level=info msg="CreateContainer within sandbox \"4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 15 23:58:19.505700 containerd[1533]: time="2025-07-15T23:58:19.504124010Z" level=info msg="Container 3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:58:19.518746 containerd[1533]: time="2025-07-15T23:58:19.518683664Z" level=info msg="CreateContainer within sandbox \"4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8\""
Jul 15 23:58:19.519784 containerd[1533]: time="2025-07-15T23:58:19.519645311Z" level=info msg="StartContainer for \"3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8\""
Jul 15 23:58:19.521978 containerd[1533]: time="2025-07-15T23:58:19.521899762Z" level=info msg="connecting to shim 3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8" address="unix:///run/containerd/s/d6964cb4372d4166673e06d2956a594ee0c16017f99d6c1e542212c47000cffb" protocol=ttrpc version=3
Jul 15 23:58:19.557319 systemd[1]: Started cri-containerd-3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8.scope - libcontainer container 3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8.
Jul 15 23:58:19.634899 containerd[1533]: time="2025-07-15T23:58:19.634571842Z" level=info msg="StartContainer for \"3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8\" returns successfully"
Jul 15 23:58:20.625917 containerd[1533]: time="2025-07-15T23:58:20.625802202Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:58:20.628797 systemd[1]: cri-containerd-3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8.scope: Deactivated successfully.
Jul 15 23:58:20.629778 systemd[1]: cri-containerd-3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8.scope: Consumed 631ms CPU time, 195.2M memory peak, 171.2M written to disk.
Jul 15 23:58:20.631450 containerd[1533]: time="2025-07-15T23:58:20.631415823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8\" id:\"3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8\" pid:3475 exited_at:{seconds:1752623900 nanos:630820949}"
Jul 15 23:58:20.631564 containerd[1533]: time="2025-07-15T23:58:20.631420907Z" level=info msg="received exit event container_id:\"3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8\" id:\"3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8\" pid:3475 exited_at:{seconds:1752623900 nanos:630820949}"
Jul 15 23:58:20.666539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c8f84a9ae49bded2c35bd4b2009b4ef28e1b89b50733c89b3cfedf5bb95b8d8-rootfs.mount: Deactivated successfully.
Jul 15 23:58:20.685367 kubelet[2737]: I0715 23:58:20.685333 2737 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 15 23:58:20.738143 kubelet[2737]: W0715 23:58:20.738106 2737 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' and this object
Jul 15 23:58:20.738433 kubelet[2737]: E0715 23:58:20.738365 2737 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' and this object" logger="UnhandledError"
Jul 15 23:58:20.747390 systemd[1]: Created slice kubepods-burstable-pod94fb9565_86bc_449c_9245_58cf9522973c.slice - libcontainer container kubepods-burstable-pod94fb9565_86bc_449c_9245_58cf9522973c.slice.
Jul 15 23:58:20.764783 systemd[1]: Created slice kubepods-burstable-podc4ca25e7_a503_4cb1_a3fa_434988d2e0d2.slice - libcontainer container kubepods-burstable-podc4ca25e7_a503_4cb1_a3fa_434988d2e0d2.slice.
Jul 15 23:58:20.782381 systemd[1]: Created slice kubepods-besteffort-pod94239258_2fce_4481_ac75_6556fcb16427.slice - libcontainer container kubepods-besteffort-pod94239258_2fce_4481_ac75_6556fcb16427.slice.
Jul 15 23:58:20.797271 kubelet[2737]: I0715 23:58:20.797218 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b690fef-7cac-4a33-931c-783015df90ea-config\") pod \"goldmane-58fd7646b9-q2l8r\" (UID: \"7b690fef-7cac-4a33-931c-783015df90ea\") " pod="calico-system/goldmane-58fd7646b9-q2l8r"
Jul 15 23:58:20.797395 kubelet[2737]: I0715 23:58:20.797291 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7d86\" (UniqueName: \"kubernetes.io/projected/7b690fef-7cac-4a33-931c-783015df90ea-kube-api-access-f7d86\") pod \"goldmane-58fd7646b9-q2l8r\" (UID: \"7b690fef-7cac-4a33-931c-783015df90ea\") " pod="calico-system/goldmane-58fd7646b9-q2l8r"
Jul 15 23:58:20.797395 kubelet[2737]: I0715 23:58:20.797358 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bcabbd91-30a5-46cc-bc3f-643aec47f561-calico-apiserver-certs\") pod \"calico-apiserver-74787ffd8f-n6dr5\" (UID: \"bcabbd91-30a5-46cc-bc3f-643aec47f561\") " pod="calico-apiserver/calico-apiserver-74787ffd8f-n6dr5"
Jul 15 23:58:20.797518 kubelet[2737]: I0715 23:58:20.797396 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9nkt\" (UniqueName: \"kubernetes.io/projected/c4ca25e7-a503-4cb1-a3fa-434988d2e0d2-kube-api-access-h9nkt\") pod \"coredns-7c65d6cfc9-q6whk\" (UID: \"c4ca25e7-a503-4cb1-a3fa-434988d2e0d2\") " pod="kube-system/coredns-7c65d6cfc9-q6whk"
Jul 15 23:58:20.797518 kubelet[2737]: I0715 23:58:20.797444 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4ca25e7-a503-4cb1-a3fa-434988d2e0d2-config-volume\") pod \"coredns-7c65d6cfc9-q6whk\" (UID: \"c4ca25e7-a503-4cb1-a3fa-434988d2e0d2\") " pod="kube-system/coredns-7c65d6cfc9-q6whk"
Jul 15 23:58:20.797518 kubelet[2737]: I0715 23:58:20.797493 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7b690fef-7cac-4a33-931c-783015df90ea-goldmane-key-pair\") pod \"goldmane-58fd7646b9-q2l8r\" (UID: \"7b690fef-7cac-4a33-931c-783015df90ea\") " pod="calico-system/goldmane-58fd7646b9-q2l8r"
Jul 15 23:58:20.797667 kubelet[2737]: I0715 23:58:20.797531 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94239258-2fce-4481-ac75-6556fcb16427-tigera-ca-bundle\") pod \"calico-kube-controllers-64b477f76d-65q9p\" (UID: \"94239258-2fce-4481-ac75-6556fcb16427\") " pod="calico-system/calico-kube-controllers-64b477f76d-65q9p"
Jul 15 23:58:20.797667 kubelet[2737]: I0715 23:58:20.797575 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djfvx\" (UniqueName: \"kubernetes.io/projected/bcabbd91-30a5-46cc-bc3f-643aec47f561-kube-api-access-djfvx\") pod \"calico-apiserver-74787ffd8f-n6dr5\" (UID: \"bcabbd91-30a5-46cc-bc3f-643aec47f561\") " pod="calico-apiserver/calico-apiserver-74787ffd8f-n6dr5"
Jul 15 23:58:20.797667 kubelet[2737]: I0715 23:58:20.797615 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94fb9565-86bc-449c-9245-58cf9522973c-config-volume\") pod \"coredns-7c65d6cfc9-6ftsn\" (UID: \"94fb9565-86bc-449c-9245-58cf9522973c\") " pod="kube-system/coredns-7c65d6cfc9-6ftsn"
Jul 15 23:58:20.797821 kubelet[2737]: I0715 23:58:20.797685 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qmzt\" (UniqueName: \"kubernetes.io/projected/94fb9565-86bc-449c-9245-58cf9522973c-kube-api-access-7qmzt\") pod \"coredns-7c65d6cfc9-6ftsn\" (UID: \"94fb9565-86bc-449c-9245-58cf9522973c\") " pod="kube-system/coredns-7c65d6cfc9-6ftsn"
Jul 15 23:58:20.797821 kubelet[2737]: I0715 23:58:20.797723 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fk6m\" (UniqueName: \"kubernetes.io/projected/a2376508-2cb7-4b74-b5df-0657f2dbb569-kube-api-access-4fk6m\") pod \"whisker-5bbf779759-7nm4f\" (UID: \"a2376508-2cb7-4b74-b5df-0657f2dbb569\") " pod="calico-system/whisker-5bbf779759-7nm4f"
Jul 15 23:58:20.797821 kubelet[2737]: I0715 23:58:20.797762 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13ab9fb8-5d89-46a3-931a-6a54b396abc5-calico-apiserver-certs\") pod \"calico-apiserver-5844bd86db-66x54\" (UID: \"13ab9fb8-5d89-46a3-931a-6a54b396abc5\") " pod="calico-apiserver/calico-apiserver-5844bd86db-66x54"
Jul 15 23:58:20.797821 kubelet[2737]: I0715 23:58:20.797792 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg6mf\" (UniqueName: \"kubernetes.io/projected/e7f09e50-9536-425d-816d-b82a0bd87ca6-kube-api-access-pg6mf\") pod \"calico-apiserver-5844bd86db-9djkg\" (UID: \"e7f09e50-9536-425d-816d-b82a0bd87ca6\") " pod="calico-apiserver/calico-apiserver-5844bd86db-9djkg"
Jul 15 23:58:20.798085 kubelet[2737]: I0715 23:58:20.797832 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2376508-2cb7-4b74-b5df-0657f2dbb569-whisker-backend-key-pair\") pod \"whisker-5bbf779759-7nm4f\" (UID: \"a2376508-2cb7-4b74-b5df-0657f2dbb569\") " pod="calico-system/whisker-5bbf779759-7nm4f"
Jul 15 23:58:20.798085 kubelet[2737]: I0715 23:58:20.797871 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b690fef-7cac-4a33-931c-783015df90ea-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-q2l8r\" (UID: \"7b690fef-7cac-4a33-931c-783015df90ea\") " pod="calico-system/goldmane-58fd7646b9-q2l8r"
Jul 15 23:58:20.798085 kubelet[2737]: I0715 23:58:20.797946 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rrmv\" (UniqueName: \"kubernetes.io/projected/94239258-2fce-4481-ac75-6556fcb16427-kube-api-access-5rrmv\") pod \"calico-kube-controllers-64b477f76d-65q9p\" (UID: \"94239258-2fce-4481-ac75-6556fcb16427\") " pod="calico-system/calico-kube-controllers-64b477f76d-65q9p"
Jul 15 23:58:20.798085 kubelet[2737]: I0715 23:58:20.798057 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn67x\" (UniqueName: \"kubernetes.io/projected/13ab9fb8-5d89-46a3-931a-6a54b396abc5-kube-api-access-fn67x\") pod \"calico-apiserver-5844bd86db-66x54\" (UID: \"13ab9fb8-5d89-46a3-931a-6a54b396abc5\") " pod="calico-apiserver/calico-apiserver-5844bd86db-66x54"
Jul 15 23:58:20.798279 kubelet[2737]: I0715 23:58:20.798097 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2376508-2cb7-4b74-b5df-0657f2dbb569-whisker-ca-bundle\") pod \"whisker-5bbf779759-7nm4f\" (UID: \"a2376508-2cb7-4b74-b5df-0657f2dbb569\") " pod="calico-system/whisker-5bbf779759-7nm4f"
Jul 15 23:58:20.798279 kubelet[2737]: I0715 23:58:20.798130 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7f09e50-9536-425d-816d-b82a0bd87ca6-calico-apiserver-certs\") pod \"calico-apiserver-5844bd86db-9djkg\" (UID: \"e7f09e50-9536-425d-816d-b82a0bd87ca6\") " pod="calico-apiserver/calico-apiserver-5844bd86db-9djkg"
Jul 15 23:58:20.820795 systemd[1]: Created slice kubepods-besteffort-pod13ab9fb8_5d89_46a3_931a_6a54b396abc5.slice - libcontainer container kubepods-besteffort-pod13ab9fb8_5d89_46a3_931a_6a54b396abc5.slice.
Jul 15 23:58:20.834215 systemd[1]: Created slice kubepods-besteffort-pode7f09e50_9536_425d_816d_b82a0bd87ca6.slice - libcontainer container kubepods-besteffort-pode7f09e50_9536_425d_816d_b82a0bd87ca6.slice.
Jul 15 23:58:20.851286 systemd[1]: Created slice kubepods-besteffort-poda2376508_2cb7_4b74_b5df_0657f2dbb569.slice - libcontainer container kubepods-besteffort-poda2376508_2cb7_4b74_b5df_0657f2dbb569.slice.
Jul 15 23:58:20.866808 systemd[1]: Created slice kubepods-besteffort-podbcabbd91_30a5_46cc_bc3f_643aec47f561.slice - libcontainer container kubepods-besteffort-podbcabbd91_30a5_46cc_bc3f_643aec47f561.slice.
Jul 15 23:58:20.877395 systemd[1]: Created slice kubepods-besteffort-pod7b690fef_7cac_4a33_931c_783015df90ea.slice - libcontainer container kubepods-besteffort-pod7b690fef_7cac_4a33_931c_783015df90ea.slice.
Jul 15 23:58:21.210940 containerd[1533]: time="2025-07-15T23:58:21.210744876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b477f76d-65q9p,Uid:94239258-2fce-4481-ac75-6556fcb16427,Namespace:calico-system,Attempt:0,}"
Jul 15 23:58:21.215808 containerd[1533]: time="2025-07-15T23:58:21.215754455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74787ffd8f-n6dr5,Uid:bcabbd91-30a5-46cc-bc3f-643aec47f561,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:58:21.228720 containerd[1533]: time="2025-07-15T23:58:21.228651293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-q2l8r,Uid:7b690fef-7cac-4a33-931c-783015df90ea,Namespace:calico-system,Attempt:0,}"
Jul 15 23:58:21.237907 containerd[1533]: time="2025-07-15T23:58:21.237823622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bbf779759-7nm4f,Uid:a2376508-2cb7-4b74-b5df-0657f2dbb569,Namespace:calico-system,Attempt:0,}"
Jul 15 23:58:21.241897 containerd[1533]: time="2025-07-15T23:58:21.241710889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844bd86db-66x54,Uid:13ab9fb8-5d89-46a3-931a-6a54b396abc5,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:58:21.242492 containerd[1533]: time="2025-07-15T23:58:21.242288402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844bd86db-9djkg,Uid:e7f09e50-9536-425d-816d-b82a0bd87ca6,Namespace:calico-apiserver,Attempt:0,}"
Jul 15 23:58:21.267810 systemd[1]: Created slice kubepods-besteffort-poddbc77e5e_b79b_4e32_be42_a7d09e567054.slice - libcontainer container kubepods-besteffort-poddbc77e5e_b79b_4e32_be42_a7d09e567054.slice.
Jul 15 23:58:21.273671 containerd[1533]: time="2025-07-15T23:58:21.273631515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h292n,Uid:dbc77e5e-b79b-4e32-be42-a7d09e567054,Namespace:calico-system,Attempt:0,}"
Jul 15 23:58:21.490971 containerd[1533]: time="2025-07-15T23:58:21.490003496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 15 23:58:21.572704 containerd[1533]: time="2025-07-15T23:58:21.572548447Z" level=error msg="Failed to destroy network for sandbox \"ba3568c9ee2296deac84288933ccac58033d4838d17e95ebf4f9c559f4da841b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.581811 containerd[1533]: time="2025-07-15T23:58:21.581728194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-q2l8r,Uid:7b690fef-7cac-4a33-931c-783015df90ea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3568c9ee2296deac84288933ccac58033d4838d17e95ebf4f9c559f4da841b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.583548 kubelet[2737]: E0715 23:58:21.582079 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3568c9ee2296deac84288933ccac58033d4838d17e95ebf4f9c559f4da841b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.583548 kubelet[2737]: E0715 23:58:21.582173 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3568c9ee2296deac84288933ccac58033d4838d17e95ebf4f9c559f4da841b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-q2l8r"
Jul 15 23:58:21.583548 kubelet[2737]: E0715 23:58:21.582207 2737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3568c9ee2296deac84288933ccac58033d4838d17e95ebf4f9c559f4da841b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-q2l8r"
Jul 15 23:58:21.583788 kubelet[2737]: E0715 23:58:21.582284 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-q2l8r_calico-system(7b690fef-7cac-4a33-931c-783015df90ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-q2l8r_calico-system(7b690fef-7cac-4a33-931c-783015df90ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba3568c9ee2296deac84288933ccac58033d4838d17e95ebf4f9c559f4da841b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-q2l8r" podUID="7b690fef-7cac-4a33-931c-783015df90ea"
Jul 15 23:58:21.595130 containerd[1533]: time="2025-07-15T23:58:21.595069838Z" level=error msg="Failed to destroy network for sandbox \"08c952b07c3284bd52b05d97e243f810aeeb2653509f16c8e6a2ae5bf145486b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.601657 containerd[1533]: time="2025-07-15T23:58:21.601555879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b477f76d-65q9p,Uid:94239258-2fce-4481-ac75-6556fcb16427,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c952b07c3284bd52b05d97e243f810aeeb2653509f16c8e6a2ae5bf145486b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.602649 kubelet[2737]: E0715 23:58:21.601797 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c952b07c3284bd52b05d97e243f810aeeb2653509f16c8e6a2ae5bf145486b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.602649 kubelet[2737]: E0715 23:58:21.601860 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c952b07c3284bd52b05d97e243f810aeeb2653509f16c8e6a2ae5bf145486b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64b477f76d-65q9p"
Jul 15 23:58:21.602649 kubelet[2737]: E0715 23:58:21.601907 2737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08c952b07c3284bd52b05d97e243f810aeeb2653509f16c8e6a2ae5bf145486b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64b477f76d-65q9p"
Jul 15 23:58:21.604743 kubelet[2737]: E0715 23:58:21.602055 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64b477f76d-65q9p_calico-system(94239258-2fce-4481-ac75-6556fcb16427)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64b477f76d-65q9p_calico-system(94239258-2fce-4481-ac75-6556fcb16427)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08c952b07c3284bd52b05d97e243f810aeeb2653509f16c8e6a2ae5bf145486b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64b477f76d-65q9p" podUID="94239258-2fce-4481-ac75-6556fcb16427"
Jul 15 23:58:21.624394 containerd[1533]: time="2025-07-15T23:58:21.624339287Z" level=error msg="Failed to destroy network for sandbox \"9196afc90e9e2ebf41a7684f9df6980fd1772b4da5e43a294c1179b11dd0a4c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.629404 containerd[1533]: time="2025-07-15T23:58:21.628903369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74787ffd8f-n6dr5,Uid:bcabbd91-30a5-46cc-bc3f-643aec47f561,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9196afc90e9e2ebf41a7684f9df6980fd1772b4da5e43a294c1179b11dd0a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.630453 kubelet[2737]: E0715 23:58:21.629239 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9196afc90e9e2ebf41a7684f9df6980fd1772b4da5e43a294c1179b11dd0a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 15 23:58:21.630453 kubelet[2737]: E0715 23:58:21.629309 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9196afc90e9e2ebf41a7684f9df6980fd1772b4da5e43a294c1179b11dd0a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74787ffd8f-n6dr5"
Jul 15 23:58:21.630453 kubelet[2737]: E0715 23:58:21.629340 2737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9196afc90e9e2ebf41a7684f9df6980fd1772b4da5e43a294c1179b11dd0a4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74787ffd8f-n6dr5"
Jul 15 23:58:21.630654 kubelet[2737]: E0715 23:58:21.629405 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74787ffd8f-n6dr5_calico-apiserver(bcabbd91-30a5-46cc-bc3f-643aec47f561)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74787ffd8f-n6dr5_calico-apiserver(bcabbd91-30a5-46cc-bc3f-643aec47f561)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9196afc90e9e2ebf41a7684f9df6980fd1772b4da5e43a294c1179b11dd0a4c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74787ffd8f-n6dr5" podUID="bcabbd91-30a5-46cc-bc3f-643aec47f561" Jul 15 23:58:21.656226 containerd[1533]: time="2025-07-15T23:58:21.655843763Z" level=error msg="Failed to destroy network for sandbox \"6ae85ad0d57aa90ccb927736a42a92a9b26a5d8e8e7e0886cd2f381660057b10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.657075 containerd[1533]: time="2025-07-15T23:58:21.657029092Z" level=error msg="Failed to destroy network for sandbox \"f2a6f0cf5729a9e9b51683b609835512c934b9f09c8b4867b406c3aa541f739b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.657428 containerd[1533]: time="2025-07-15T23:58:21.657388361Z" level=error msg="Failed to destroy network for sandbox \"54f0d75be70e853d7c84eaa5815c4fc83024f0e3149bf1db631e63e934299145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.657860 containerd[1533]: time="2025-07-15T23:58:21.657669560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844bd86db-66x54,Uid:13ab9fb8-5d89-46a3-931a-6a54b396abc5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae85ad0d57aa90ccb927736a42a92a9b26a5d8e8e7e0886cd2f381660057b10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.658271 kubelet[2737]: E0715 
23:58:21.658195 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae85ad0d57aa90ccb927736a42a92a9b26a5d8e8e7e0886cd2f381660057b10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.658378 kubelet[2737]: E0715 23:58:21.658292 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae85ad0d57aa90ccb927736a42a92a9b26a5d8e8e7e0886cd2f381660057b10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5844bd86db-66x54" Jul 15 23:58:21.658378 kubelet[2737]: E0715 23:58:21.658324 2737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae85ad0d57aa90ccb927736a42a92a9b26a5d8e8e7e0886cd2f381660057b10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5844bd86db-66x54" Jul 15 23:58:21.658487 kubelet[2737]: E0715 23:58:21.658387 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5844bd86db-66x54_calico-apiserver(13ab9fb8-5d89-46a3-931a-6a54b396abc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5844bd86db-66x54_calico-apiserver(13ab9fb8-5d89-46a3-931a-6a54b396abc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ae85ad0d57aa90ccb927736a42a92a9b26a5d8e8e7e0886cd2f381660057b10\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5844bd86db-66x54" podUID="13ab9fb8-5d89-46a3-931a-6a54b396abc5" Jul 15 23:58:21.661939 containerd[1533]: time="2025-07-15T23:58:21.660142075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bbf779759-7nm4f,Uid:a2376508-2cb7-4b74-b5df-0657f2dbb569,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a6f0cf5729a9e9b51683b609835512c934b9f09c8b4867b406c3aa541f739b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.664058 kubelet[2737]: E0715 23:58:21.661178 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a6f0cf5729a9e9b51683b609835512c934b9f09c8b4867b406c3aa541f739b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.664058 kubelet[2737]: E0715 23:58:21.661271 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a6f0cf5729a9e9b51683b609835512c934b9f09c8b4867b406c3aa541f739b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bbf779759-7nm4f" Jul 15 23:58:21.664058 kubelet[2737]: E0715 23:58:21.661303 2737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f2a6f0cf5729a9e9b51683b609835512c934b9f09c8b4867b406c3aa541f739b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bbf779759-7nm4f" Jul 15 23:58:21.664517 containerd[1533]: time="2025-07-15T23:58:21.662961957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844bd86db-9djkg,Uid:e7f09e50-9536-425d-816d-b82a0bd87ca6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f0d75be70e853d7c84eaa5815c4fc83024f0e3149bf1db631e63e934299145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.667158 kubelet[2737]: E0715 23:58:21.661708 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bbf779759-7nm4f_calico-system(a2376508-2cb7-4b74-b5df-0657f2dbb569)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bbf779759-7nm4f_calico-system(a2376508-2cb7-4b74-b5df-0657f2dbb569)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2a6f0cf5729a9e9b51683b609835512c934b9f09c8b4867b406c3aa541f739b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bbf779759-7nm4f" podUID="a2376508-2cb7-4b74-b5df-0657f2dbb569" Jul 15 23:58:21.667158 kubelet[2737]: E0715 23:58:21.663995 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f0d75be70e853d7c84eaa5815c4fc83024f0e3149bf1db631e63e934299145\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.667158 kubelet[2737]: E0715 23:58:21.664270 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f0d75be70e853d7c84eaa5815c4fc83024f0e3149bf1db631e63e934299145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5844bd86db-9djkg" Jul 15 23:58:21.667497 kubelet[2737]: E0715 23:58:21.664299 2737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f0d75be70e853d7c84eaa5815c4fc83024f0e3149bf1db631e63e934299145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5844bd86db-9djkg" Jul 15 23:58:21.667497 kubelet[2737]: E0715 23:58:21.664353 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5844bd86db-9djkg_calico-apiserver(e7f09e50-9536-425d-816d-b82a0bd87ca6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5844bd86db-9djkg_calico-apiserver(e7f09e50-9536-425d-816d-b82a0bd87ca6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54f0d75be70e853d7c84eaa5815c4fc83024f0e3149bf1db631e63e934299145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5844bd86db-9djkg" podUID="e7f09e50-9536-425d-816d-b82a0bd87ca6" Jul 15 
23:58:21.681894 containerd[1533]: time="2025-07-15T23:58:21.680019787Z" level=error msg="Failed to destroy network for sandbox \"6eb4c869fa14b5d699799e6eb3a614cc5f7fdd84414d0c1daf158e08c4620a87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.684896 containerd[1533]: time="2025-07-15T23:58:21.683494125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h292n,Uid:dbc77e5e-b79b-4e32-be42-a7d09e567054,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb4c869fa14b5d699799e6eb3a614cc5f7fdd84414d0c1daf158e08c4620a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.685379 kubelet[2737]: E0715 23:58:21.685323 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb4c869fa14b5d699799e6eb3a614cc5f7fdd84414d0c1daf158e08c4620a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:21.685813 kubelet[2737]: E0715 23:58:21.685390 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb4c869fa14b5d699799e6eb3a614cc5f7fdd84414d0c1daf158e08c4620a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h292n" Jul 15 23:58:21.685813 kubelet[2737]: E0715 23:58:21.685418 2737 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb4c869fa14b5d699799e6eb3a614cc5f7fdd84414d0c1daf158e08c4620a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h292n" Jul 15 23:58:21.685813 kubelet[2737]: E0715 23:58:21.685468 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h292n_calico-system(dbc77e5e-b79b-4e32-be42-a7d09e567054)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h292n_calico-system(dbc77e5e-b79b-4e32-be42-a7d09e567054)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6eb4c869fa14b5d699799e6eb3a614cc5f7fdd84414d0c1daf158e08c4620a87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h292n" podUID="dbc77e5e-b79b-4e32-be42-a7d09e567054" Jul 15 23:58:21.698156 systemd[1]: run-netns-cni\x2d3d41e82a\x2d1eb4\x2d8789\x2da0f4\x2d02f9105c3dea.mount: Deactivated successfully. 
Jul 15 23:58:21.962523 containerd[1533]: time="2025-07-15T23:58:21.962140776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6ftsn,Uid:94fb9565-86bc-449c-9245-58cf9522973c,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:21.978337 containerd[1533]: time="2025-07-15T23:58:21.978254499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q6whk,Uid:c4ca25e7-a503-4cb1-a3fa-434988d2e0d2,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:22.078017 containerd[1533]: time="2025-07-15T23:58:22.077956872Z" level=error msg="Failed to destroy network for sandbox \"7017d6d7555b2770346061859b750ae1eaa356a71aa2cdad50db65dbdce2ae1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:22.086464 containerd[1533]: time="2025-07-15T23:58:22.086015444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6ftsn,Uid:94fb9565-86bc-449c-9245-58cf9522973c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7017d6d7555b2770346061859b750ae1eaa356a71aa2cdad50db65dbdce2ae1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:22.086453 systemd[1]: run-netns-cni\x2d9bbc11b1\x2d0533\x2dd754\x2d7415\x2ddafbb4f179ef.mount: Deactivated successfully. 
Jul 15 23:58:22.088248 kubelet[2737]: E0715 23:58:22.087947 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7017d6d7555b2770346061859b750ae1eaa356a71aa2cdad50db65dbdce2ae1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:22.088248 kubelet[2737]: E0715 23:58:22.088051 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7017d6d7555b2770346061859b750ae1eaa356a71aa2cdad50db65dbdce2ae1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6ftsn" Jul 15 23:58:22.088248 kubelet[2737]: E0715 23:58:22.088099 2737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7017d6d7555b2770346061859b750ae1eaa356a71aa2cdad50db65dbdce2ae1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6ftsn" Jul 15 23:58:22.089137 kubelet[2737]: E0715 23:58:22.088168 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6ftsn_kube-system(94fb9565-86bc-449c-9245-58cf9522973c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6ftsn_kube-system(94fb9565-86bc-449c-9245-58cf9522973c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7017d6d7555b2770346061859b750ae1eaa356a71aa2cdad50db65dbdce2ae1e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6ftsn" podUID="94fb9565-86bc-449c-9245-58cf9522973c" Jul 15 23:58:22.104369 containerd[1533]: time="2025-07-15T23:58:22.104308310Z" level=error msg="Failed to destroy network for sandbox \"fb3fa9dfd04e06df6adcf34c9aa7e4a00ecefab58a4da54670d38d70bd12a740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:22.106222 containerd[1533]: time="2025-07-15T23:58:22.105998560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q6whk,Uid:c4ca25e7-a503-4cb1-a3fa-434988d2e0d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3fa9dfd04e06df6adcf34c9aa7e4a00ecefab58a4da54670d38d70bd12a740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:22.106487 kubelet[2737]: E0715 23:58:22.106425 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3fa9dfd04e06df6adcf34c9aa7e4a00ecefab58a4da54670d38d70bd12a740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:58:22.106571 kubelet[2737]: E0715 23:58:22.106492 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3fa9dfd04e06df6adcf34c9aa7e4a00ecefab58a4da54670d38d70bd12a740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q6whk" Jul 15 23:58:22.106571 kubelet[2737]: E0715 23:58:22.106535 2737 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3fa9dfd04e06df6adcf34c9aa7e4a00ecefab58a4da54670d38d70bd12a740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q6whk" Jul 15 23:58:22.106740 kubelet[2737]: E0715 23:58:22.106624 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q6whk_kube-system(c4ca25e7-a503-4cb1-a3fa-434988d2e0d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q6whk_kube-system(c4ca25e7-a503-4cb1-a3fa-434988d2e0d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb3fa9dfd04e06df6adcf34c9aa7e4a00ecefab58a4da54670d38d70bd12a740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q6whk" podUID="c4ca25e7-a503-4cb1-a3fa-434988d2e0d2" Jul 15 23:58:22.668791 systemd[1]: run-netns-cni\x2db86dc730\x2dbf87\x2d7211\x2d0afe\x2d52886b3aa0b9.mount: Deactivated successfully. Jul 15 23:58:25.760981 kubelet[2737]: I0715 23:58:25.760903 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:28.108221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526620675.mount: Deactivated successfully. 
Jul 15 23:58:28.144892 containerd[1533]: time="2025-07-15T23:58:28.144823759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:28.146137 containerd[1533]: time="2025-07-15T23:58:28.146093983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 23:58:28.147435 containerd[1533]: time="2025-07-15T23:58:28.147393944Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:28.154906 containerd[1533]: time="2025-07-15T23:58:28.153911744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:28.154906 containerd[1533]: time="2025-07-15T23:58:28.154705360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.663575301s" Jul 15 23:58:28.154906 containerd[1533]: time="2025-07-15T23:58:28.154740946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 23:58:28.165784 containerd[1533]: time="2025-07-15T23:58:28.165746069Z" level=info msg="CreateContainer within sandbox \"4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 23:58:28.189124 containerd[1533]: time="2025-07-15T23:58:28.189085759Z" level=info msg="Container 
3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:28.202014 containerd[1533]: time="2025-07-15T23:58:28.201966734Z" level=info msg="CreateContainer within sandbox \"4d6c031182cc0e61bab85dbcd6ffd0624587e6273202c6be34a6fb9ed2bcf10a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa\"" Jul 15 23:58:28.202905 containerd[1533]: time="2025-07-15T23:58:28.202793320Z" level=info msg="StartContainer for \"3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa\"" Jul 15 23:58:28.204981 containerd[1533]: time="2025-07-15T23:58:28.204920985Z" level=info msg="connecting to shim 3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa" address="unix:///run/containerd/s/d6964cb4372d4166673e06d2956a594ee0c16017f99d6c1e542212c47000cffb" protocol=ttrpc version=3 Jul 15 23:58:28.234105 systemd[1]: Started cri-containerd-3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa.scope - libcontainer container 3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa. Jul 15 23:58:28.303521 containerd[1533]: time="2025-07-15T23:58:28.303412104Z" level=info msg="StartContainer for \"3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa\" returns successfully" Jul 15 23:58:28.423671 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 23:58:28.423981 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 15 23:58:28.553519 kubelet[2737]: I0715 23:58:28.553309 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9w7x7" podStartSLOduration=1.212772931 podStartE2EDuration="17.553147052s" podCreationTimestamp="2025-07-15 23:58:11 +0000 UTC" firstStartedPulling="2025-07-15 23:58:11.815584701 +0000 UTC m=+22.737231582" lastFinishedPulling="2025-07-15 23:58:28.155958807 +0000 UTC m=+39.077605703" observedRunningTime="2025-07-15 23:58:28.548702665 +0000 UTC m=+39.470349566" watchObservedRunningTime="2025-07-15 23:58:28.553147052 +0000 UTC m=+39.474793954" Jul 15 23:58:28.673982 kubelet[2737]: I0715 23:58:28.672439 2737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fk6m\" (UniqueName: \"kubernetes.io/projected/a2376508-2cb7-4b74-b5df-0657f2dbb569-kube-api-access-4fk6m\") pod \"a2376508-2cb7-4b74-b5df-0657f2dbb569\" (UID: \"a2376508-2cb7-4b74-b5df-0657f2dbb569\") " Jul 15 23:58:28.673982 kubelet[2737]: I0715 23:58:28.672498 2737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2376508-2cb7-4b74-b5df-0657f2dbb569-whisker-ca-bundle\") pod \"a2376508-2cb7-4b74-b5df-0657f2dbb569\" (UID: \"a2376508-2cb7-4b74-b5df-0657f2dbb569\") " Jul 15 23:58:28.673982 kubelet[2737]: I0715 23:58:28.672540 2737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2376508-2cb7-4b74-b5df-0657f2dbb569-whisker-backend-key-pair\") pod \"a2376508-2cb7-4b74-b5df-0657f2dbb569\" (UID: \"a2376508-2cb7-4b74-b5df-0657f2dbb569\") " Jul 15 23:58:28.677633 kubelet[2737]: I0715 23:58:28.677581 2737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2376508-2cb7-4b74-b5df-0657f2dbb569-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"a2376508-2cb7-4b74-b5df-0657f2dbb569" (UID: "a2376508-2cb7-4b74-b5df-0657f2dbb569"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 23:58:28.682986 kubelet[2737]: I0715 23:58:28.682913 2737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2376508-2cb7-4b74-b5df-0657f2dbb569-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a2376508-2cb7-4b74-b5df-0657f2dbb569" (UID: "a2376508-2cb7-4b74-b5df-0657f2dbb569"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 23:58:28.684093 kubelet[2737]: I0715 23:58:28.684056 2737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2376508-2cb7-4b74-b5df-0657f2dbb569-kube-api-access-4fk6m" (OuterVolumeSpecName: "kube-api-access-4fk6m") pod "a2376508-2cb7-4b74-b5df-0657f2dbb569" (UID: "a2376508-2cb7-4b74-b5df-0657f2dbb569"). InnerVolumeSpecName "kube-api-access-4fk6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 23:58:28.773702 kubelet[2737]: I0715 23:58:28.773648 2737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fk6m\" (UniqueName: \"kubernetes.io/projected/a2376508-2cb7-4b74-b5df-0657f2dbb569-kube-api-access-4fk6m\") on node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" DevicePath \"\"" Jul 15 23:58:28.773702 kubelet[2737]: I0715 23:58:28.773701 2737 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2376508-2cb7-4b74-b5df-0657f2dbb569-whisker-ca-bundle\") on node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" DevicePath \"\"" Jul 15 23:58:28.774100 kubelet[2737]: I0715 23:58:28.773718 2737 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2376508-2cb7-4b74-b5df-0657f2dbb569-whisker-backend-key-pair\") on node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" DevicePath \"\"" Jul 15 23:58:28.872911 containerd[1533]: time="2025-07-15T23:58:28.872681303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa\" id:\"1c21eab09e263846ac2d92d7659235d25daf042c9bfa5378624b2f5e96df1dff\" pid:3829 exit_status:1 exited_at:{seconds:1752623908 nanos:872231159}" Jul 15 23:58:29.108737 systemd[1]: var-lib-kubelet-pods-a2376508\x2d2cb7\x2d4b74\x2db5df\x2d0657f2dbb569-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4fk6m.mount: Deactivated successfully. Jul 15 23:58:29.109161 systemd[1]: var-lib-kubelet-pods-a2376508\x2d2cb7\x2d4b74\x2db5df\x2d0657f2dbb569-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 15 23:58:29.261462 systemd[1]: Removed slice kubepods-besteffort-poda2376508_2cb7_4b74_b5df_0657f2dbb569.slice - libcontainer container kubepods-besteffort-poda2376508_2cb7_4b74_b5df_0657f2dbb569.slice. Jul 15 23:58:29.621369 systemd[1]: Created slice kubepods-besteffort-pod6ec9a3ac_0fca_4cf7_b5cc_bb9b71b5e9b4.slice - libcontainer container kubepods-besteffort-pod6ec9a3ac_0fca_4cf7_b5cc_bb9b71b5e9b4.slice. Jul 15 23:58:29.681000 kubelet[2737]: I0715 23:58:29.680952 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8z2q\" (UniqueName: \"kubernetes.io/projected/6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4-kube-api-access-q8z2q\") pod \"whisker-7bd575cf97-w5rgt\" (UID: \"6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4\") " pod="calico-system/whisker-7bd575cf97-w5rgt" Jul 15 23:58:29.681000 kubelet[2737]: I0715 23:58:29.681005 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4-whisker-backend-key-pair\") pod \"whisker-7bd575cf97-w5rgt\" (UID: \"6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4\") " pod="calico-system/whisker-7bd575cf97-w5rgt" Jul 15 23:58:29.682368 kubelet[2737]: I0715 23:58:29.681043 2737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4-whisker-ca-bundle\") pod \"whisker-7bd575cf97-w5rgt\" (UID: \"6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4\") " pod="calico-system/whisker-7bd575cf97-w5rgt" Jul 15 23:58:29.718992 containerd[1533]: time="2025-07-15T23:58:29.718934247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa\" id:\"26a7849e4e156f4a7f27531a6c4fa0760c553423b556b3b4ba3e870c7307090a\" pid:3873 exit_status:1 exited_at:{seconds:1752623909 
nanos:718243503}" Jul 15 23:58:29.931349 containerd[1533]: time="2025-07-15T23:58:29.931207189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd575cf97-w5rgt,Uid:6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:30.077933 systemd-networkd[1440]: cali25e3abc9277: Link UP Jul 15 23:58:30.078266 systemd-networkd[1440]: cali25e3abc9277: Gained carrier Jul 15 23:58:30.111644 containerd[1533]: 2025-07-15 23:58:29.969 [INFO][3888] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:58:30.111644 containerd[1533]: 2025-07-15 23:58:29.985 [INFO][3888] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0 whisker-7bd575cf97- calico-system 6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4 893 0 2025-07-15 23:58:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bd575cf97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce whisker-7bd575cf97-w5rgt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali25e3abc9277 [] [] }} ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Namespace="calico-system" Pod="whisker-7bd575cf97-w5rgt" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-" Jul 15 23:58:30.111644 containerd[1533]: 2025-07-15 23:58:29.985 [INFO][3888] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Namespace="calico-system" Pod="whisker-7bd575cf97-w5rgt" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" Jul 15 23:58:30.111644 
containerd[1533]: 2025-07-15 23:58:30.019 [INFO][3901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" HandleID="k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" Jul 15 23:58:30.113091 containerd[1533]: 2025-07-15 23:58:30.020 [INFO][3901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" HandleID="k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"whisker-7bd575cf97-w5rgt", "timestamp":"2025-07-15 23:58:30.019793888 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:30.113091 containerd[1533]: 2025-07-15 23:58:30.020 [INFO][3901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:30.113091 containerd[1533]: 2025-07-15 23:58:30.020 [INFO][3901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:30.113091 containerd[1533]: 2025-07-15 23:58:30.020 [INFO][3901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:30.113091 containerd[1533]: 2025-07-15 23:58:30.031 [INFO][3901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113091 containerd[1533]: 2025-07-15 23:58:30.037 [INFO][3901] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113091 containerd[1533]: 2025-07-15 23:58:30.041 [INFO][3901] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113091 containerd[1533]: 2025-07-15 23:58:30.044 [INFO][3901] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113510 containerd[1533]: 2025-07-15 23:58:30.046 [INFO][3901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113510 containerd[1533]: 2025-07-15 23:58:30.046 [INFO][3901] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113510 containerd[1533]: 2025-07-15 23:58:30.048 [INFO][3901] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b Jul 15 23:58:30.113510 containerd[1533]: 2025-07-15 23:58:30.054 [INFO][3901] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113510 containerd[1533]: 2025-07-15 23:58:30.061 [INFO][3901] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.1/26] block=192.168.49.0/26 handle="k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113510 containerd[1533]: 2025-07-15 23:58:30.061 [INFO][3901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.1/26] handle="k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:30.113510 containerd[1533]: 2025-07-15 23:58:30.061 [INFO][3901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:30.113510 containerd[1533]: 2025-07-15 23:58:30.061 [INFO][3901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.1/26] IPv6=[] ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" HandleID="k8s-pod-network.eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" Jul 15 23:58:30.114071 containerd[1533]: 2025-07-15 23:58:30.065 [INFO][3888] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Namespace="calico-system" Pod="whisker-7bd575cf97-w5rgt" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0", GenerateName:"whisker-7bd575cf97-", 
Namespace:"calico-system", SelfLink:"", UID:"6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd575cf97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"", Pod:"whisker-7bd575cf97-w5rgt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali25e3abc9277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:30.114197 containerd[1533]: 2025-07-15 23:58:30.065 [INFO][3888] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.1/32] ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Namespace="calico-system" Pod="whisker-7bd575cf97-w5rgt" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" Jul 15 23:58:30.114197 containerd[1533]: 2025-07-15 23:58:30.065 [INFO][3888] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25e3abc9277 ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Namespace="calico-system" Pod="whisker-7bd575cf97-w5rgt" 
WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" Jul 15 23:58:30.114197 containerd[1533]: 2025-07-15 23:58:30.077 [INFO][3888] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Namespace="calico-system" Pod="whisker-7bd575cf97-w5rgt" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" Jul 15 23:58:30.114349 containerd[1533]: 2025-07-15 23:58:30.077 [INFO][3888] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Namespace="calico-system" Pod="whisker-7bd575cf97-w5rgt" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0", GenerateName:"whisker-7bd575cf97-", Namespace:"calico-system", SelfLink:"", UID:"6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd575cf97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", 
ContainerID:"eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b", Pod:"whisker-7bd575cf97-w5rgt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali25e3abc9277", MAC:"2a:7a:ac:a9:22:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:30.115180 containerd[1533]: 2025-07-15 23:58:30.102 [INFO][3888] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" Namespace="calico-system" Pod="whisker-7bd575cf97-w5rgt" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-whisker--7bd575cf97--w5rgt-eth0" Jul 15 23:58:30.165177 containerd[1533]: time="2025-07-15T23:58:30.165119307Z" level=info msg="connecting to shim eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b" address="unix:///run/containerd/s/07abf4b6c5a9732beaf3d06155fe840d782c5d4a054bbe7ec734670f2ec557a5" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:30.234565 systemd[1]: Started cri-containerd-eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b.scope - libcontainer container eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b. 
Jul 15 23:58:30.380122 containerd[1533]: time="2025-07-15T23:58:30.380065781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd575cf97-w5rgt,Uid:6ec9a3ac-0fca-4cf7-b5cc-bb9b71b5e9b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b\"" Jul 15 23:58:30.383761 containerd[1533]: time="2025-07-15T23:58:30.383695049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 23:58:31.123275 systemd-networkd[1440]: vxlan.calico: Link UP Jul 15 23:58:31.123292 systemd-networkd[1440]: vxlan.calico: Gained carrier Jul 15 23:58:31.262409 kubelet[2737]: I0715 23:58:31.261858 2737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2376508-2cb7-4b74-b5df-0657f2dbb569" path="/var/lib/kubelet/pods/a2376508-2cb7-4b74-b5df-0657f2dbb569/volumes" Jul 15 23:58:31.422910 containerd[1533]: time="2025-07-15T23:58:31.422428727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:31.424448 containerd[1533]: time="2025-07-15T23:58:31.424384627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 23:58:31.426151 containerd[1533]: time="2025-07-15T23:58:31.426091186Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:31.432257 containerd[1533]: time="2025-07-15T23:58:31.432219596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:31.433782 containerd[1533]: time="2025-07-15T23:58:31.433705893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id 
\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.049775992s" Jul 15 23:58:31.433782 containerd[1533]: time="2025-07-15T23:58:31.433752189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 23:58:31.440605 containerd[1533]: time="2025-07-15T23:58:31.440566286Z" level=info msg="CreateContainer within sandbox \"eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 23:58:31.448830 containerd[1533]: time="2025-07-15T23:58:31.448788899Z" level=info msg="Container febaccf24de7d4094aa4c19e8f1e5b3ffdbe7e0706424d81585643376ba580e2: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:31.464102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948245180.mount: Deactivated successfully. 
Jul 15 23:58:31.468987 containerd[1533]: time="2025-07-15T23:58:31.468935827Z" level=info msg="CreateContainer within sandbox \"eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"febaccf24de7d4094aa4c19e8f1e5b3ffdbe7e0706424d81585643376ba580e2\"" Jul 15 23:58:31.469811 containerd[1533]: time="2025-07-15T23:58:31.469779938Z" level=info msg="StartContainer for \"febaccf24de7d4094aa4c19e8f1e5b3ffdbe7e0706424d81585643376ba580e2\"" Jul 15 23:58:31.471504 containerd[1533]: time="2025-07-15T23:58:31.471449222Z" level=info msg="connecting to shim febaccf24de7d4094aa4c19e8f1e5b3ffdbe7e0706424d81585643376ba580e2" address="unix:///run/containerd/s/07abf4b6c5a9732beaf3d06155fe840d782c5d4a054bbe7ec734670f2ec557a5" protocol=ttrpc version=3 Jul 15 23:58:31.513126 systemd[1]: Started cri-containerd-febaccf24de7d4094aa4c19e8f1e5b3ffdbe7e0706424d81585643376ba580e2.scope - libcontainer container febaccf24de7d4094aa4c19e8f1e5b3ffdbe7e0706424d81585643376ba580e2. 
Jul 15 23:58:31.622262 containerd[1533]: time="2025-07-15T23:58:31.622208529Z" level=info msg="StartContainer for \"febaccf24de7d4094aa4c19e8f1e5b3ffdbe7e0706424d81585643376ba580e2\" returns successfully" Jul 15 23:58:31.623773 containerd[1533]: time="2025-07-15T23:58:31.623738813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 23:58:31.862248 systemd-networkd[1440]: cali25e3abc9277: Gained IPv6LL Jul 15 23:58:32.439963 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL Jul 15 23:58:33.254533 containerd[1533]: time="2025-07-15T23:58:33.254476969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844bd86db-66x54,Uid:13ab9fb8-5d89-46a3-931a-6a54b396abc5,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:58:33.258099 containerd[1533]: time="2025-07-15T23:58:33.257254181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6ftsn,Uid:94fb9565-86bc-449c-9245-58cf9522973c,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:33.258099 containerd[1533]: time="2025-07-15T23:58:33.257414378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-q2l8r,Uid:7b690fef-7cac-4a33-931c-783015df90ea,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:33.258099 containerd[1533]: time="2025-07-15T23:58:33.257443453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h292n,Uid:dbc77e5e-b79b-4e32-be42-a7d09e567054,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:33.645644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848346227.mount: Deactivated successfully. 
Jul 15 23:58:33.685900 containerd[1533]: time="2025-07-15T23:58:33.685641871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:33.687836 containerd[1533]: time="2025-07-15T23:58:33.687799291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 23:58:33.690003 containerd[1533]: time="2025-07-15T23:58:33.689967017Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:33.695988 containerd[1533]: time="2025-07-15T23:58:33.695914425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:33.701058 containerd[1533]: time="2025-07-15T23:58:33.701025559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.077095669s" Jul 15 23:58:33.701286 containerd[1533]: time="2025-07-15T23:58:33.701226542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 23:58:33.709012 systemd-networkd[1440]: cali1967065c9e9: Link UP Jul 15 23:58:33.712239 systemd-networkd[1440]: cali1967065c9e9: Gained carrier Jul 15 23:58:33.713028 containerd[1533]: time="2025-07-15T23:58:33.712629999Z" level=info msg="CreateContainer within sandbox 
\"eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 23:58:33.727199 containerd[1533]: time="2025-07-15T23:58:33.725495098Z" level=info msg="Container 6f2c034e4317fb98d8532be641c7c753c11de0d7011bcc838c770a3bae5c707a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:33.742474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151911482.mount: Deactivated successfully. Jul 15 23:58:33.753935 containerd[1533]: 2025-07-15 23:58:33.455 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0 coredns-7c65d6cfc9- kube-system 94fb9565-86bc-449c-9245-58cf9522973c 808 0 2025-07-15 23:57:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce coredns-7c65d6cfc9-6ftsn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1967065c9e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6ftsn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-" Jul 15 23:58:33.753935 containerd[1533]: 2025-07-15 23:58:33.457 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6ftsn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" Jul 15 23:58:33.753935 containerd[1533]: 2025-07-15 23:58:33.610 [INFO][4245] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" HandleID="k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" Jul 15 23:58:33.754372 containerd[1533]: 2025-07-15 23:58:33.611 [INFO][4245] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" HandleID="k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001034b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"coredns-7c65d6cfc9-6ftsn", "timestamp":"2025-07-15 23:58:33.609644022 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:33.754372 containerd[1533]: 2025-07-15 23:58:33.611 [INFO][4245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:33.754372 containerd[1533]: 2025-07-15 23:58:33.612 [INFO][4245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:33.754372 containerd[1533]: 2025-07-15 23:58:33.612 [INFO][4245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:33.754372 containerd[1533]: 2025-07-15 23:58:33.629 [INFO][4245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754372 containerd[1533]: 2025-07-15 23:58:33.645 [INFO][4245] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754372 containerd[1533]: 2025-07-15 23:58:33.657 [INFO][4245] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754372 containerd[1533]: 2025-07-15 23:58:33.660 [INFO][4245] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754752 containerd[1533]: 2025-07-15 23:58:33.663 [INFO][4245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754752 containerd[1533]: 2025-07-15 23:58:33.664 [INFO][4245] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754752 containerd[1533]: 2025-07-15 23:58:33.666 [INFO][4245] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3 Jul 15 23:58:33.754752 containerd[1533]: 2025-07-15 23:58:33.682 [INFO][4245] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754752 containerd[1533]: 2025-07-15 23:58:33.693 [INFO][4245] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.2/26] block=192.168.49.0/26 handle="k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754752 containerd[1533]: 2025-07-15 23:58:33.693 [INFO][4245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.2/26] handle="k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.754752 containerd[1533]: 2025-07-15 23:58:33.693 [INFO][4245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:33.754752 containerd[1533]: 2025-07-15 23:58:33.693 [INFO][4245] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.2/26] IPv6=[] ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" HandleID="k8s-pod-network.ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" Jul 15 23:58:33.755157 containerd[1533]: 2025-07-15 23:58:33.697 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6ftsn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0", GenerateName:"coredns-7c65d6cfc9-", 
Namespace:"kube-system", SelfLink:"", UID:"94fb9565-86bc-449c-9245-58cf9522973c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"", Pod:"coredns-7c65d6cfc9-6ftsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1967065c9e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:33.755157 containerd[1533]: 2025-07-15 23:58:33.698 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.2/32] ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6ftsn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" Jul 15 23:58:33.755157 
containerd[1533]: 2025-07-15 23:58:33.698 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1967065c9e9 ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6ftsn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" Jul 15 23:58:33.755157 containerd[1533]: 2025-07-15 23:58:33.709 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6ftsn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" Jul 15 23:58:33.755157 containerd[1533]: 2025-07-15 23:58:33.710 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6ftsn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"94fb9565-86bc-449c-9245-58cf9522973c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3", Pod:"coredns-7c65d6cfc9-6ftsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1967065c9e9", MAC:"ba:21:1e:e8:60:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:33.755157 containerd[1533]: 2025-07-15 23:58:33.744 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6ftsn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--6ftsn-eth0" Jul 15 23:58:33.774569 containerd[1533]: time="2025-07-15T23:58:33.774281131Z" level=info msg="CreateContainer within sandbox \"eac2570c100d115267b90c7b8b63fd26fdf59c7f9a643816542d747e34efd28b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6f2c034e4317fb98d8532be641c7c753c11de0d7011bcc838c770a3bae5c707a\"" Jul 15 23:58:33.778943 containerd[1533]: time="2025-07-15T23:58:33.778633074Z" level=info msg="StartContainer for 
\"6f2c034e4317fb98d8532be641c7c753c11de0d7011bcc838c770a3bae5c707a\"" Jul 15 23:58:33.793671 containerd[1533]: time="2025-07-15T23:58:33.793620637Z" level=info msg="connecting to shim 6f2c034e4317fb98d8532be641c7c753c11de0d7011bcc838c770a3bae5c707a" address="unix:///run/containerd/s/07abf4b6c5a9732beaf3d06155fe840d782c5d4a054bbe7ec734670f2ec557a5" protocol=ttrpc version=3 Jul 15 23:58:33.853378 systemd-networkd[1440]: cali3c0bb10aa8b: Link UP Jul 15 23:58:33.857369 systemd-networkd[1440]: cali3c0bb10aa8b: Gained carrier Jul 15 23:58:33.894231 containerd[1533]: time="2025-07-15T23:58:33.894087118Z" level=info msg="connecting to shim ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3" address="unix:///run/containerd/s/2821482c86eae121881de3293fc705c9cc7f12fa3106111e2ee221c5643707b2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:33.901227 systemd[1]: Started cri-containerd-6f2c034e4317fb98d8532be641c7c753c11de0d7011bcc838c770a3bae5c707a.scope - libcontainer container 6f2c034e4317fb98d8532be641c7c753c11de0d7011bcc838c770a3bae5c707a. 
Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.492 [INFO][4195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0 calico-apiserver-5844bd86db- calico-apiserver 13ab9fb8-5d89-46a3-931a-6a54b396abc5 815 0 2025-07-15 23:58:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5844bd86db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce calico-apiserver-5844bd86db-66x54 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c0bb10aa8b [] [] }} ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-66x54" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.493 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-66x54" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.658 [INFO][4250] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" 
Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.659 [INFO][4250] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"calico-apiserver-5844bd86db-66x54", "timestamp":"2025-07-15 23:58:33.658969325 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.659 [INFO][4250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.693 [INFO][4250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.693 [INFO][4250] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.745 [INFO][4250] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.760 [INFO][4250] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.770 [INFO][4250] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.774 [INFO][4250] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.778 [INFO][4250] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.779 [INFO][4250] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.783 [INFO][4250] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.792 [INFO][4250] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.814 [INFO][4250] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.3/26] block=192.168.49.0/26 handle="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.814 [INFO][4250] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.3/26] handle="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.814 [INFO][4250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:33.917369 containerd[1533]: 2025-07-15 23:58:33.815 [INFO][4250] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.3/26] IPv6=[] ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" Jul 15 23:58:33.919428 containerd[1533]: 2025-07-15 23:58:33.827 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-66x54" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0", GenerateName:"calico-apiserver-5844bd86db-", Namespace:"calico-apiserver", SelfLink:"", UID:"13ab9fb8-5d89-46a3-931a-6a54b396abc5", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5844bd86db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"", Pod:"calico-apiserver-5844bd86db-66x54", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c0bb10aa8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:33.919428 containerd[1533]: 2025-07-15 23:58:33.831 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.3/32] ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-66x54" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" Jul 15 23:58:33.919428 containerd[1533]: 2025-07-15 23:58:33.841 [INFO][4195] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali3c0bb10aa8b ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-66x54" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" Jul 15 23:58:33.919428 containerd[1533]: 2025-07-15 23:58:33.867 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-66x54" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" Jul 15 23:58:33.919428 containerd[1533]: 2025-07-15 23:58:33.869 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-66x54" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0", GenerateName:"calico-apiserver-5844bd86db-", Namespace:"calico-apiserver", SelfLink:"", UID:"13ab9fb8-5d89-46a3-931a-6a54b396abc5", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5844bd86db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce", Pod:"calico-apiserver-5844bd86db-66x54", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c0bb10aa8b", MAC:"d2:b9:e8:f6:a7:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:33.919428 containerd[1533]: 2025-07-15 23:58:33.908 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-66x54" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0" Jul 15 23:58:33.959214 systemd[1]: Started cri-containerd-ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3.scope - libcontainer container ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3. 
Jul 15 23:58:34.018054 systemd-networkd[1440]: cali4ec107fc8a6: Link UP Jul 15 23:58:34.023119 systemd-networkd[1440]: cali4ec107fc8a6: Gained carrier Jul 15 23:58:34.055114 containerd[1533]: time="2025-07-15T23:58:34.054747604Z" level=info msg="connecting to shim b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" address="unix:///run/containerd/s/868856893080b9eb670c1b06c2a49f7c148fe546522957bdee44b51ef9bd05e9" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.504 [INFO][4207] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0 goldmane-58fd7646b9- calico-system 7b690fef-7cac-4a33-931c-783015df90ea 817 0 2025-07-15 23:58:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce goldmane-58fd7646b9-q2l8r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4ec107fc8a6 [] [] }} ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Namespace="calico-system" Pod="goldmane-58fd7646b9-q2l8r" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.505 [INFO][4207] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Namespace="calico-system" Pod="goldmane-58fd7646b9-q2l8r" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.656 [INFO][4252] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" HandleID="k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.657 [INFO][4252] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" HandleID="k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000371160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"goldmane-58fd7646b9-q2l8r", "timestamp":"2025-07-15 23:58:33.656863601 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.658 [INFO][4252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.815 [INFO][4252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.816 [INFO][4252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.871 [INFO][4252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.897 [INFO][4252] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.919 [INFO][4252] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.925 [INFO][4252] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.931 [INFO][4252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.932 [INFO][4252] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.935 [INFO][4252] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839 Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.947 [INFO][4252] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.976 [INFO][4252] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.4/26] block=192.168.49.0/26 handle="k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.976 [INFO][4252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.4/26] handle="k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.978 [INFO][4252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:34.085120 containerd[1533]: 2025-07-15 23:58:33.979 [INFO][4252] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.4/26] IPv6=[] ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" HandleID="k8s-pod-network.8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" Jul 15 23:58:34.088520 containerd[1533]: 2025-07-15 23:58:33.997 [INFO][4207] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Namespace="calico-system" Pod="goldmane-58fd7646b9-q2l8r" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0", 
GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7b690fef-7cac-4a33-931c-783015df90ea", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"", Pod:"goldmane-58fd7646b9-q2l8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ec107fc8a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:34.088520 containerd[1533]: 2025-07-15 23:58:34.000 [INFO][4207] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.4/32] ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Namespace="calico-system" Pod="goldmane-58fd7646b9-q2l8r" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" Jul 15 23:58:34.088520 containerd[1533]: 2025-07-15 23:58:34.001 [INFO][4207] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ec107fc8a6 ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Namespace="calico-system" Pod="goldmane-58fd7646b9-q2l8r" 
WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" Jul 15 23:58:34.088520 containerd[1533]: 2025-07-15 23:58:34.028 [INFO][4207] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Namespace="calico-system" Pod="goldmane-58fd7646b9-q2l8r" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" Jul 15 23:58:34.088520 containerd[1533]: 2025-07-15 23:58:34.033 [INFO][4207] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Namespace="calico-system" Pod="goldmane-58fd7646b9-q2l8r" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7b690fef-7cac-4a33-931c-783015df90ea", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", 
ContainerID:"8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839", Pod:"goldmane-58fd7646b9-q2l8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4ec107fc8a6", MAC:"0a:4b:5e:7b:4b:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:34.088520 containerd[1533]: 2025-07-15 23:58:34.078 [INFO][4207] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" Namespace="calico-system" Pod="goldmane-58fd7646b9-q2l8r" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-goldmane--58fd7646b9--q2l8r-eth0" Jul 15 23:58:34.136108 systemd[1]: Started cri-containerd-b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce.scope - libcontainer container b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce. 
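The IPAM sequence that repeats in the log above (acquire the host-wide IPAM lock, confirm the node's affinity to the 192.168.49.0/26 block, claim the next free address under a per-container handle, then write the block to persist the claim) can be sketched as a minimal in-memory allocator. The block CIDR, the assigned addresses (.2, .3, .4), and the `k8s-pod-network.<containerID>` handle naming are taken from the log; the allocator class itself is a hypothetical illustration, not Calico's actual implementation:

```python
# Minimal, illustrative sketch of the IPAM flow visible in the log:
# lock -> load affine block -> claim next free address -> persist claim.
# Block CIDR and handle naming follow the log; the allocator is hypothetical.
import ipaddress
import threading


class BlockIPAM:
    def __init__(self, cidr: str, reserved: int = 2):
        self.block = ipaddress.ip_network(cidr)   # e.g. 192.168.49.0/26 (64 addresses)
        self.lock = threading.Lock()              # stands in for the host-wide IPAM lock
        self.assignments: dict[str, str] = {}     # handle -> claimed address
        self.next_offset = reserved               # skip addresses assigned earlier

    def auto_assign(self, handle: str) -> str:
        # "About to acquire host-wide IPAM lock." ... "Released host-wide IPAM lock."
        with self.lock:
            if self.next_offset >= self.block.num_addresses:
                raise RuntimeError(f"block {self.block} exhausted")
            addr = self.block.network_address + self.next_offset
            self.next_offset += 1
            # "Writing block in order to claim IPs" -> record the claim under the handle
            self.assignments[handle] = f"{addr}/32"
            return self.assignments[handle]


ipam = BlockIPAM("192.168.49.0/26", reserved=2)
# Handles abbreviate the container IDs seen in the log.
print(ipam.auto_assign("k8s-pod-network.ff8c5b5f"))  # coredns pod          -> 192.168.49.2/32
print(ipam.auto_assign("k8s-pod-network.b2f22e60"))  # calico-apiserver pod -> 192.168.49.3/32
print(ipam.auto_assign("k8s-pod-network.8683bcb1"))  # goldmane pod         -> 192.168.49.4/32
```

Because claims happen under one lock, concurrent CNI invocations (like the interleaved [4250]/[4252]/[4255] workers in the log) serialize on the block and can never hand out the same address twice.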
Jul 15 23:58:34.153100 systemd-networkd[1440]: calid250f2a9505: Link UP Jul 15 23:58:34.158953 systemd-networkd[1440]: calid250f2a9505: Gained carrier Jul 15 23:58:34.178906 containerd[1533]: time="2025-07-15T23:58:34.177505453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6ftsn,Uid:94fb9565-86bc-449c-9245-58cf9522973c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3\"" Jul 15 23:58:34.201899 containerd[1533]: time="2025-07-15T23:58:34.201797085Z" level=info msg="CreateContainer within sandbox \"ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:33.505 [INFO][4211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0 csi-node-driver- calico-system dbc77e5e-b79b-4e32-be42-a7d09e567054 707 0 2025-07-15 23:58:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce csi-node-driver-h292n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid250f2a9505 [] [] }} ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Namespace="calico-system" Pod="csi-node-driver-h292n" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:33.505 [INFO][4211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Namespace="calico-system" Pod="csi-node-driver-h292n" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:33.684 [INFO][4255] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" HandleID="k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:33.686 [INFO][4255] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" HandleID="k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038fd90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"csi-node-driver-h292n", "timestamp":"2025-07-15 23:58:33.684800259 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:33.687 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:33.979 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:33.979 [INFO][4255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.007 [INFO][4255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.026 [INFO][4255] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.052 [INFO][4255] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.062 [INFO][4255] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.070 [INFO][4255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.070 [INFO][4255] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.073 [INFO][4255] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.089 [INFO][4255] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.115 [INFO][4255] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.5/26] block=192.168.49.0/26 handle="k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.116 [INFO][4255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.5/26] handle="k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.116 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:34.203994 containerd[1533]: 2025-07-15 23:58:34.116 [INFO][4255] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.5/26] IPv6=[] ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" HandleID="k8s-pod-network.8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" Jul 15 23:58:34.204990 containerd[1533]: 2025-07-15 23:58:34.144 [INFO][4211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Namespace="calico-system" Pod="csi-node-driver-h292n" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"dbc77e5e-b79b-4e32-be42-a7d09e567054", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"", Pod:"csi-node-driver-h292n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid250f2a9505", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:34.204990 containerd[1533]: 2025-07-15 23:58:34.147 [INFO][4211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.5/32] ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Namespace="calico-system" Pod="csi-node-driver-h292n" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" Jul 15 23:58:34.204990 containerd[1533]: 2025-07-15 23:58:34.147 [INFO][4211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid250f2a9505 ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Namespace="calico-system" Pod="csi-node-driver-h292n" 
WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" Jul 15 23:58:34.204990 containerd[1533]: 2025-07-15 23:58:34.152 [INFO][4211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Namespace="calico-system" Pod="csi-node-driver-h292n" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" Jul 15 23:58:34.204990 containerd[1533]: 2025-07-15 23:58:34.157 [INFO][4211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Namespace="calico-system" Pod="csi-node-driver-h292n" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbc77e5e-b79b-4e32-be42-a7d09e567054", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b", Pod:"csi-node-driver-h292n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid250f2a9505", MAC:"52:ce:f2:18:8c:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:34.204990 containerd[1533]: 2025-07-15 23:58:34.185 [INFO][4211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" Namespace="calico-system" Pod="csi-node-driver-h292n" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-csi--node--driver--h292n-eth0" Jul 15 23:58:34.217610 containerd[1533]: time="2025-07-15T23:58:34.217574312Z" level=info msg="Container cdad6e3dabe8af6188ad4a106b36dd6cc0677caaf7965ed3a49a6657d43789b3: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:34.228902 containerd[1533]: time="2025-07-15T23:58:34.227704461Z" level=info msg="connecting to shim 8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839" address="unix:///run/containerd/s/77279aa98ffec10c3b31a1c1b4a5536ccd6c26fb42b211eea2625c0e4fc89b43" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:34.242925 containerd[1533]: time="2025-07-15T23:58:34.241809582Z" level=info msg="CreateContainer within sandbox \"ff8c5b5f6dfcfcd593597c5797d549b5206b3af0e6d60315adb0596afb02dff3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdad6e3dabe8af6188ad4a106b36dd6cc0677caaf7965ed3a49a6657d43789b3\"" Jul 15 23:58:34.246505 containerd[1533]: time="2025-07-15T23:58:34.246465571Z" level=info msg="StartContainer for 
\"cdad6e3dabe8af6188ad4a106b36dd6cc0677caaf7965ed3a49a6657d43789b3\"" Jul 15 23:58:34.247691 containerd[1533]: time="2025-07-15T23:58:34.247644625Z" level=info msg="connecting to shim cdad6e3dabe8af6188ad4a106b36dd6cc0677caaf7965ed3a49a6657d43789b3" address="unix:///run/containerd/s/2821482c86eae121881de3293fc705c9cc7f12fa3106111e2ee221c5643707b2" protocol=ttrpc version=3 Jul 15 23:58:34.255501 containerd[1533]: time="2025-07-15T23:58:34.255453859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b477f76d-65q9p,Uid:94239258-2fce-4481-ac75-6556fcb16427,Namespace:calico-system,Attempt:0,}" Jul 15 23:58:34.256354 containerd[1533]: time="2025-07-15T23:58:34.256311223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844bd86db-9djkg,Uid:e7f09e50-9536-425d-816d-b82a0bd87ca6,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:58:34.318283 systemd[1]: Started cri-containerd-8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839.scope - libcontainer container 8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839. Jul 15 23:58:34.364447 systemd[1]: Started cri-containerd-cdad6e3dabe8af6188ad4a106b36dd6cc0677caaf7965ed3a49a6657d43789b3.scope - libcontainer container cdad6e3dabe8af6188ad4a106b36dd6cc0677caaf7965ed3a49a6657d43789b3. 
Jul 15 23:58:34.418244 containerd[1533]: time="2025-07-15T23:58:34.417779527Z" level=info msg="connecting to shim 8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b" address="unix:///run/containerd/s/587e28db19b346b5bbd39a59486128b6b74f0599fc7ae4bc2d19d87c272f67af" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:34.447587 containerd[1533]: time="2025-07-15T23:58:34.447537049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844bd86db-66x54,Uid:13ab9fb8-5d89-46a3-931a-6a54b396abc5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\"" Jul 15 23:58:34.453199 containerd[1533]: time="2025-07-15T23:58:34.453159860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:58:34.520272 containerd[1533]: time="2025-07-15T23:58:34.519151313Z" level=info msg="StartContainer for \"6f2c034e4317fb98d8532be641c7c753c11de0d7011bcc838c770a3bae5c707a\" returns successfully" Jul 15 23:58:34.527395 systemd[1]: Started cri-containerd-8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b.scope - libcontainer container 8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b. 
Jul 15 23:58:34.612110 containerd[1533]: time="2025-07-15T23:58:34.612058517Z" level=info msg="StartContainer for \"cdad6e3dabe8af6188ad4a106b36dd6cc0677caaf7965ed3a49a6657d43789b3\" returns successfully" Jul 15 23:58:34.616065 kubelet[2737]: I0715 23:58:34.615981 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bd575cf97-w5rgt" podStartSLOduration=2.295610239 podStartE2EDuration="5.615957836s" podCreationTimestamp="2025-07-15 23:58:29 +0000 UTC" firstStartedPulling="2025-07-15 23:58:30.383258572 +0000 UTC m=+41.304905462" lastFinishedPulling="2025-07-15 23:58:33.703606142 +0000 UTC m=+44.625253059" observedRunningTime="2025-07-15 23:58:34.614118716 +0000 UTC m=+45.535765618" watchObservedRunningTime="2025-07-15 23:58:34.615957836 +0000 UTC m=+45.537604739" Jul 15 23:58:34.637576 containerd[1533]: time="2025-07-15T23:58:34.637461408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-q2l8r,Uid:7b690fef-7cac-4a33-931c-783015df90ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839\"" Jul 15 23:58:34.853041 containerd[1533]: time="2025-07-15T23:58:34.852973828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h292n,Uid:dbc77e5e-b79b-4e32-be42-a7d09e567054,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b\"" Jul 15 23:58:34.913004 systemd-networkd[1440]: calic7df43f4535: Link UP Jul 15 23:58:34.913374 systemd-networkd[1440]: calic7df43f4535: Gained carrier Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.674 [INFO][4442] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0 calico-kube-controllers-64b477f76d- calico-system 
94239258-2fce-4481-ac75-6556fcb16427 821 0 2025-07-15 23:58:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64b477f76d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce calico-kube-controllers-64b477f76d-65q9p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7df43f4535 [] [] }} ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Namespace="calico-system" Pod="calico-kube-controllers-64b477f76d-65q9p" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.675 [INFO][4442] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Namespace="calico-system" Pod="calico-kube-controllers-64b477f76d-65q9p" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.826 [INFO][4573] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" HandleID="k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.827 [INFO][4573] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" 
HandleID="k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"calico-kube-controllers-64b477f76d-65q9p", "timestamp":"2025-07-15 23:58:34.826725277 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.828 [INFO][4573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.828 [INFO][4573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.828 [INFO][4573] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.844 [INFO][4573] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.862 [INFO][4573] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.873 [INFO][4573] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.880 [INFO][4573] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.884 [INFO][4573] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.884 [INFO][4573] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.886 [INFO][4573] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.891 [INFO][4573] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.900 [INFO][4573] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.6/26] block=192.168.49.0/26 handle="k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.901 [INFO][4573] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.6/26] handle="k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.901 [INFO][4573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:34.941217 containerd[1533]: 2025-07-15 23:58:34.901 [INFO][4573] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.6/26] IPv6=[] ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" HandleID="k8s-pod-network.f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" Jul 15 23:58:34.942701 containerd[1533]: 2025-07-15 23:58:34.905 [INFO][4442] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Namespace="calico-system" Pod="calico-kube-controllers-64b477f76d-65q9p" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0", GenerateName:"calico-kube-controllers-64b477f76d-", Namespace:"calico-system", SelfLink:"", UID:"94239258-2fce-4481-ac75-6556fcb16427", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b477f76d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"", Pod:"calico-kube-controllers-64b477f76d-65q9p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7df43f4535", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:34.942701 containerd[1533]: 2025-07-15 23:58:34.906 [INFO][4442] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.6/32] ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Namespace="calico-system" Pod="calico-kube-controllers-64b477f76d-65q9p" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" Jul 15 23:58:34.942701 containerd[1533]: 2025-07-15 23:58:34.906 
[INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7df43f4535 ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Namespace="calico-system" Pod="calico-kube-controllers-64b477f76d-65q9p" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" Jul 15 23:58:34.942701 containerd[1533]: 2025-07-15 23:58:34.910 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Namespace="calico-system" Pod="calico-kube-controllers-64b477f76d-65q9p" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" Jul 15 23:58:34.942701 containerd[1533]: 2025-07-15 23:58:34.911 [INFO][4442] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Namespace="calico-system" Pod="calico-kube-controllers-64b477f76d-65q9p" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0", GenerateName:"calico-kube-controllers-64b477f76d-", Namespace:"calico-system", SelfLink:"", UID:"94239258-2fce-4481-ac75-6556fcb16427", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b477f76d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e", Pod:"calico-kube-controllers-64b477f76d-65q9p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7df43f4535", MAC:"12:4b:43:44:29:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:34.942701 containerd[1533]: 2025-07-15 23:58:34.926 [INFO][4442] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" Namespace="calico-system" Pod="calico-kube-controllers-64b477f76d-65q9p" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--kube--controllers--64b477f76d--65q9p-eth0" Jul 15 23:58:34.993116 containerd[1533]: time="2025-07-15T23:58:34.993017148Z" level=info msg="connecting to shim f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e" address="unix:///run/containerd/s/25405190b62be58ce906d7e9da53f544e755bd36d69e88f4ea078c6b3eaeaea3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:35.078646 systemd[1]: Started cri-containerd-f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e.scope - libcontainer container f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e. 
Jul 15 23:58:35.092144 systemd-networkd[1440]: calie129728f4ab: Link UP Jul 15 23:58:35.095003 systemd-networkd[1440]: calie129728f4ab: Gained carrier Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.683 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0 calico-apiserver-5844bd86db- calico-apiserver e7f09e50-9536-425d-816d-b82a0bd87ca6 818 0 2025-07-15 23:58:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5844bd86db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce calico-apiserver-5844bd86db-9djkg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie129728f4ab [] [] }} ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-9djkg" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.683 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-9djkg" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.868 [INFO][4578] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" 
HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.874 [INFO][4578] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"calico-apiserver-5844bd86db-9djkg", "timestamp":"2025-07-15 23:58:34.868823659 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.875 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.901 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.901 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.948 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.962 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.993 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:34.999 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:35.010 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:35.010 [INFO][4578] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:35.022 [INFO][4578] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6 Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:35.051 [INFO][4578] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:35.073 [INFO][4578] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.7/26] block=192.168.49.0/26 handle="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:35.073 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.7/26] handle="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:35.073 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:35.121155 containerd[1533]: 2025-07-15 23:58:35.073 [INFO][4578] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.7/26] IPv6=[] ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:58:35.122273 containerd[1533]: 2025-07-15 23:58:35.081 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-9djkg" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0", GenerateName:"calico-apiserver-5844bd86db-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7f09e50-9536-425d-816d-b82a0bd87ca6", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5844bd86db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"", Pod:"calico-apiserver-5844bd86db-9djkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie129728f4ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:35.122273 containerd[1533]: 2025-07-15 23:58:35.081 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.7/32] ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-9djkg" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:58:35.122273 containerd[1533]: 2025-07-15 23:58:35.081 [INFO][4448] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to calie129728f4ab ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-9djkg" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:58:35.122273 containerd[1533]: 2025-07-15 23:58:35.095 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-9djkg" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:58:35.122273 containerd[1533]: 2025-07-15 23:58:35.098 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-9djkg" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0", GenerateName:"calico-apiserver-5844bd86db-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7f09e50-9536-425d-816d-b82a0bd87ca6", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5844bd86db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6", Pod:"calico-apiserver-5844bd86db-9djkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie129728f4ab", MAC:"92:13:cb:bd:b4:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:35.122273 containerd[1533]: 2025-07-15 23:58:35.113 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Namespace="calico-apiserver" Pod="calico-apiserver-5844bd86db-9djkg" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:58:35.176008 containerd[1533]: time="2025-07-15T23:58:35.175841918Z" level=info msg="connecting to shim 96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" address="unix:///run/containerd/s/afc325730df4d565fa3949333021a593109371785137d0451882e89aff1d9ae0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:35.190104 systemd-networkd[1440]: calid250f2a9505: Gained IPv6LL Jul 15 23:58:35.190504 systemd-networkd[1440]: cali1967065c9e9: Gained IPv6LL Jul 15 23:58:35.190772 systemd-networkd[1440]: cali4ec107fc8a6: Gained IPv6LL Jul 15 23:58:35.252294 containerd[1533]: time="2025-07-15T23:58:35.252133545Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-64b477f76d-65q9p,Uid:94239258-2fce-4481-ac75-6556fcb16427,Namespace:calico-system,Attempt:0,} returns sandbox id \"f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e\"" Jul 15 23:58:35.253417 containerd[1533]: time="2025-07-15T23:58:35.253355467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74787ffd8f-n6dr5,Uid:bcabbd91-30a5-46cc-bc3f-643aec47f561,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:58:35.255361 systemd[1]: Started cri-containerd-96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6.scope - libcontainer container 96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6. Jul 15 23:58:35.382166 systemd-networkd[1440]: cali3c0bb10aa8b: Gained IPv6LL Jul 15 23:58:35.408590 containerd[1533]: time="2025-07-15T23:58:35.408439362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5844bd86db-9djkg,Uid:e7f09e50-9536-425d-816d-b82a0bd87ca6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\"" Jul 15 23:58:35.475079 systemd-networkd[1440]: cali468d375460c: Link UP Jul 15 23:58:35.476362 systemd-networkd[1440]: cali468d375460c: Gained carrier Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.324 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0 calico-apiserver-74787ffd8f- calico-apiserver bcabbd91-30a5-46cc-bc3f-643aec47f561 820 0 2025-07-15 23:58:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74787ffd8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce calico-apiserver-74787ffd8f-n6dr5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali468d375460c [] [] }} ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Namespace="calico-apiserver" Pod="calico-apiserver-74787ffd8f-n6dr5" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.326 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Namespace="calico-apiserver" Pod="calico-apiserver-74787ffd8f-n6dr5" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.401 [INFO][4716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" HandleID="k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.402 [INFO][4716] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" HandleID="k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"calico-apiserver-74787ffd8f-n6dr5", 
"timestamp":"2025-07-15 23:58:35.401856785 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.403 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.403 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.403 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.416 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.426 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.433 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.436 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.441 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.441 [INFO][4716] ipam/ipam.go 1220: 
Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.443 [INFO][4716] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8 Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.450 [INFO][4716] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.461 [INFO][4716] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.8/26] block=192.168.49.0/26 handle="k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.462 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.8/26] handle="k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.462 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:58:35.503743 containerd[1533]: 2025-07-15 23:58:35.463 [INFO][4716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.8/26] IPv6=[] ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" HandleID="k8s-pod-network.6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" Jul 15 23:58:35.505517 containerd[1533]: 2025-07-15 23:58:35.467 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Namespace="calico-apiserver" Pod="calico-apiserver-74787ffd8f-n6dr5" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0", GenerateName:"calico-apiserver-74787ffd8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcabbd91-30a5-46cc-bc3f-643aec47f561", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74787ffd8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", 
ContainerID:"", Pod:"calico-apiserver-74787ffd8f-n6dr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468d375460c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:35.505517 containerd[1533]: 2025-07-15 23:58:35.467 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.8/32] ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Namespace="calico-apiserver" Pod="calico-apiserver-74787ffd8f-n6dr5" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" Jul 15 23:58:35.505517 containerd[1533]: 2025-07-15 23:58:35.467 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali468d375460c ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Namespace="calico-apiserver" Pod="calico-apiserver-74787ffd8f-n6dr5" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" Jul 15 23:58:35.505517 containerd[1533]: 2025-07-15 23:58:35.476 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Namespace="calico-apiserver" Pod="calico-apiserver-74787ffd8f-n6dr5" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" Jul 15 23:58:35.505517 containerd[1533]: 2025-07-15 23:58:35.477 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Namespace="calico-apiserver" 
Pod="calico-apiserver-74787ffd8f-n6dr5" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0", GenerateName:"calico-apiserver-74787ffd8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcabbd91-30a5-46cc-bc3f-643aec47f561", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 58, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74787ffd8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8", Pod:"calico-apiserver-74787ffd8f-n6dr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468d375460c", MAC:"1a:b2:ce:8e:3e:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:35.505517 containerd[1533]: 2025-07-15 23:58:35.494 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" Namespace="calico-apiserver" Pod="calico-apiserver-74787ffd8f-n6dr5" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--74787ffd8f--n6dr5-eth0" Jul 15 23:58:35.563406 containerd[1533]: time="2025-07-15T23:58:35.563348230Z" level=info msg="connecting to shim 6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8" address="unix:///run/containerd/s/04580e2064eb769f29814b3f5f9e5b95bd15fa5e79d569a49b46326820e7ca0f" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:35.660002 kubelet[2737]: I0715 23:58:35.657747 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6ftsn" podStartSLOduration=41.657722369 podStartE2EDuration="41.657722369s" podCreationTimestamp="2025-07-15 23:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:58:35.623722938 +0000 UTC m=+46.545369842" watchObservedRunningTime="2025-07-15 23:58:35.657722369 +0000 UTC m=+46.579369271" Jul 15 23:58:35.678222 systemd[1]: Started cri-containerd-6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8.scope - libcontainer container 6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8. 
Jul 15 23:58:35.816148 containerd[1533]: time="2025-07-15T23:58:35.816105226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74787ffd8f-n6dr5,Uid:bcabbd91-30a5-46cc-bc3f-643aec47f561,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8\"" Jul 15 23:58:36.253197 containerd[1533]: time="2025-07-15T23:58:36.253151589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q6whk,Uid:c4ca25e7-a503-4cb1-a3fa-434988d2e0d2,Namespace:kube-system,Attempt:0,}" Jul 15 23:58:36.342209 systemd-networkd[1440]: calie129728f4ab: Gained IPv6LL Jul 15 23:58:36.343166 systemd-networkd[1440]: calic7df43f4535: Gained IPv6LL Jul 15 23:58:36.473193 systemd-networkd[1440]: cali7c02870beea: Link UP Jul 15 23:58:36.474203 systemd-networkd[1440]: cali7c02870beea: Gained carrier Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.333 [INFO][4793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0 coredns-7c65d6cfc9- kube-system c4ca25e7-a503-4cb1-a3fa-434988d2e0d2 816 0 2025-07-15 23:57:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce coredns-7c65d6cfc9-q6whk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c02870beea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q6whk" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.333 
[INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q6whk" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.392 [INFO][4806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" HandleID="k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.394 [INFO][4806] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" HandleID="k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103af0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", "pod":"coredns-7c65d6cfc9-q6whk", "timestamp":"2025-07-15 23:58:36.392678188 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.394 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.394 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.394 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce' Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.411 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.419 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.427 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.431 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.435 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.436 [INFO][4806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.439 [INFO][4806] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64 Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.446 [INFO][4806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.459 [INFO][4806] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.49.9/26] block=192.168.49.0/26 handle="k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.459 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.9/26] handle="k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" host="ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce" Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.459 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:58:36.503725 containerd[1533]: 2025-07-15 23:58:36.459 [INFO][4806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.9/26] IPv6=[] ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" HandleID="k8s-pod-network.03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" Jul 15 23:58:36.505825 containerd[1533]: 2025-07-15 23:58:36.467 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q6whk" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0", GenerateName:"coredns-7c65d6cfc9-", 
Namespace:"kube-system", SelfLink:"", UID:"c4ca25e7-a503-4cb1-a3fa-434988d2e0d2", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"", Pod:"coredns-7c65d6cfc9-q6whk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c02870beea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:36.505825 containerd[1533]: 2025-07-15 23:58:36.468 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.9/32] ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q6whk" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" Jul 15 23:58:36.505825 
containerd[1533]: 2025-07-15 23:58:36.468 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c02870beea ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q6whk" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" Jul 15 23:58:36.505825 containerd[1533]: 2025-07-15 23:58:36.472 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q6whk" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" Jul 15 23:58:36.505825 containerd[1533]: 2025-07-15 23:58:36.475 [INFO][4793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q6whk" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c4ca25e7-a503-4cb1-a3fa-434988d2e0d2", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce", ContainerID:"03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64", Pod:"coredns-7c65d6cfc9-q6whk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c02870beea", MAC:"4a:2a:62:82:e8:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:58:36.505825 containerd[1533]: 2025-07-15 23:58:36.499 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q6whk" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-coredns--7c65d6cfc9--q6whk-eth0" Jul 15 23:58:36.580125 containerd[1533]: time="2025-07-15T23:58:36.580075685Z" level=info msg="connecting to shim 03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64" address="unix:///run/containerd/s/c4df10eda6d7b5d847ab136e911161936d7f95e87c338fcfb319e77bb12ed1d2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:58:36.665087 systemd[1]: Started cri-containerd-03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64.scope - 
libcontainer container 03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64. Jul 15 23:58:36.856832 containerd[1533]: time="2025-07-15T23:58:36.856495236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q6whk,Uid:c4ca25e7-a503-4cb1-a3fa-434988d2e0d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64\"" Jul 15 23:58:36.869372 containerd[1533]: time="2025-07-15T23:58:36.869325949Z" level=info msg="CreateContainer within sandbox \"03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:58:36.897216 containerd[1533]: time="2025-07-15T23:58:36.896983760Z" level=info msg="Container 38e1b8847639a6f6ceadd8309c598b37150a9e3d077fa3773e6fe2941bc4987e: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:36.918502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798578846.mount: Deactivated successfully. 
Jul 15 23:58:36.924194 containerd[1533]: time="2025-07-15T23:58:36.924152985Z" level=info msg="CreateContainer within sandbox \"03ea386a4de4e0e3d7caf0d72a2fd98310a85953c20be9932bf57f9325611d64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38e1b8847639a6f6ceadd8309c598b37150a9e3d077fa3773e6fe2941bc4987e\"" Jul 15 23:58:36.925987 containerd[1533]: time="2025-07-15T23:58:36.925810489Z" level=info msg="StartContainer for \"38e1b8847639a6f6ceadd8309c598b37150a9e3d077fa3773e6fe2941bc4987e\"" Jul 15 23:58:36.927615 containerd[1533]: time="2025-07-15T23:58:36.927111678Z" level=info msg="connecting to shim 38e1b8847639a6f6ceadd8309c598b37150a9e3d077fa3773e6fe2941bc4987e" address="unix:///run/containerd/s/c4df10eda6d7b5d847ab136e911161936d7f95e87c338fcfb319e77bb12ed1d2" protocol=ttrpc version=3 Jul 15 23:58:36.973086 systemd[1]: Started cri-containerd-38e1b8847639a6f6ceadd8309c598b37150a9e3d077fa3773e6fe2941bc4987e.scope - libcontainer container 38e1b8847639a6f6ceadd8309c598b37150a9e3d077fa3773e6fe2941bc4987e. 
Jul 15 23:58:37.046078 systemd-networkd[1440]: cali468d375460c: Gained IPv6LL Jul 15 23:58:37.049009 containerd[1533]: time="2025-07-15T23:58:37.048966875Z" level=info msg="StartContainer for \"38e1b8847639a6f6ceadd8309c598b37150a9e3d077fa3773e6fe2941bc4987e\" returns successfully" Jul 15 23:58:37.558189 systemd-networkd[1440]: cali7c02870beea: Gained IPv6LL Jul 15 23:58:37.668913 kubelet[2737]: I0715 23:58:37.663836 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q6whk" podStartSLOduration=43.6638096 podStartE2EDuration="43.6638096s" podCreationTimestamp="2025-07-15 23:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:58:37.662986323 +0000 UTC m=+48.584633226" watchObservedRunningTime="2025-07-15 23:58:37.6638096 +0000 UTC m=+48.585456502" Jul 15 23:58:37.842947 containerd[1533]: time="2025-07-15T23:58:37.840595887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:37.845867 containerd[1533]: time="2025-07-15T23:58:37.843083864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 15 23:58:37.849895 containerd[1533]: time="2025-07-15T23:58:37.849035405Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:37.857824 containerd[1533]: time="2025-07-15T23:58:37.857785851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:37.859949 containerd[1533]: time="2025-07-15T23:58:37.859871320Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.406663821s" Jul 15 23:58:37.860134 containerd[1533]: time="2025-07-15T23:58:37.860108923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:58:37.861660 containerd[1533]: time="2025-07-15T23:58:37.861311591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 23:58:37.864036 containerd[1533]: time="2025-07-15T23:58:37.864004899Z" level=info msg="CreateContainer within sandbox \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:58:37.878191 containerd[1533]: time="2025-07-15T23:58:37.878142281Z" level=info msg="Container a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:37.895601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount640402023.mount: Deactivated successfully. 
Jul 15 23:58:37.906035 containerd[1533]: time="2025-07-15T23:58:37.905968265Z" level=info msg="CreateContainer within sandbox \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\"" Jul 15 23:58:37.907014 containerd[1533]: time="2025-07-15T23:58:37.906983285Z" level=info msg="StartContainer for \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\"" Jul 15 23:58:37.908703 containerd[1533]: time="2025-07-15T23:58:37.908669846Z" level=info msg="connecting to shim a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06" address="unix:///run/containerd/s/868856893080b9eb670c1b06c2a49f7c148fe546522957bdee44b51ef9bd05e9" protocol=ttrpc version=3 Jul 15 23:58:37.953118 systemd[1]: Started cri-containerd-a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06.scope - libcontainer container a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06. 
Jul 15 23:58:38.075001 containerd[1533]: time="2025-07-15T23:58:38.074903675Z" level=info msg="StartContainer for \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" returns successfully" Jul 15 23:58:39.566405 ntpd[1489]: Listen normally on 8 vxlan.calico 192.168.49.0:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 8 vxlan.calico 192.168.49.0:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 9 cali25e3abc9277 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 10 vxlan.calico [fe80::64d0:82ff:fe63:5ab8%5]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 11 cali1967065c9e9 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 12 cali3c0bb10aa8b [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 13 cali4ec107fc8a6 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 14 calid250f2a9505 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 15 calic7df43f4535 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 16 calie129728f4ab [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 17 cali468d375460c [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 23:58:39.569922 ntpd[1489]: 15 Jul 23:58:39 ntpd[1489]: Listen normally on 18 cali7c02870beea [fe80::ecee:eeff:feee:eeee%15]:123 Jul 15 23:58:39.567705 ntpd[1489]: Listen normally on 9 cali25e3abc9277 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 23:58:39.567796 ntpd[1489]: Listen normally on 10 vxlan.calico [fe80::64d0:82ff:fe63:5ab8%5]:123 Jul 15 23:58:39.567855 
ntpd[1489]: Listen normally on 11 cali1967065c9e9 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 23:58:39.568765 ntpd[1489]: Listen normally on 12 cali3c0bb10aa8b [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 23:58:39.568864 ntpd[1489]: Listen normally on 13 cali4ec107fc8a6 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 23:58:39.568953 ntpd[1489]: Listen normally on 14 calid250f2a9505 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 23:58:39.569011 ntpd[1489]: Listen normally on 15 calic7df43f4535 [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 23:58:39.569065 ntpd[1489]: Listen normally on 16 calie129728f4ab [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 23:58:39.569118 ntpd[1489]: Listen normally on 17 cali468d375460c [fe80::ecee:eeff:feee:eeee%14]:123 Jul 15 23:58:39.569175 ntpd[1489]: Listen normally on 18 cali7c02870beea [fe80::ecee:eeff:feee:eeee%15]:123 Jul 15 23:58:39.644546 kubelet[2737]: I0715 23:58:39.643783 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:40.718443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990006953.mount: Deactivated successfully. 
Jul 15 23:58:41.634781 containerd[1533]: time="2025-07-15T23:58:41.634713069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:41.635992 containerd[1533]: time="2025-07-15T23:58:41.635933377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 23:58:41.637607 containerd[1533]: time="2025-07-15T23:58:41.637514491Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:41.640847 containerd[1533]: time="2025-07-15T23:58:41.640746059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:41.641915 containerd[1533]: time="2025-07-15T23:58:41.641826847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.780475745s" Jul 15 23:58:41.641915 containerd[1533]: time="2025-07-15T23:58:41.641911785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 23:58:41.644037 containerd[1533]: time="2025-07-15T23:58:41.643987785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 23:58:41.646543 containerd[1533]: time="2025-07-15T23:58:41.646509451Z" level=info msg="CreateContainer within sandbox \"8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 23:58:41.659563 containerd[1533]: time="2025-07-15T23:58:41.658062704Z" level=info msg="Container e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:41.677917 containerd[1533]: time="2025-07-15T23:58:41.677826391Z" level=info msg="CreateContainer within sandbox \"8683bcb1d5086bfb3c70f96a870b6d1382672674dfef3e8a5a6e2a044d42a839\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\"" Jul 15 23:58:41.678906 containerd[1533]: time="2025-07-15T23:58:41.678796810Z" level=info msg="StartContainer for \"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\"" Jul 15 23:58:41.681694 containerd[1533]: time="2025-07-15T23:58:41.681652132Z" level=info msg="connecting to shim e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf" address="unix:///run/containerd/s/77279aa98ffec10c3b31a1c1b4a5536ccd6c26fb42b211eea2625c0e4fc89b43" protocol=ttrpc version=3 Jul 15 23:58:41.718153 systemd[1]: Started cri-containerd-e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf.scope - libcontainer container e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf. 
Jul 15 23:58:41.818240 containerd[1533]: time="2025-07-15T23:58:41.818189828Z" level=info msg="StartContainer for \"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\" returns successfully" Jul 15 23:58:42.663865 containerd[1533]: time="2025-07-15T23:58:42.663812756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:42.665940 containerd[1533]: time="2025-07-15T23:58:42.665902300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 23:58:42.668086 containerd[1533]: time="2025-07-15T23:58:42.667910514Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:42.676377 containerd[1533]: time="2025-07-15T23:58:42.676329461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:42.677842 containerd[1533]: time="2025-07-15T23:58:42.677807518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.033768554s" Jul 15 23:58:42.678416 containerd[1533]: time="2025-07-15T23:58:42.677942639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 23:58:42.690634 kubelet[2737]: I0715 23:58:42.690513 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-5844bd86db-66x54" podStartSLOduration=33.281701781 podStartE2EDuration="36.690480801s" podCreationTimestamp="2025-07-15 23:58:06 +0000 UTC" firstStartedPulling="2025-07-15 23:58:34.452387663 +0000 UTC m=+45.374034561" lastFinishedPulling="2025-07-15 23:58:37.861166683 +0000 UTC m=+48.782813581" observedRunningTime="2025-07-15 23:58:38.66413592 +0000 UTC m=+49.585782822" watchObservedRunningTime="2025-07-15 23:58:42.690480801 +0000 UTC m=+53.612127700" Jul 15 23:58:42.692223 containerd[1533]: time="2025-07-15T23:58:42.692194455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 23:58:42.694935 containerd[1533]: time="2025-07-15T23:58:42.694459869Z" level=info msg="CreateContainer within sandbox \"8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 23:58:42.695456 kubelet[2737]: I0715 23:58:42.695338 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-q2l8r" podStartSLOduration=25.696323031 podStartE2EDuration="32.695316695s" podCreationTimestamp="2025-07-15 23:58:10 +0000 UTC" firstStartedPulling="2025-07-15 23:58:34.644348712 +0000 UTC m=+45.565995604" lastFinishedPulling="2025-07-15 23:58:41.643342372 +0000 UTC m=+52.564989268" observedRunningTime="2025-07-15 23:58:42.6868455 +0000 UTC m=+53.608492422" watchObservedRunningTime="2025-07-15 23:58:42.695316695 +0000 UTC m=+53.616963598" Jul 15 23:58:42.744498 containerd[1533]: time="2025-07-15T23:58:42.744449904Z" level=info msg="Container dbc087f4caef01b6e4c2eccc62d4f4f13e85c370efaf50232c125b896db6fa7a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:42.756691 containerd[1533]: time="2025-07-15T23:58:42.756640285Z" level=info msg="CreateContainer within sandbox \"8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns 
container id \"dbc087f4caef01b6e4c2eccc62d4f4f13e85c370efaf50232c125b896db6fa7a\"" Jul 15 23:58:42.758765 containerd[1533]: time="2025-07-15T23:58:42.757657688Z" level=info msg="StartContainer for \"dbc087f4caef01b6e4c2eccc62d4f4f13e85c370efaf50232c125b896db6fa7a\"" Jul 15 23:58:42.760058 containerd[1533]: time="2025-07-15T23:58:42.760025672Z" level=info msg="connecting to shim dbc087f4caef01b6e4c2eccc62d4f4f13e85c370efaf50232c125b896db6fa7a" address="unix:///run/containerd/s/587e28db19b346b5bbd39a59486128b6b74f0599fc7ae4bc2d19d87c272f67af" protocol=ttrpc version=3 Jul 15 23:58:42.801156 systemd[1]: Started cri-containerd-dbc087f4caef01b6e4c2eccc62d4f4f13e85c370efaf50232c125b896db6fa7a.scope - libcontainer container dbc087f4caef01b6e4c2eccc62d4f4f13e85c370efaf50232c125b896db6fa7a. Jul 15 23:58:42.875565 containerd[1533]: time="2025-07-15T23:58:42.875454999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\" id:\"aa2c99ce378f01ddeebe556b16452a38eb4fde0df52b77596a6020db2f2bb6b1\" pid:5025 exit_status:1 exited_at:{seconds:1752623922 nanos:873448641}" Jul 15 23:58:42.886222 containerd[1533]: time="2025-07-15T23:58:42.886180633Z" level=info msg="StartContainer for \"dbc087f4caef01b6e4c2eccc62d4f4f13e85c370efaf50232c125b896db6fa7a\" returns successfully" Jul 15 23:58:43.024969 kubelet[2737]: I0715 23:58:43.024414 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:43.805517 containerd[1533]: time="2025-07-15T23:58:43.805156999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\" id:\"fb98a74f2886bd9a75e01bad9bdd14aaa60677e337d698b7993c6f59e7ccb447\" pid:5076 exit_status:1 exited_at:{seconds:1752623923 nanos:804517189}" Jul 15 23:58:44.860349 containerd[1533]: time="2025-07-15T23:58:44.860289530Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\" id:\"88b51fb24a1abf9dba2039d163074f7e1fae7f9c2e61db60183c75a9592bbfa8\" pid:5102 exit_status:1 exited_at:{seconds:1752623924 nanos:859789706}" Jul 15 23:58:45.679433 containerd[1533]: time="2025-07-15T23:58:45.679340183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:45.680864 containerd[1533]: time="2025-07-15T23:58:45.680790698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 23:58:45.682271 containerd[1533]: time="2025-07-15T23:58:45.681985301Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:45.684720 containerd[1533]: time="2025-07-15T23:58:45.684683078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:45.685903 containerd[1533]: time="2025-07-15T23:58:45.685591804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.993355493s" Jul 15 23:58:45.685903 containerd[1533]: time="2025-07-15T23:58:45.685634539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 23:58:45.687649 containerd[1533]: 
time="2025-07-15T23:58:45.687614891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:58:45.710380 containerd[1533]: time="2025-07-15T23:58:45.709598278Z" level=info msg="CreateContainer within sandbox \"f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 23:58:45.721090 containerd[1533]: time="2025-07-15T23:58:45.721051149Z" level=info msg="Container 737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:45.733310 containerd[1533]: time="2025-07-15T23:58:45.733265884Z" level=info msg="CreateContainer within sandbox \"f81ff3370e11842607f24bd263c4fc289285ad7996e5717fd395b76eee5cba9e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94\"" Jul 15 23:58:45.733798 containerd[1533]: time="2025-07-15T23:58:45.733754915Z" level=info msg="StartContainer for \"737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94\"" Jul 15 23:58:45.737093 containerd[1533]: time="2025-07-15T23:58:45.736859177Z" level=info msg="connecting to shim 737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94" address="unix:///run/containerd/s/25405190b62be58ce906d7e9da53f544e755bd36d69e88f4ea078c6b3eaeaea3" protocol=ttrpc version=3 Jul 15 23:58:45.778109 systemd[1]: Started cri-containerd-737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94.scope - libcontainer container 737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94. 
Jul 15 23:58:45.854133 containerd[1533]: time="2025-07-15T23:58:45.854083283Z" level=info msg="StartContainer for \"737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94\" returns successfully" Jul 15 23:58:45.894995 containerd[1533]: time="2025-07-15T23:58:45.893849874Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:45.897084 containerd[1533]: time="2025-07-15T23:58:45.897003573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:58:45.901587 containerd[1533]: time="2025-07-15T23:58:45.901535683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 213.877864ms" Jul 15 23:58:45.901758 containerd[1533]: time="2025-07-15T23:58:45.901730869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:58:45.904823 containerd[1533]: time="2025-07-15T23:58:45.904787181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:58:45.905903 containerd[1533]: time="2025-07-15T23:58:45.905786809Z" level=info msg="CreateContainer within sandbox \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:58:45.915478 containerd[1533]: time="2025-07-15T23:58:45.915446811Z" level=info msg="Container 23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:45.936421 containerd[1533]: 
time="2025-07-15T23:58:45.936312257Z" level=info msg="CreateContainer within sandbox \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\"" Jul 15 23:58:45.939302 containerd[1533]: time="2025-07-15T23:58:45.939271098Z" level=info msg="StartContainer for \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\"" Jul 15 23:58:45.942060 containerd[1533]: time="2025-07-15T23:58:45.942015864Z" level=info msg="connecting to shim 23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d" address="unix:///run/containerd/s/afc325730df4d565fa3949333021a593109371785137d0451882e89aff1d9ae0" protocol=ttrpc version=3 Jul 15 23:58:45.981110 systemd[1]: Started cri-containerd-23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d.scope - libcontainer container 23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d. 
Jul 15 23:58:46.104655 containerd[1533]: time="2025-07-15T23:58:46.104582644Z" level=info msg="StartContainer for \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" returns successfully" Jul 15 23:58:46.124911 containerd[1533]: time="2025-07-15T23:58:46.124263764Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:46.126983 containerd[1533]: time="2025-07-15T23:58:46.126948140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:58:46.129991 containerd[1533]: time="2025-07-15T23:58:46.129942011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 225.086516ms" Jul 15 23:58:46.130124 containerd[1533]: time="2025-07-15T23:58:46.130100780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:58:46.132441 containerd[1533]: time="2025-07-15T23:58:46.132411603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 23:58:46.134540 containerd[1533]: time="2025-07-15T23:58:46.134499466Z" level=info msg="CreateContainer within sandbox \"6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:58:46.144429 containerd[1533]: time="2025-07-15T23:58:46.143752738Z" level=info msg="Container d51b2b8d0e7fd45f8f40da01041e5e26dac1f6864dea1851f821f03d9e18f732: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:46.165193 
containerd[1533]: time="2025-07-15T23:58:46.165152387Z" level=info msg="CreateContainer within sandbox \"6bcd7bf2b1ae8dddf60facc70498d7923d30a9aa694cf5ddbeb073a4635f75b8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d51b2b8d0e7fd45f8f40da01041e5e26dac1f6864dea1851f821f03d9e18f732\"" Jul 15 23:58:46.166453 containerd[1533]: time="2025-07-15T23:58:46.166405875Z" level=info msg="StartContainer for \"d51b2b8d0e7fd45f8f40da01041e5e26dac1f6864dea1851f821f03d9e18f732\"" Jul 15 23:58:46.168503 containerd[1533]: time="2025-07-15T23:58:46.168421637Z" level=info msg="connecting to shim d51b2b8d0e7fd45f8f40da01041e5e26dac1f6864dea1851f821f03d9e18f732" address="unix:///run/containerd/s/04580e2064eb769f29814b3f5f9e5b95bd15fa5e79d569a49b46326820e7ca0f" protocol=ttrpc version=3 Jul 15 23:58:46.208138 systemd[1]: Started cri-containerd-d51b2b8d0e7fd45f8f40da01041e5e26dac1f6864dea1851f821f03d9e18f732.scope - libcontainer container d51b2b8d0e7fd45f8f40da01041e5e26dac1f6864dea1851f821f03d9e18f732. 
Jul 15 23:58:46.291825 containerd[1533]: time="2025-07-15T23:58:46.291776001Z" level=info msg="StartContainer for \"d51b2b8d0e7fd45f8f40da01041e5e26dac1f6864dea1851f821f03d9e18f732\" returns successfully" Jul 15 23:58:46.732964 kubelet[2737]: I0715 23:58:46.731852 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74787ffd8f-n6dr5" podStartSLOduration=29.418823942 podStartE2EDuration="39.731826808s" podCreationTimestamp="2025-07-15 23:58:07 +0000 UTC" firstStartedPulling="2025-07-15 23:58:35.818056345 +0000 UTC m=+46.739703238" lastFinishedPulling="2025-07-15 23:58:46.131059208 +0000 UTC m=+57.052706104" observedRunningTime="2025-07-15 23:58:46.730250541 +0000 UTC m=+57.651897442" watchObservedRunningTime="2025-07-15 23:58:46.731826808 +0000 UTC m=+57.653473711" Jul 15 23:58:46.791846 kubelet[2737]: I0715 23:58:46.790002 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5844bd86db-9djkg" podStartSLOduration=30.299651881 podStartE2EDuration="40.789978205s" podCreationTimestamp="2025-07-15 23:58:06 +0000 UTC" firstStartedPulling="2025-07-15 23:58:35.413116899 +0000 UTC m=+46.334763802" lastFinishedPulling="2025-07-15 23:58:45.903443227 +0000 UTC m=+56.825090126" observedRunningTime="2025-07-15 23:58:46.788620233 +0000 UTC m=+57.710267116" watchObservedRunningTime="2025-07-15 23:58:46.789978205 +0000 UTC m=+57.711625110" Jul 15 23:58:46.791846 kubelet[2737]: I0715 23:58:46.790627 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64b477f76d-65q9p" podStartSLOduration=25.362464622 podStartE2EDuration="35.790602123s" podCreationTimestamp="2025-07-15 23:58:11 +0000 UTC" firstStartedPulling="2025-07-15 23:58:35.258622565 +0000 UTC m=+46.180269458" lastFinishedPulling="2025-07-15 23:58:45.68676006 +0000 UTC m=+56.608406959" observedRunningTime="2025-07-15 23:58:46.76584469 +0000 UTC 
m=+57.687491592" watchObservedRunningTime="2025-07-15 23:58:46.790602123 +0000 UTC m=+57.712249026" Jul 15 23:58:46.984016 containerd[1533]: time="2025-07-15T23:58:46.983572508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa\" id:\"06157f846acfb2d20def4791b9c4daff1a5c81eeef5ee72be103127187af64a1\" pid:5237 exited_at:{seconds:1752623926 nanos:983190252}" Jul 15 23:58:47.015776 containerd[1533]: time="2025-07-15T23:58:47.015478249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94\" id:\"f51ebd63304f685cc7f8ff0dd7c66f1fd780707726444b25b1d314b76d3407c6\" pid:5262 exited_at:{seconds:1752623927 nanos:12868657}" Jul 15 23:58:47.644320 containerd[1533]: time="2025-07-15T23:58:47.644265193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:47.646516 containerd[1533]: time="2025-07-15T23:58:47.646475707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 23:58:47.647517 containerd[1533]: time="2025-07-15T23:58:47.647480414Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:47.651546 containerd[1533]: time="2025-07-15T23:58:47.651499233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:58:47.652623 containerd[1533]: time="2025-07-15T23:58:47.652575818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id 
\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.519992813s" Jul 15 23:58:47.652720 containerd[1533]: time="2025-07-15T23:58:47.652626619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 23:58:47.657059 containerd[1533]: time="2025-07-15T23:58:47.657025311Z" level=info msg="CreateContainer within sandbox \"8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 23:58:47.672101 containerd[1533]: time="2025-07-15T23:58:47.671990744Z" level=info msg="Container a251c8e5f26b6ea4e8cc056c5031b47101096a2ecd8d4a4460f61ca2b51d9391: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:58:47.691427 containerd[1533]: time="2025-07-15T23:58:47.691380589Z" level=info msg="CreateContainer within sandbox \"8e3bb7c491bb1dda46cb4bd4a7456d3728a5148ede2eb5e5d2feb633c503358b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a251c8e5f26b6ea4e8cc056c5031b47101096a2ecd8d4a4460f61ca2b51d9391\"" Jul 15 23:58:47.693525 containerd[1533]: time="2025-07-15T23:58:47.693488605Z" level=info msg="StartContainer for \"a251c8e5f26b6ea4e8cc056c5031b47101096a2ecd8d4a4460f61ca2b51d9391\"" Jul 15 23:58:47.695903 containerd[1533]: time="2025-07-15T23:58:47.695840090Z" level=info msg="connecting to shim a251c8e5f26b6ea4e8cc056c5031b47101096a2ecd8d4a4460f61ca2b51d9391" address="unix:///run/containerd/s/587e28db19b346b5bbd39a59486128b6b74f0599fc7ae4bc2d19d87c272f67af" protocol=ttrpc version=3 Jul 15 23:58:47.748329 systemd[1]: Started 
cri-containerd-a251c8e5f26b6ea4e8cc056c5031b47101096a2ecd8d4a4460f61ca2b51d9391.scope - libcontainer container a251c8e5f26b6ea4e8cc056c5031b47101096a2ecd8d4a4460f61ca2b51d9391. Jul 15 23:58:47.760421 kubelet[2737]: I0715 23:58:47.760245 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:47.761988 kubelet[2737]: I0715 23:58:47.761968 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:47.907666 containerd[1533]: time="2025-07-15T23:58:47.906547652Z" level=info msg="StartContainer for \"a251c8e5f26b6ea4e8cc056c5031b47101096a2ecd8d4a4460f61ca2b51d9391\" returns successfully" Jul 15 23:58:48.416904 kubelet[2737]: I0715 23:58:48.416686 2737 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 23:58:48.416904 kubelet[2737]: I0715 23:58:48.416725 2737 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 23:58:48.789592 kubelet[2737]: I0715 23:58:48.788992 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h292n" podStartSLOduration=24.992048692 podStartE2EDuration="37.788948812s" podCreationTimestamp="2025-07-15 23:58:11 +0000 UTC" firstStartedPulling="2025-07-15 23:58:34.857912743 +0000 UTC m=+45.779559649" lastFinishedPulling="2025-07-15 23:58:47.654812882 +0000 UTC m=+58.576459769" observedRunningTime="2025-07-15 23:58:48.788219752 +0000 UTC m=+59.709866701" watchObservedRunningTime="2025-07-15 23:58:48.788948812 +0000 UTC m=+59.710595714" Jul 15 23:58:51.293792 containerd[1533]: time="2025-07-15T23:58:51.293733451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94\" 
id:\"4b3e7485801b5905212690fb37dcba044c5ebcec846227c9a7dafed324440957\" pid:5331 exited_at:{seconds:1752623931 nanos:293204778}" Jul 15 23:58:51.351433 containerd[1533]: time="2025-07-15T23:58:51.351305004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\" id:\"583a033d1302f8b975e6dfed361ac4206d1275e652d5ef0229cb211587524907\" pid:5348 exited_at:{seconds:1752623931 nanos:350988011}" Jul 15 23:58:53.283255 systemd[1]: Started sshd@7-10.128.0.36:22-139.178.89.65:35628.service - OpenSSH per-connection server daemon (139.178.89.65:35628). Jul 15 23:58:53.615509 sshd[5373]: Accepted publickey for core from 139.178.89.65 port 35628 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:53.620414 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:53.632972 systemd-logind[1507]: New session 8 of user core. Jul 15 23:58:53.639946 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 23:58:54.031912 sshd[5375]: Connection closed by 139.178.89.65 port 35628 Jul 15 23:58:54.034176 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:54.045536 systemd[1]: sshd@7-10.128.0.36:22-139.178.89.65:35628.service: Deactivated successfully. Jul 15 23:58:54.050574 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:58:54.055640 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:58:54.059469 systemd-logind[1507]: Removed session 8. Jul 15 23:58:59.091381 systemd[1]: Started sshd@8-10.128.0.36:22-139.178.89.65:60616.service - OpenSSH per-connection server daemon (139.178.89.65:60616). 
Jul 15 23:58:59.332929 kubelet[2737]: I0715 23:58:59.331200 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:58:59.428278 sshd[5392]: Accepted publickey for core from 139.178.89.65 port 60616 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:58:59.430837 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:58:59.449961 systemd-logind[1507]: New session 9 of user core. Jul 15 23:58:59.455975 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 23:58:59.775392 sshd[5394]: Connection closed by 139.178.89.65 port 60616 Jul 15 23:58:59.776147 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Jul 15 23:58:59.785485 systemd[1]: sshd@8-10.128.0.36:22-139.178.89.65:60616.service: Deactivated successfully. Jul 15 23:58:59.789282 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 23:58:59.792162 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. Jul 15 23:58:59.794480 systemd-logind[1507]: Removed session 9. Jul 15 23:59:04.838239 systemd[1]: Started sshd@9-10.128.0.36:22-139.178.89.65:60624.service - OpenSSH per-connection server daemon (139.178.89.65:60624). Jul 15 23:59:05.166235 sshd[5410]: Accepted publickey for core from 139.178.89.65 port 60624 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:59:05.169769 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:05.180774 systemd-logind[1507]: New session 10 of user core. Jul 15 23:59:05.189030 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 23:59:05.514253 sshd[5412]: Connection closed by 139.178.89.65 port 60624 Jul 15 23:59:05.515170 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:05.526263 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. 
Jul 15 23:59:05.528494 systemd[1]: sshd@9-10.128.0.36:22-139.178.89.65:60624.service: Deactivated successfully. Jul 15 23:59:05.533146 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 23:59:05.537699 systemd-logind[1507]: Removed session 10. Jul 15 23:59:05.574647 systemd[1]: Started sshd@10-10.128.0.36:22-139.178.89.65:60626.service - OpenSSH per-connection server daemon (139.178.89.65:60626). Jul 15 23:59:05.898430 sshd[5425]: Accepted publickey for core from 139.178.89.65 port 60626 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:59:05.901198 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:05.909245 systemd-logind[1507]: New session 11 of user core. Jul 15 23:59:05.919184 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 23:59:06.327194 sshd[5427]: Connection closed by 139.178.89.65 port 60626 Jul 15 23:59:06.326244 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:06.334584 systemd[1]: sshd@10-10.128.0.36:22-139.178.89.65:60626.service: Deactivated successfully. Jul 15 23:59:06.339786 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 23:59:06.343593 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. Jul 15 23:59:06.348620 systemd-logind[1507]: Removed session 11. Jul 15 23:59:06.387002 systemd[1]: Started sshd@11-10.128.0.36:22-139.178.89.65:60628.service - OpenSSH per-connection server daemon (139.178.89.65:60628). Jul 15 23:59:06.709531 sshd[5437]: Accepted publickey for core from 139.178.89.65 port 60628 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:59:06.712620 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:06.721018 systemd-logind[1507]: New session 12 of user core. Jul 15 23:59:06.729504 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 15 23:59:07.116389 sshd[5439]: Connection closed by 139.178.89.65 port 60628 Jul 15 23:59:07.119120 sshd-session[5437]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:07.127913 systemd[1]: sshd@11-10.128.0.36:22-139.178.89.65:60628.service: Deactivated successfully. Jul 15 23:59:07.134287 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 23:59:07.137975 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. Jul 15 23:59:07.145364 systemd-logind[1507]: Removed session 12. Jul 15 23:59:12.176577 systemd[1]: Started sshd@12-10.128.0.36:22-139.178.89.65:35966.service - OpenSSH per-connection server daemon (139.178.89.65:35966). Jul 15 23:59:12.534777 sshd[5457]: Accepted publickey for core from 139.178.89.65 port 35966 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:59:12.537261 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:12.551167 systemd-logind[1507]: New session 13 of user core. Jul 15 23:59:12.557101 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 23:59:12.876011 sshd[5462]: Connection closed by 139.178.89.65 port 35966 Jul 15 23:59:12.877053 sshd-session[5457]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:12.884663 systemd[1]: sshd@12-10.128.0.36:22-139.178.89.65:35966.service: Deactivated successfully. Jul 15 23:59:12.885936 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. Jul 15 23:59:12.892011 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 23:59:12.898848 systemd-logind[1507]: Removed session 13. 
Jul 15 23:59:15.657907 kubelet[2737]: I0715 23:59:15.657467 2737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:59:15.817220 containerd[1533]: time="2025-07-15T23:59:15.817155190Z" level=info msg="StopContainer for \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" with timeout 30 (s)" Jul 15 23:59:15.819449 containerd[1533]: time="2025-07-15T23:59:15.819417182Z" level=info msg="Stop container \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" with signal terminated" Jul 15 23:59:15.965898 systemd[1]: cri-containerd-23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d.scope: Deactivated successfully. Jul 15 23:59:15.966361 systemd[1]: cri-containerd-23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d.scope: Consumed 1.008s CPU time, 45.1M memory peak. Jul 15 23:59:15.973240 containerd[1533]: time="2025-07-15T23:59:15.973185243Z" level=info msg="received exit event container_id:\"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" id:\"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" pid:5167 exit_status:1 exited_at:{seconds:1752623955 nanos:972075886}" Jul 15 23:59:15.973869 containerd[1533]: time="2025-07-15T23:59:15.973643937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" id:\"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" pid:5167 exit_status:1 exited_at:{seconds:1752623955 nanos:972075886}" Jul 15 23:59:16.028301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d-rootfs.mount: Deactivated successfully. 
Jul 15 23:59:16.708578 containerd[1533]: time="2025-07-15T23:59:16.708523631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa\" id:\"c679317fc1c9274c294f0971660470529bf0386d3127c4e28093a32140e0c9e3\" pid:5513 exit_status:1 exited_at:{seconds:1752623956 nanos:707180201}" Jul 15 23:59:17.231412 containerd[1533]: time="2025-07-15T23:59:17.231308772Z" level=info msg="StopContainer for \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" returns successfully" Jul 15 23:59:17.235218 containerd[1533]: time="2025-07-15T23:59:17.235121468Z" level=info msg="StopPodSandbox for \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\"" Jul 15 23:59:17.235628 containerd[1533]: time="2025-07-15T23:59:17.235561953Z" level=info msg="Container to stop \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 23:59:17.271367 systemd[1]: cri-containerd-96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6.scope: Deactivated successfully. 
Jul 15 23:59:17.275237 containerd[1533]: time="2025-07-15T23:59:17.274292765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" id:\"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" pid:4706 exit_status:137 exited_at:{seconds:1752623957 nanos:272120406}" Jul 15 23:59:17.330564 containerd[1533]: time="2025-07-15T23:59:17.330483259Z" level=info msg="shim disconnected" id=96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6 namespace=k8s.io Jul 15 23:59:17.330564 containerd[1533]: time="2025-07-15T23:59:17.330524141Z" level=warning msg="cleaning up after shim disconnected" id=96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6 namespace=k8s.io Jul 15 23:59:17.332169 containerd[1533]: time="2025-07-15T23:59:17.330537124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 23:59:17.333473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6-rootfs.mount: Deactivated successfully. Jul 15 23:59:17.362915 containerd[1533]: time="2025-07-15T23:59:17.362661778Z" level=info msg="received exit event sandbox_id:\"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" exit_status:137 exited_at:{seconds:1752623957 nanos:272120406}" Jul 15 23:59:17.372302 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6-shm.mount: Deactivated successfully. 
Jul 15 23:59:17.474573 systemd-networkd[1440]: calie129728f4ab: Link DOWN Jul 15 23:59:17.474598 systemd-networkd[1440]: calie129728f4ab: Lost carrier Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.467 [INFO][5578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.468 [INFO][5578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" iface="eth0" netns="/var/run/netns/cni-a91cd0b1-2eae-064d-ec15-b5e4ceb003a7" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.469 [INFO][5578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" iface="eth0" netns="/var/run/netns/cni-a91cd0b1-2eae-064d-ec15-b5e4ceb003a7" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.484 [INFO][5578] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" after=16.080913ms iface="eth0" netns="/var/run/netns/cni-a91cd0b1-2eae-064d-ec15-b5e4ceb003a7" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.485 [INFO][5578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.485 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.563 [INFO][5586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.563 [INFO][5586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.563 [INFO][5586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.629 [INFO][5586] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.629 [INFO][5586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.632 [INFO][5586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:59:17.641532 containerd[1533]: 2025-07-15 23:59:17.636 [INFO][5578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:17.647954 containerd[1533]: time="2025-07-15T23:59:17.643956658Z" level=info msg="TearDown network for sandbox \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" successfully" Jul 15 23:59:17.648068 containerd[1533]: time="2025-07-15T23:59:17.647964231Z" level=info msg="StopPodSandbox for \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" returns successfully" Jul 15 23:59:17.653769 systemd[1]: run-netns-cni\x2da91cd0b1\x2d2eae\x2d064d\x2dec15\x2db5e4ceb003a7.mount: Deactivated successfully. 
Jul 15 23:59:17.716106 kubelet[2737]: I0715 23:59:17.716062 2737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg6mf\" (UniqueName: \"kubernetes.io/projected/e7f09e50-9536-425d-816d-b82a0bd87ca6-kube-api-access-pg6mf\") pod \"e7f09e50-9536-425d-816d-b82a0bd87ca6\" (UID: \"e7f09e50-9536-425d-816d-b82a0bd87ca6\") " Jul 15 23:59:17.717538 kubelet[2737]: I0715 23:59:17.716126 2737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7f09e50-9536-425d-816d-b82a0bd87ca6-calico-apiserver-certs\") pod \"e7f09e50-9536-425d-816d-b82a0bd87ca6\" (UID: \"e7f09e50-9536-425d-816d-b82a0bd87ca6\") " Jul 15 23:59:17.728176 kubelet[2737]: I0715 23:59:17.728133 2737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7f09e50-9536-425d-816d-b82a0bd87ca6-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e7f09e50-9536-425d-816d-b82a0bd87ca6" (UID: "e7f09e50-9536-425d-816d-b82a0bd87ca6"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 23:59:17.728296 kubelet[2737]: I0715 23:59:17.728264 2737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f09e50-9536-425d-816d-b82a0bd87ca6-kube-api-access-pg6mf" (OuterVolumeSpecName: "kube-api-access-pg6mf") pod "e7f09e50-9536-425d-816d-b82a0bd87ca6" (UID: "e7f09e50-9536-425d-816d-b82a0bd87ca6"). InnerVolumeSpecName "kube-api-access-pg6mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 23:59:17.730732 systemd[1]: var-lib-kubelet-pods-e7f09e50\x2d9536\x2d425d\x2d816d\x2db82a0bd87ca6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpg6mf.mount: Deactivated successfully. 
Jul 15 23:59:17.738687 systemd[1]: var-lib-kubelet-pods-e7f09e50\x2d9536\x2d425d\x2d816d\x2db82a0bd87ca6-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 15 23:59:17.817058 kubelet[2737]: I0715 23:59:17.817005 2737 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7f09e50-9536-425d-816d-b82a0bd87ca6-calico-apiserver-certs\") on node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" DevicePath \"\"" Jul 15 23:59:17.817378 kubelet[2737]: I0715 23:59:17.817333 2737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg6mf\" (UniqueName: \"kubernetes.io/projected/e7f09e50-9536-425d-816d-b82a0bd87ca6-kube-api-access-pg6mf\") on node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" DevicePath \"\"" Jul 15 23:59:17.884376 kubelet[2737]: I0715 23:59:17.884329 2737 scope.go:117] "RemoveContainer" containerID="23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d" Jul 15 23:59:17.894989 containerd[1533]: time="2025-07-15T23:59:17.893388282Z" level=info msg="RemoveContainer for \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\"" Jul 15 23:59:17.904212 systemd[1]: Removed slice kubepods-besteffort-pode7f09e50_9536_425d_816d_b82a0bd87ca6.slice - libcontainer container kubepods-besteffort-pode7f09e50_9536_425d_816d_b82a0bd87ca6.slice. Jul 15 23:59:17.904389 systemd[1]: kubepods-besteffort-pode7f09e50_9536_425d_816d_b82a0bd87ca6.slice: Consumed 1.058s CPU time, 45.4M memory peak. 
Jul 15 23:59:17.908102 containerd[1533]: time="2025-07-15T23:59:17.908056505Z" level=info msg="RemoveContainer for \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" returns successfully" Jul 15 23:59:17.909332 kubelet[2737]: I0715 23:59:17.909298 2737 scope.go:117] "RemoveContainer" containerID="23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d" Jul 15 23:59:17.909588 containerd[1533]: time="2025-07-15T23:59:17.909532119Z" level=error msg="ContainerStatus for \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\": not found" Jul 15 23:59:17.909748 kubelet[2737]: E0715 23:59:17.909717 2737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\": not found" containerID="23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d" Jul 15 23:59:17.910388 kubelet[2737]: I0715 23:59:17.909761 2737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d"} err="failed to get container status \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"23d5808b6fba17c12e642012690e7c2744696986976888a273c1b9f6d7b71e6d\": not found" Jul 15 23:59:17.940225 systemd[1]: Started sshd@13-10.128.0.36:22-139.178.89.65:35972.service - OpenSSH per-connection server daemon (139.178.89.65:35972). 
Jul 15 23:59:18.273052 sshd[5605]: Accepted publickey for core from 139.178.89.65 port 35972 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:59:18.275500 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:18.287173 systemd-logind[1507]: New session 14 of user core. Jul 15 23:59:18.297047 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 23:59:18.635606 sshd[5607]: Connection closed by 139.178.89.65 port 35972 Jul 15 23:59:18.637378 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:18.649380 systemd[1]: sshd@13-10.128.0.36:22-139.178.89.65:35972.service: Deactivated successfully. Jul 15 23:59:18.654382 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 23:59:18.659603 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. Jul 15 23:59:18.663742 systemd-logind[1507]: Removed session 14. Jul 15 23:59:19.256666 kubelet[2737]: I0715 23:59:19.256598 2737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f09e50-9536-425d-816d-b82a0bd87ca6" path="/var/lib/kubelet/pods/e7f09e50-9536-425d-816d-b82a0bd87ca6/volumes" Jul 15 23:59:19.565512 ntpd[1489]: Deleting interface #16 calie129728f4ab, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=40 secs Jul 15 23:59:19.566103 ntpd[1489]: 15 Jul 23:59:19 ntpd[1489]: Deleting interface #16 calie129728f4ab, fe80::ecee:eeff:feee:eeee%13#123, interface stats: received=0, sent=0, dropped=0, active_time=40 secs Jul 15 23:59:21.391718 containerd[1533]: time="2025-07-15T23:59:21.391635024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94\" id:\"00c4a66b02f3d8130c6cb1ef1b064035945e54e06da27025ddda9eb08f73ae92\" pid:5637 exited_at:{seconds:1752623961 nanos:390224913}" Jul 15 23:59:21.435381 containerd[1533]: 
time="2025-07-15T23:59:21.435294378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\" id:\"373ad45e57529efd2ca2f2e1a551bae4194edc77167ed3f17c759037b3390347\" pid:5650 exited_at:{seconds:1752623961 nanos:433761114}" Jul 15 23:59:23.698216 systemd[1]: Started sshd@14-10.128.0.36:22-139.178.89.65:57270.service - OpenSSH per-connection server daemon (139.178.89.65:57270). Jul 15 23:59:24.048924 sshd[5667]: Accepted publickey for core from 139.178.89.65 port 57270 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:59:24.049260 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:24.059276 systemd-logind[1507]: New session 15 of user core. Jul 15 23:59:24.066465 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 23:59:24.427096 sshd[5669]: Connection closed by 139.178.89.65 port 57270 Jul 15 23:59:24.429248 sshd-session[5667]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:24.443175 systemd[1]: sshd@14-10.128.0.36:22-139.178.89.65:57270.service: Deactivated successfully. Jul 15 23:59:24.448725 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 23:59:24.452041 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. Jul 15 23:59:24.455740 systemd-logind[1507]: Removed session 15. Jul 15 23:59:29.042303 containerd[1533]: time="2025-07-15T23:59:29.042245337Z" level=info msg="StopContainer for \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" with timeout 30 (s)" Jul 15 23:59:29.045192 containerd[1533]: time="2025-07-15T23:59:29.045040070Z" level=info msg="Stop container \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" with signal terminated" Jul 15 23:59:29.116066 systemd[1]: cri-containerd-a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06.scope: Deactivated successfully. 
Jul 15 23:59:29.119056 systemd[1]: cri-containerd-a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06.scope: Consumed 1.702s CPU time, 54.8M memory peak.
Jul 15 23:59:29.122981 containerd[1533]: time="2025-07-15T23:59:29.122368176Z" level=info msg="received exit event container_id:\"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" id:\"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" pid:4926 exit_status:1 exited_at:{seconds:1752623969 nanos:119272501}"
Jul 15 23:59:29.124034 containerd[1533]: time="2025-07-15T23:59:29.123975623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" id:\"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" pid:4926 exit_status:1 exited_at:{seconds:1752623969 nanos:119272501}"
Jul 15 23:59:29.183729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06-rootfs.mount: Deactivated successfully.
Jul 15 23:59:29.205505 containerd[1533]: time="2025-07-15T23:59:29.205436441Z" level=info msg="StopContainer for \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" returns successfully"
Jul 15 23:59:29.206436 containerd[1533]: time="2025-07-15T23:59:29.206402504Z" level=info msg="StopPodSandbox for \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\""
Jul 15 23:59:29.206559 containerd[1533]: time="2025-07-15T23:59:29.206489585Z" level=info msg="Container to stop \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:59:29.246096 systemd[1]: cri-containerd-b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce.scope: Deactivated successfully.
Jul 15 23:59:29.248329 containerd[1533]: time="2025-07-15T23:59:29.248203881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" id:\"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" pid:4400 exit_status:137 exited_at:{seconds:1752623969 nanos:246018176}"
Jul 15 23:59:29.334243 containerd[1533]: time="2025-07-15T23:59:29.334096479Z" level=info msg="shim disconnected" id=b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce namespace=k8s.io
Jul 15 23:59:29.334639 containerd[1533]: time="2025-07-15T23:59:29.334488922Z" level=warning msg="cleaning up after shim disconnected" id=b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce namespace=k8s.io
Jul 15 23:59:29.334639 containerd[1533]: time="2025-07-15T23:59:29.334516488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 23:59:29.340721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce-rootfs.mount: Deactivated successfully.
Jul 15 23:59:29.382909 containerd[1533]: time="2025-07-15T23:59:29.382160352Z" level=info msg="received exit event sandbox_id:\"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" exit_status:137 exited_at:{seconds:1752623969 nanos:246018176}"
Jul 15 23:59:29.392525 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce-shm.mount: Deactivated successfully.
Jul 15 23:59:29.490325 systemd[1]: Started sshd@15-10.128.0.36:22-139.178.89.65:50542.service - OpenSSH per-connection server daemon (139.178.89.65:50542).
Jul 15 23:59:29.549439 systemd-networkd[1440]: cali3c0bb10aa8b: Link DOWN
Jul 15 23:59:29.549454 systemd-networkd[1440]: cali3c0bb10aa8b: Lost carrier
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.543 [INFO][5759] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.544 [INFO][5759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" iface="eth0" netns="/var/run/netns/cni-a53bd835-0a9b-7d82-5f78-b815d96d9837"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.547 [INFO][5759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" iface="eth0" netns="/var/run/netns/cni-a53bd835-0a9b-7d82-5f78-b815d96d9837"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.561 [INFO][5759] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" after=16.622641ms iface="eth0" netns="/var/run/netns/cni-a53bd835-0a9b-7d82-5f78-b815d96d9837"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.561 [INFO][5759] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.561 [INFO][5759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.643 [INFO][5772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.644 [INFO][5772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.644 [INFO][5772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.706 [INFO][5772] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.707 [INFO][5772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.709 [INFO][5772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:59:29.718336 containerd[1533]: 2025-07-15 23:59:29.713 [INFO][5759] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:29.722471 containerd[1533]: time="2025-07-15T23:59:29.722287589Z" level=info msg="TearDown network for sandbox \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" successfully"
Jul 15 23:59:29.722471 containerd[1533]: time="2025-07-15T23:59:29.722341765Z" level=info msg="StopPodSandbox for \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" returns successfully"
Jul 15 23:59:29.727283 systemd[1]: run-netns-cni\x2da53bd835\x2d0a9b\x2d7d82\x2d5f78\x2db815d96d9837.mount: Deactivated successfully.
Jul 15 23:59:29.814626 kubelet[2737]: I0715 23:59:29.814084 2737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn67x\" (UniqueName: \"kubernetes.io/projected/13ab9fb8-5d89-46a3-931a-6a54b396abc5-kube-api-access-fn67x\") pod \"13ab9fb8-5d89-46a3-931a-6a54b396abc5\" (UID: \"13ab9fb8-5d89-46a3-931a-6a54b396abc5\") "
Jul 15 23:59:29.814626 kubelet[2737]: I0715 23:59:29.814146 2737 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13ab9fb8-5d89-46a3-931a-6a54b396abc5-calico-apiserver-certs\") pod \"13ab9fb8-5d89-46a3-931a-6a54b396abc5\" (UID: \"13ab9fb8-5d89-46a3-931a-6a54b396abc5\") "
Jul 15 23:59:29.827582 kubelet[2737]: I0715 23:59:29.827023 2737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ab9fb8-5d89-46a3-931a-6a54b396abc5-kube-api-access-fn67x" (OuterVolumeSpecName: "kube-api-access-fn67x") pod "13ab9fb8-5d89-46a3-931a-6a54b396abc5" (UID: "13ab9fb8-5d89-46a3-931a-6a54b396abc5"). InnerVolumeSpecName "kube-api-access-fn67x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 23:59:29.827582 kubelet[2737]: I0715 23:59:29.827118 2737 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ab9fb8-5d89-46a3-931a-6a54b396abc5-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "13ab9fb8-5d89-46a3-931a-6a54b396abc5" (UID: "13ab9fb8-5d89-46a3-931a-6a54b396abc5"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 15 23:59:29.828046 systemd[1]: var-lib-kubelet-pods-13ab9fb8\x2d5d89\x2d46a3\x2d931a\x2d6a54b396abc5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfn67x.mount: Deactivated successfully.
Jul 15 23:59:29.839008 sshd[5766]: Accepted publickey for core from 139.178.89.65 port 50542 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:59:29.842442 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:29.853207 systemd-logind[1507]: New session 16 of user core.
Jul 15 23:59:29.860154 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 23:59:29.916089 kubelet[2737]: I0715 23:59:29.916038 2737 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn67x\" (UniqueName: \"kubernetes.io/projected/13ab9fb8-5d89-46a3-931a-6a54b396abc5-kube-api-access-fn67x\") on node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" DevicePath \"\""
Jul 15 23:59:29.916089 kubelet[2737]: I0715 23:59:29.916084 2737 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13ab9fb8-5d89-46a3-931a-6a54b396abc5-calico-apiserver-certs\") on node \"ci-4372-0-1-nightly-20250715-2100-2d55144e0797c85416ce\" DevicePath \"\""
Jul 15 23:59:29.935918 kubelet[2737]: I0715 23:59:29.935761 2737 scope.go:117] "RemoveContainer" containerID="a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06"
Jul 15 23:59:29.940852 containerd[1533]: time="2025-07-15T23:59:29.940729554Z" level=info msg="RemoveContainer for \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\""
Jul 15 23:59:29.948130 systemd[1]: Removed slice kubepods-besteffort-pod13ab9fb8_5d89_46a3_931a_6a54b396abc5.slice - libcontainer container kubepods-besteffort-pod13ab9fb8_5d89_46a3_931a_6a54b396abc5.slice.
Jul 15 23:59:29.948302 systemd[1]: kubepods-besteffort-pod13ab9fb8_5d89_46a3_931a_6a54b396abc5.slice: Consumed 1.751s CPU time, 55.1M memory peak.
Jul 15 23:59:29.958442 containerd[1533]: time="2025-07-15T23:59:29.957313959Z" level=info msg="RemoveContainer for \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" returns successfully"
Jul 15 23:59:29.961178 kubelet[2737]: I0715 23:59:29.961124 2737 scope.go:117] "RemoveContainer" containerID="a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06"
Jul 15 23:59:29.964696 containerd[1533]: time="2025-07-15T23:59:29.963137331Z" level=error msg="ContainerStatus for \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\": not found"
Jul 15 23:59:29.975104 kubelet[2737]: E0715 23:59:29.974059 2737 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\": not found" containerID="a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06"
Jul 15 23:59:29.976406 kubelet[2737]: I0715 23:59:29.975941 2737 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06"} err="failed to get container status \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7fd1e3e86fdc9c282ddda376000f1a3fd94e9cda4f7748607397f7e241cfe06\": not found"
Jul 15 23:59:30.181512 systemd[1]: var-lib-kubelet-pods-13ab9fb8\x2d5d89\x2d46a3\x2d931a\x2d6a54b396abc5-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jul 15 23:59:30.258627 sshd[5786]: Connection closed by 139.178.89.65 port 50542
Jul 15 23:59:30.259175 sshd-session[5766]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:30.266805 systemd[1]: sshd@15-10.128.0.36:22-139.178.89.65:50542.service: Deactivated successfully.
Jul 15 23:59:30.271584 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 23:59:30.276700 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit.
Jul 15 23:59:30.279732 systemd-logind[1507]: Removed session 16.
Jul 15 23:59:30.318521 systemd[1]: Started sshd@16-10.128.0.36:22-139.178.89.65:50558.service - OpenSSH per-connection server daemon (139.178.89.65:50558).
Jul 15 23:59:30.652641 sshd[5799]: Accepted publickey for core from 139.178.89.65 port 50558 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:59:30.655210 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:30.669965 systemd-logind[1507]: New session 17 of user core.
Jul 15 23:59:30.676781 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 23:59:31.076144 sshd[5803]: Connection closed by 139.178.89.65 port 50558
Jul 15 23:59:31.077215 sshd-session[5799]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:31.084566 systemd[1]: sshd@16-10.128.0.36:22-139.178.89.65:50558.service: Deactivated successfully.
Jul 15 23:59:31.090302 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 23:59:31.095014 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit.
Jul 15 23:59:31.099546 systemd-logind[1507]: Removed session 17.
Jul 15 23:59:31.134032 systemd[1]: Started sshd@17-10.128.0.36:22-139.178.89.65:50572.service - OpenSSH per-connection server daemon (139.178.89.65:50572).
Jul 15 23:59:31.259904 kubelet[2737]: I0715 23:59:31.259404 2737 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ab9fb8-5d89-46a3-931a-6a54b396abc5" path="/var/lib/kubelet/pods/13ab9fb8-5d89-46a3-931a-6a54b396abc5/volumes"
Jul 15 23:59:31.458597 sshd[5814]: Accepted publickey for core from 139.178.89.65 port 50572 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:59:31.462432 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:31.474661 systemd-logind[1507]: New session 18 of user core.
Jul 15 23:59:31.484101 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 23:59:31.565584 ntpd[1489]: Deleting interface #12 cali3c0bb10aa8b, fe80::ecee:eeff:feee:eeee%9#123, interface stats: received=0, sent=0, dropped=0, active_time=52 secs
Jul 15 23:59:31.566121 ntpd[1489]: 15 Jul 23:59:31 ntpd[1489]: Deleting interface #12 cali3c0bb10aa8b, fe80::ecee:eeff:feee:eeee%9#123, interface stats: received=0, sent=0, dropped=0, active_time=52 secs
Jul 15 23:59:35.114134 sshd[5816]: Connection closed by 139.178.89.65 port 50572
Jul 15 23:59:35.115099 sshd-session[5814]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:35.124615 systemd[1]: sshd@17-10.128.0.36:22-139.178.89.65:50572.service: Deactivated successfully.
Jul 15 23:59:35.131118 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 23:59:35.132862 systemd[1]: session-18.scope: Consumed 978ms CPU time, 75.7M memory peak.
Jul 15 23:59:35.135625 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit.
Jul 15 23:59:35.141645 systemd-logind[1507]: Removed session 18.
Jul 15 23:59:35.174240 systemd[1]: Started sshd@18-10.128.0.36:22-139.178.89.65:50574.service - OpenSSH per-connection server daemon (139.178.89.65:50574).
Jul 15 23:59:35.496731 sshd[5835]: Accepted publickey for core from 139.178.89.65 port 50574 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:59:35.499336 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:35.511970 systemd-logind[1507]: New session 19 of user core.
Jul 15 23:59:35.519110 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 23:59:36.083600 sshd[5838]: Connection closed by 139.178.89.65 port 50574
Jul 15 23:59:36.084389 sshd-session[5835]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:36.093410 systemd[1]: sshd@18-10.128.0.36:22-139.178.89.65:50574.service: Deactivated successfully.
Jul 15 23:59:36.093705 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit.
Jul 15 23:59:36.099211 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 23:59:36.104722 systemd-logind[1507]: Removed session 19.
Jul 15 23:59:36.144528 systemd[1]: Started sshd@19-10.128.0.36:22-139.178.89.65:50576.service - OpenSSH per-connection server daemon (139.178.89.65:50576).
Jul 15 23:59:36.486488 sshd[5848]: Accepted publickey for core from 139.178.89.65 port 50576 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:59:36.491164 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:36.500568 systemd-logind[1507]: New session 20 of user core.
Jul 15 23:59:36.506257 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:59:36.824234 sshd[5850]: Connection closed by 139.178.89.65 port 50576
Jul 15 23:59:36.825291 sshd-session[5848]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:36.834038 systemd[1]: sshd@19-10.128.0.36:22-139.178.89.65:50576.service: Deactivated successfully.
Jul 15 23:59:36.838681 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:59:36.841114 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:59:36.843786 systemd-logind[1507]: Removed session 20.
Jul 15 23:59:38.436753 containerd[1533]: time="2025-07-15T23:59:38.436686213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\" id:\"083fa72e357164dbc5af5ba5c6924070ae8c1a06c0912893c858e36e55d8ec5d\" pid:5873 exited_at:{seconds:1752623978 nanos:436143949}"
Jul 15 23:59:41.884359 systemd[1]: Started sshd@20-10.128.0.36:22-139.178.89.65:45458.service - OpenSSH per-connection server daemon (139.178.89.65:45458).
Jul 15 23:59:42.219907 sshd[5891]: Accepted publickey for core from 139.178.89.65 port 45458 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:59:42.223229 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:42.236463 systemd-logind[1507]: New session 21 of user core.
Jul 15 23:59:42.241415 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:59:42.564177 sshd[5894]: Connection closed by 139.178.89.65 port 45458
Jul 15 23:59:42.566825 sshd-session[5891]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:42.578460 systemd[1]: sshd@20-10.128.0.36:22-139.178.89.65:45458.service: Deactivated successfully.
Jul 15 23:59:42.586013 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:59:42.588632 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:59:42.591944 systemd-logind[1507]: Removed session 21.
Jul 15 23:59:46.611640 containerd[1533]: time="2025-07-15T23:59:46.610857275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3420ae9e19aea67bf7d092418aca241c09af5a7aa1809ea2c408082c03bca1aa\" id:\"435dedd234b0eae420e8929e0dbd790ca7622535210401dafa48ffa3dc088abb\" pid:5918 exited_at:{seconds:1752623986 nanos:610470824}"
Jul 15 23:59:46.974432 containerd[1533]: time="2025-07-15T23:59:46.974381217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94\" id:\"eb181d8f64fd8848947c963217e7e97afc0291db2c2f5ad72dafaf1c46eaeb63\" pid:5942 exited_at:{seconds:1752623986 nanos:973812645}"
Jul 15 23:59:47.179327 update_engine[1514]: I20250715 23:59:47.179085 1514 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 15 23:59:47.179327 update_engine[1514]: I20250715 23:59:47.179323 1514 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 15 23:59:47.180256 update_engine[1514]: I20250715 23:59:47.179632 1514 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 15 23:59:47.180608 update_engine[1514]: I20250715 23:59:47.180541 1514 omaha_request_params.cc:62] Current group set to alpha
Jul 15 23:59:47.180998 update_engine[1514]: I20250715 23:59:47.180708 1514 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 15 23:59:47.180998 update_engine[1514]: I20250715 23:59:47.180734 1514 update_attempter.cc:643] Scheduling an action processor start.
Jul 15 23:59:47.180998 update_engine[1514]: I20250715 23:59:47.180757 1514 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 15 23:59:47.180998 update_engine[1514]: I20250715 23:59:47.180827 1514 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 15 23:59:47.180998 update_engine[1514]: I20250715 23:59:47.180948 1514 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 15 23:59:47.180998 update_engine[1514]: I20250715 23:59:47.180963 1514 omaha_request_action.cc:272] Request:
Jul 15 23:59:47.180998 update_engine[1514]:
Jul 15 23:59:47.180998 update_engine[1514]:
Jul 15 23:59:47.180998 update_engine[1514]:
Jul 15 23:59:47.180998 update_engine[1514]:
Jul 15 23:59:47.180998 update_engine[1514]:
Jul 15 23:59:47.180998 update_engine[1514]:
Jul 15 23:59:47.180998 update_engine[1514]:
Jul 15 23:59:47.180998 update_engine[1514]:
Jul 15 23:59:47.180998 update_engine[1514]: I20250715 23:59:47.180975 1514 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 15 23:59:47.182053 locksmithd[1592]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 15 23:59:47.182911 update_engine[1514]: I20250715 23:59:47.182835 1514 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 15 23:59:47.183381 update_engine[1514]: I20250715 23:59:47.183322 1514 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 15 23:59:47.223808 update_engine[1514]: E20250715 23:59:47.223738 1514 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 15 23:59:47.223961 update_engine[1514]: I20250715 23:59:47.223843 1514 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 15 23:59:47.625219 systemd[1]: Started sshd@21-10.128.0.36:22-139.178.89.65:45460.service - OpenSSH per-connection server daemon (139.178.89.65:45460).
Jul 15 23:59:47.951774 sshd[5952]: Accepted publickey for core from 139.178.89.65 port 45460 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:59:47.954504 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:59:47.963915 systemd-logind[1507]: New session 22 of user core.
Jul 15 23:59:47.971076 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:59:48.277407 sshd[5954]: Connection closed by 139.178.89.65 port 45460
Jul 15 23:59:48.278186 sshd-session[5952]: pam_unix(sshd:session): session closed for user core
Jul 15 23:59:48.286105 systemd-logind[1507]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:59:48.287087 systemd[1]: sshd@21-10.128.0.36:22-139.178.89.65:45460.service: Deactivated successfully.
Jul 15 23:59:48.292345 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:59:48.297990 systemd-logind[1507]: Removed session 22.
Jul 15 23:59:49.290134 containerd[1533]: time="2025-07-15T23:59:49.290078759Z" level=info msg="StopPodSandbox for \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\""
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.356 [WARNING][5975] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.357 [INFO][5975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.357 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" iface="eth0" netns=""
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.357 [INFO][5975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.357 [INFO][5975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.405 [INFO][5982] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.406 [INFO][5982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.408 [INFO][5982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.420 [WARNING][5982] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.420 [INFO][5982] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.423 [INFO][5982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:59:49.434663 containerd[1533]: 2025-07-15 23:59:49.428 [INFO][5975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:49.436088 containerd[1533]: time="2025-07-15T23:59:49.434706698Z" level=info msg="TearDown network for sandbox \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" successfully"
Jul 15 23:59:49.436088 containerd[1533]: time="2025-07-15T23:59:49.434734616Z" level=info msg="StopPodSandbox for \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" returns successfully"
Jul 15 23:59:49.436457 containerd[1533]: time="2025-07-15T23:59:49.436400396Z" level=info msg="RemovePodSandbox for \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\""
Jul 15 23:59:49.436457 containerd[1533]: time="2025-07-15T23:59:49.436440430Z" level=info msg="Forcibly stopping sandbox \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\""
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.537 [WARNING][5996] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.538 [INFO][5996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.539 [INFO][5996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" iface="eth0" netns=""
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.539 [INFO][5996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.539 [INFO][5996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.582 [INFO][6003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.582 [INFO][6003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.582 [INFO][6003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.593 [WARNING][6003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.593 [INFO][6003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" HandleID="k8s-pod-network.b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--66x54-eth0"
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.597 [INFO][6003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 15 23:59:49.607151 containerd[1533]: 2025-07-15 23:59:49.602 [INFO][5996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce"
Jul 15 23:59:49.608851 containerd[1533]: time="2025-07-15T23:59:49.607143545Z" level=info msg="TearDown network for sandbox \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" successfully"
Jul 15 23:59:49.611007 containerd[1533]: time="2025-07-15T23:59:49.610970238Z" level=info msg="Ensure that sandbox b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce in task-service has been cleanup successfully"
Jul 15 23:59:49.615538 containerd[1533]: time="2025-07-15T23:59:49.615496158Z" level=info msg="RemovePodSandbox \"b2f22e608363a09ba6f728ef4aec4d23df35ffbb7bf18bbf76546a9ef4ec32ce\" returns successfully"
Jul 15 23:59:49.617181 containerd[1533]: time="2025-07-15T23:59:49.617145739Z" level=info msg="StopPodSandbox for \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\""
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.756 [WARNING][6017] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0"
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.756 [INFO][6017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6"
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.756 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" iface="eth0" netns=""
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.756 [INFO][6017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6"
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.757 [INFO][6017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6"
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.815 [INFO][6026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0"
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.816 [INFO][6026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.817 [INFO][6026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.841 [WARNING][6026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.841 [INFO][6026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.843 [INFO][6026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:59:49.850278 containerd[1533]: 2025-07-15 23:59:49.846 [INFO][6017] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:49.851691 containerd[1533]: time="2025-07-15T23:59:49.851127719Z" level=info msg="TearDown network for sandbox \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" successfully" Jul 15 23:59:49.851691 containerd[1533]: time="2025-07-15T23:59:49.851180624Z" level=info msg="StopPodSandbox for \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" returns successfully" Jul 15 23:59:49.854097 containerd[1533]: time="2025-07-15T23:59:49.852217270Z" level=info msg="RemovePodSandbox for \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\"" Jul 15 23:59:49.854097 containerd[1533]: time="2025-07-15T23:59:49.852287623Z" level=info msg="Forcibly stopping sandbox \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\"" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:49.935 [WARNING][6040] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:49.936 [INFO][6040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:49.936 [INFO][6040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" iface="eth0" netns="" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:49.936 [INFO][6040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:49.936 [INFO][6040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:49.990 [INFO][6048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:49.990 [INFO][6048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:49.990 [INFO][6048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:50.009 [WARNING][6048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:50.009 [INFO][6048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" HandleID="k8s-pod-network.96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Workload="ci--4372--0--1--nightly--20250715--2100--2d55144e0797c85416ce-k8s-calico--apiserver--5844bd86db--9djkg-eth0" Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:50.011 [INFO][6048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:59:50.015899 containerd[1533]: 2025-07-15 23:59:50.013 [INFO][6040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6" Jul 15 23:59:50.017113 containerd[1533]: time="2025-07-15T23:59:50.016597330Z" level=info msg="TearDown network for sandbox \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" successfully" Jul 15 23:59:50.020169 containerd[1533]: time="2025-07-15T23:59:50.020111703Z" level=info msg="Ensure that sandbox 96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6 in task-service has been cleanup successfully" Jul 15 23:59:50.024688 containerd[1533]: time="2025-07-15T23:59:50.024655583Z" level=info msg="RemovePodSandbox \"96529a315725c36b65edf60fef9fa79744fcc260a6d0db3e1fed17154887bee6\" returns successfully" Jul 15 23:59:51.370157 containerd[1533]: time="2025-07-15T23:59:51.370101174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"737ab00ba150cf2eabc7b61ffef62842236973ef353b7982b118e4a51d4eca94\" id:\"e55889ae78b584ebad76e35a902cad10620a7a087e704417b5b3bcc897975cb8\" 
pid:6067 exited_at:{seconds:1752623991 nanos:369249071}" Jul 15 23:59:51.427213 containerd[1533]: time="2025-07-15T23:59:51.426859282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9ded5a13bf517f1d8c7cbd49fcdaa8536e3c686954ac20e0be2ad5b0a954ccf\" id:\"7688d14014bfa18030b83e13e5a28c6dc63732a85fa19209cfb03678353279d7\" pid:6085 exited_at:{seconds:1752623991 nanos:426153199}" Jul 15 23:59:53.336019 systemd[1]: Started sshd@22-10.128.0.36:22-139.178.89.65:35814.service - OpenSSH per-connection server daemon (139.178.89.65:35814). Jul 15 23:59:53.665119 sshd[6106]: Accepted publickey for core from 139.178.89.65 port 35814 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:59:53.668607 sshd-session[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:59:53.677957 systemd-logind[1507]: New session 23 of user core. Jul 15 23:59:53.687339 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 15 23:59:54.000926 sshd[6108]: Connection closed by 139.178.89.65 port 35814 Jul 15 23:59:54.002120 sshd-session[6106]: pam_unix(sshd:session): session closed for user core Jul 15 23:59:54.011721 systemd[1]: sshd@22-10.128.0.36:22-139.178.89.65:35814.service: Deactivated successfully. Jul 15 23:59:54.011775 systemd-logind[1507]: Session 23 logged out. Waiting for processes to exit. Jul 15 23:59:54.018029 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 23:59:54.025151 systemd-logind[1507]: Removed session 23.