Jul 15 23:54:09.209882 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025
Jul 15 23:54:09.209942 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:54:09.209962 kernel: BIOS-provided physical RAM map:
Jul 15 23:54:09.209977 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jul 15 23:54:09.209990 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jul 15 23:54:09.210003 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jul 15 23:54:09.210023 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jul 15 23:54:09.210037 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jul 15 23:54:09.210051 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable
Jul 15 23:54:09.210065 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data
Jul 15 23:54:09.210079 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable
Jul 15 23:54:09.210094 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Jul 15 23:54:09.210108 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jul 15 23:54:09.210122 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jul 15 23:54:09.210145 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jul 15 23:54:09.210161 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jul 15 23:54:09.210177 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jul 15 23:54:09.210193 kernel: NX (Execute Disable) protection: active
Jul 15 23:54:09.210209 kernel: APIC: Static calls initialized
Jul 15 23:54:09.210226 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:54:09.210242 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018
Jul 15 23:54:09.210257 kernel: random: crng init done
Jul 15 23:54:09.210277 kernel: secureboot: Secure boot disabled
Jul 15 23:54:09.210293 kernel: SMBIOS 2.4 present.
Jul 15 23:54:09.210307 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Jul 15 23:54:09.210332 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:54:09.210352 kernel: Hypervisor detected: KVM
Jul 15 23:54:09.210367 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 23:54:09.210383 kernel: kvm-clock: using sched offset of 15854099263 cycles
Jul 15 23:54:09.210400 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 23:54:09.210422 kernel: tsc: Detected 2299.998 MHz processor
Jul 15 23:54:09.210437 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 23:54:09.210458 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 23:54:09.210510 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jul 15 23:54:09.210526 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jul 15 23:54:09.210540 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 23:54:09.210554 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jul 15 23:54:09.210569 kernel: Using GB pages for direct mapping
Jul 15 23:54:09.210584 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:54:09.210599 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jul 15 23:54:09.210627 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jul 15 23:54:09.210644 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jul 15 23:54:09.210660 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jul 15 23:54:09.210676 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jul 15 23:54:09.210692 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212)
Jul 15 23:54:09.210709 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jul 15 23:54:09.210730 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jul 15 23:54:09.210747 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jul 15 23:54:09.210763 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jul 15 23:54:09.210780 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jul 15 23:54:09.210797 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jul 15 23:54:09.210815 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jul 15 23:54:09.210832 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jul 15 23:54:09.210857 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jul 15 23:54:09.210873 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jul 15 23:54:09.210895 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jul 15 23:54:09.210912 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jul 15 23:54:09.210929 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jul 15 23:54:09.210946 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jul 15 23:54:09.210962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 15 23:54:09.210978 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jul 15 23:54:09.210995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jul 15 23:54:09.211012 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff]
Jul 15 23:54:09.211028 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff]
Jul 15 23:54:09.211049 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff]
Jul 15 23:54:09.211066 kernel: Zone ranges:
Jul 15 23:54:09.211082 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 23:54:09.211097 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jul 15 23:54:09.211115 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Jul 15 23:54:09.211130 kernel:   Device   empty
Jul 15 23:54:09.211146 kernel: Movable zone start for each node
Jul 15 23:54:09.211162 kernel: Early memory node ranges
Jul 15 23:54:09.211180 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Jul 15 23:54:09.211207 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Jul 15 23:54:09.211223 kernel:   node   0: [mem 0x0000000000100000-0x00000000bd32afff]
Jul 15 23:54:09.211240 kernel:   node   0: [mem 0x00000000bd333000-0x00000000bf8ecfff]
Jul 15 23:54:09.211256 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jul 15 23:54:09.211273 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Jul 15 23:54:09.211291 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jul 15 23:54:09.211308 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 23:54:09.211326 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jul 15 23:54:09.211343 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jul 15 23:54:09.211360 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Jul 15 23:54:09.211380 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 15 23:54:09.211398 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jul 15 23:54:09.211415 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 15 23:54:09.211430 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 23:54:09.211445 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 23:54:09.211461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 23:54:09.213570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 23:54:09.213594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 23:54:09.213619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 23:54:09.213635 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 23:54:09.213652 kernel: CPU topo: Max. logical packages:   1
Jul 15 23:54:09.213667 kernel: CPU topo: Max. logical dies:       1
Jul 15 23:54:09.213683 kernel: CPU topo: Max. dies per package:   1
Jul 15 23:54:09.213699 kernel: CPU topo: Max. threads per core:   2
Jul 15 23:54:09.213717 kernel: CPU topo: Num. cores per package:     1
Jul 15 23:54:09.213734 kernel: CPU topo: Num. threads per package:   2
Jul 15 23:54:09.213751 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 15 23:54:09.213767 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 15 23:54:09.213786 kernel: Booting paravirtualized kernel on KVM
Jul 15 23:54:09.213804 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 23:54:09.213821 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 15 23:54:09.213838 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 15 23:54:09.213855 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 15 23:54:09.213871 kernel: pcpu-alloc: [0] 0 1
Jul 15 23:54:09.213888 kernel: kvm-guest: PV spinlocks enabled
Jul 15 23:54:09.213905 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 23:54:09.213924 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:54:09.213946 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:54:09.213963 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 15 23:54:09.213979 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:54:09.213997 kernel: Fallback order for Node 0: 0
Jul 15 23:54:09.214013 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1965138
Jul 15 23:54:09.214029 kernel: Policy zone: Normal
Jul 15 23:54:09.214045 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:54:09.214070 kernel: software IO TLB: area num 2.
Jul 15 23:54:09.214113 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 15 23:54:09.214131 kernel: Kernel/User page tables isolation: enabled
Jul 15 23:54:09.214146 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 15 23:54:09.214166 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 23:54:09.214189 kernel: Dynamic Preempt: voluntary
Jul 15 23:54:09.214212 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:54:09.214228 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:54:09.214245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 15 23:54:09.214264 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:54:09.214289 kernel: Rude variant of Tasks RCU enabled.
Jul 15 23:54:09.214306 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:54:09.214322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:54:09.214338 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 15 23:54:09.214355 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:54:09.214372 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:54:09.214390 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 15 23:54:09.214411 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 15 23:54:09.214427 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:54:09.214443 kernel: Console: colour dummy device 80x25
Jul 15 23:54:09.214461 kernel: printk: legacy console [ttyS0] enabled
Jul 15 23:54:09.214513 kernel: ACPI: Core revision 20240827
Jul 15 23:54:09.214532 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 23:54:09.214552 kernel: x2apic enabled
Jul 15 23:54:09.214571 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 23:54:09.214590 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jul 15 23:54:09.214609 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jul 15 23:54:09.214634 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jul 15 23:54:09.214653 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jul 15 23:54:09.214673 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jul 15 23:54:09.214692 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 23:54:09.214711 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jul 15 23:54:09.214730 kernel: Spectre V2 : Mitigation: IBRS
Jul 15 23:54:09.214749 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 23:54:09.214768 kernel: RETBleed: Mitigation: IBRS
Jul 15 23:54:09.214791 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 23:54:09.214810 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jul 15 23:54:09.214829 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 23:54:09.214848 kernel: MDS: Mitigation: Clear CPU buffers
Jul 15 23:54:09.214867 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 15 23:54:09.214886 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 15 23:54:09.214904 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 23:54:09.214924 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 23:54:09.214943 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 23:54:09.214966 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jul 15 23:54:09.214984 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 15 23:54:09.215002 kernel: Freeing SMP alternatives memory: 32K
Jul 15 23:54:09.215018 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:54:09.215037 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:54:09.215056 kernel: landlock: Up and running.
Jul 15 23:54:09.215075 kernel: SELinux:  Initializing.
Jul 15 23:54:09.215095 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 15 23:54:09.215120 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 15 23:54:09.215143 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jul 15 23:54:09.215162 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jul 15 23:54:09.215181 kernel: signal: max sigframe size: 1776
Jul 15 23:54:09.215201 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:54:09.215222 kernel: rcu: 	Max phase no-delay instances is 400.
Jul 15 23:54:09.215241 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:54:09.215260 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 15 23:54:09.215279 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:54:09.215297 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 23:54:09.215327 kernel: .... node  #0, CPUs:      #1
Jul 15 23:54:09.215348 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 15 23:54:09.215369 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 15 23:54:09.215388 kernel: smp: Brought up 1 node, 2 CPUs
Jul 15 23:54:09.215408 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jul 15 23:54:09.215428 kernel: Memory: 7564020K/7860552K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 290704K reserved, 0K cma-reserved)
Jul 15 23:54:09.215448 kernel: devtmpfs: initialized
Jul 15 23:54:09.217525 kernel: x86/mm: Memory block size: 128MB
Jul 15 23:54:09.217569 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jul 15 23:54:09.217589 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:54:09.217609 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 15 23:54:09.217628 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:54:09.217647 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:54:09.217666 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:54:09.217685 kernel: audit: type=2000 audit(1752623643.424:1): state=initialized audit_enabled=0 res=1
Jul 15 23:54:09.217703 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:54:09.217722 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 23:54:09.217744 kernel: cpuidle: using governor menu
Jul 15 23:54:09.217762 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:54:09.217780 kernel: dca service started, version 1.12.1
Jul 15 23:54:09.217799 kernel: PCI: Using configuration type 1 for base access
Jul 15 23:54:09.217818 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 23:54:09.217836 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:54:09.217855 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:54:09.217873 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:54:09.217892 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:54:09.217914 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:54:09.217933 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:54:09.217952 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:54:09.217970 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 15 23:54:09.217989 kernel: ACPI: Interpreter enabled
Jul 15 23:54:09.218007 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 23:54:09.218026 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 23:54:09.218045 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 23:54:09.218063 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 15 23:54:09.218086 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jul 15 23:54:09.218105 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:54:09.218406 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:54:09.218630 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 15 23:54:09.218818 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 15 23:54:09.218843 kernel: PCI host bridge to bus 0000:00
Jul 15 23:54:09.219020 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jul 15 23:54:09.219194 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jul 15 23:54:09.219357 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 23:54:09.222637 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jul 15 23:54:09.222850 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:54:09.223064 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:54:09.223270 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Jul 15 23:54:09.223512 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 15 23:54:09.223708 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 15 23:54:09.223902 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint
Jul 15 23:54:09.224091 kernel: pci 0000:00:03.0: BAR 0 [io  0xc040-0xc07f]
Jul 15 23:54:09.224288 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f]
Jul 15 23:54:09.225156 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 23:54:09.225381 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc03f]
Jul 15 23:54:09.225612 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f]
Jul 15 23:54:09.225812 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:54:09.226008 kernel: pci 0000:00:05.0: BAR 0 [io  0xc080-0xc09f]
Jul 15 23:54:09.226199 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f]
Jul 15 23:54:09.226223 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 23:54:09.226242 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 23:54:09.226261 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 23:54:09.226286 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 23:54:09.226305 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 15 23:54:09.226324 kernel: iommu: Default domain type: Translated
Jul 15 23:54:09.226342 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 23:54:09.226361 kernel: efivars: Registered efivars operations
Jul 15 23:54:09.226380 kernel: PCI: Using ACPI for IRQ routing
Jul 15 23:54:09.226398 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 23:54:09.226417 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jul 15 23:54:09.226435 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jul 15 23:54:09.226457 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff]
Jul 15 23:54:09.226510 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jul 15 23:54:09.226528 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jul 15 23:54:09.226544 kernel: vgaarb: loaded
Jul 15 23:54:09.226561 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 23:54:09.226576 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:54:09.226593 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:54:09.226611 kernel: pnp: PnP ACPI init
Jul 15 23:54:09.226627 kernel: pnp: PnP ACPI: found 7 devices
Jul 15 23:54:09.226649 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 23:54:09.226665 kernel: NET: Registered PF_INET protocol family
Jul 15 23:54:09.226682 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 15 23:54:09.226704 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 15 23:54:09.226720 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:54:09.226737 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:54:09.226754 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 15 23:54:09.226772 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 15 23:54:09.226790 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 15 23:54:09.226813 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 15 23:54:09.226832 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:54:09.226850 kernel: NET: Registered PF_XDP protocol family
Jul 15 23:54:09.227063 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jul 15 23:54:09.227245 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jul 15 23:54:09.227445 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 23:54:09.227742 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jul 15 23:54:09.227950 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 15 23:54:09.227981 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:54:09.228000 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 15 23:54:09.228018 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jul 15 23:54:09.228036 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 15 23:54:09.228054 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jul 15 23:54:09.228073 kernel: clocksource: Switched to clocksource tsc
Jul 15 23:54:09.228091 kernel: Initialise system trusted keyrings
Jul 15 23:54:09.228108 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 15 23:54:09.228131 kernel: Key type asymmetric registered
Jul 15 23:54:09.228148 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:54:09.228167 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 23:54:09.228187 kernel: io scheduler mq-deadline registered
Jul 15 23:54:09.228204 kernel: io scheduler kyber registered
Jul 15 23:54:09.228222 kernel: io scheduler bfq registered
Jul 15 23:54:09.228240 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 23:54:09.228258 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 15 23:54:09.228504 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jul 15 23:54:09.228537 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 15 23:54:09.228732 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jul 15 23:54:09.228756 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 15 23:54:09.228946 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jul 15 23:54:09.228969 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:54:09.228988 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 23:54:09.229007 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 15 23:54:09.229025 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jul 15 23:54:09.229044 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jul 15 23:54:09.229245 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jul 15 23:54:09.229271 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 23:54:09.229290 kernel: i8042: Warning: Keylock active
Jul 15 23:54:09.229309 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 23:54:09.229328 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 23:54:09.229542 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 15 23:54:09.229718 kernel: rtc_cmos 00:00: registered as rtc0
Jul 15 23:54:09.229897 kernel: rtc_cmos 00:00: setting system clock to 2025-07-15T23:54:08 UTC (1752623648)
Jul 15 23:54:09.230067 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 15 23:54:09.230090 kernel: intel_pstate: CPU model not supported
Jul 15 23:54:09.230110 kernel: pstore: Using crash dump compression: deflate
Jul 15 23:54:09.230129 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 15 23:54:09.230147 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:54:09.230166 kernel: Segment Routing with IPv6
Jul 15 23:54:09.230184 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:54:09.230204 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:54:09.230227 kernel: Key type dns_resolver registered
Jul 15 23:54:09.230246 kernel: IPI shorthand broadcast: enabled
Jul 15 23:54:09.230264 kernel: sched_clock: Marking stable (4205004807, 1007823648)->(5610829717, -398001262)
Jul 15 23:54:09.230283 kernel: registered taskstats version 1
Jul 15 23:54:09.230302 kernel: Loading compiled-in X.509 certificates
Jul 15 23:54:09.230326 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d'
Jul 15 23:54:09.230345 kernel: Demotion targets for Node 0: null
Jul 15 23:54:09.230364 kernel: Key type .fscrypt registered
Jul 15 23:54:09.230380 kernel: Key type fscrypt-provisioning registered
Jul 15 23:54:09.230403 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:54:09.230422 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 23:54:09.230441 kernel: ima: No architecture policies found
Jul 15 23:54:09.230460 kernel: clk: Disabling unused clocks
Jul 15 23:54:09.230510 kernel: Warning: unable to open an initial console.
Jul 15 23:54:09.230527 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jul 15 23:54:09.230544 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 23:54:09.230561 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 23:54:09.230583 kernel: Run /init as init process
Jul 15 23:54:09.230599 kernel:   with arguments:
Jul 15 23:54:09.230615 kernel:     /init
Jul 15 23:54:09.230633 kernel:   with environment:
Jul 15 23:54:09.230651 kernel:     HOME=/
Jul 15 23:54:09.230668 kernel:     TERM=linux
Jul 15 23:54:09.230684 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:54:09.230703 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:54:09.230728 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:54:09.230748 systemd[1]: Detected virtualization google.
Jul 15 23:54:09.230766 systemd[1]: Detected architecture x86-64.
Jul 15 23:54:09.230784 systemd[1]: Running in initrd.
Jul 15 23:54:09.230803 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:54:09.230823 systemd[1]: Hostname set to .
Jul 15 23:54:09.230841 systemd[1]: Initializing machine ID from random generator.
Jul 15 23:54:09.230860 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:54:09.230883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:54:09.230918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:54:09.230943 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:54:09.230962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:54:09.230982 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:54:09.231008 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:54:09.231030 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:54:09.231051 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:54:09.231078 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:54:09.231099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:54:09.231119 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:54:09.231139 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:54:09.231158 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:54:09.231180 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:54:09.231204 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:54:09.231224 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:54:09.231245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:54:09.231266 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:54:09.231285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:54:09.231305 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:54:09.231324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:54:09.231347 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:54:09.231367 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:54:09.231385 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:54:09.231405 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:54:09.231425 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:54:09.231445 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:54:09.232010 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:54:09.232043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:54:09.232065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:54:09.232092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:54:09.232113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:54:09.232134 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:54:09.232196 systemd-journald[207]: Collecting audit messages is disabled.
Jul 15 23:54:09.232246 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:54:09.232269 systemd-journald[207]: Journal started
Jul 15 23:54:09.232315 systemd-journald[207]: Runtime Journal (/run/log/journal/3e723952d0404d9eb4e99c317bac165b) is 8M, max 148.9M, 140.9M free.
Jul 15 23:54:09.202532 systemd-modules-load[208]: Inserted module 'overlay'
Jul 15 23:54:09.235643 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:54:09.242017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:54:09.253679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:54:09.264208 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:54:09.270626 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:54:09.272736 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:54:09.277105 kernel: Bridge firewalling registered
Jul 15 23:54:09.276667 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:54:09.278640 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 15 23:54:09.281946 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:54:09.288435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:54:09.312875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:54:09.318367 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:54:09.318436 systemd-tmpfiles[223]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 23:54:09.326790 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:54:09.342020 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:54:09.348850 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 23:54:09.353059 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:54:09.391189 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 15 23:54:09.424256 systemd-resolved[246]: Positive Trust Anchors: Jul 15 23:54:09.424278 systemd-resolved[246]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:54:09.424358 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:54:09.432478 systemd-resolved[246]: Defaulting to hostname 'linux'. Jul 15 23:54:09.436026 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:54:09.451772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:54:09.520606 kernel: SCSI subsystem initialized Jul 15 23:54:09.534518 kernel: Loading iSCSI transport class v2.0-870. Jul 15 23:54:09.546582 kernel: iscsi: registered transport (tcp) Jul 15 23:54:09.573958 kernel: iscsi: registered transport (qla4xxx) Jul 15 23:54:09.574046 kernel: QLogic iSCSI HBA Driver Jul 15 23:54:09.599210 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:54:09.618856 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:54:09.621754 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:54:09.689123 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 23:54:09.692299 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 15 23:54:09.756512 kernel: raid6: avx2x4 gen() 17898 MB/s Jul 15 23:54:09.773508 kernel: raid6: avx2x2 gen() 17789 MB/s Jul 15 23:54:09.791022 kernel: raid6: avx2x1 gen() 13823 MB/s Jul 15 23:54:09.791113 kernel: raid6: using algorithm avx2x4 gen() 17898 MB/s Jul 15 23:54:09.809088 kernel: raid6: .... xor() 7375 MB/s, rmw enabled Jul 15 23:54:09.809168 kernel: raid6: using avx2x2 recovery algorithm Jul 15 23:54:09.832521 kernel: xor: automatically using best checksumming function avx Jul 15 23:54:10.022596 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 23:54:10.032411 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:54:10.036063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:54:10.069448 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jul 15 23:54:10.079593 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:54:10.086919 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 23:54:10.123600 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jul 15 23:54:10.158413 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:54:10.165560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 23:54:10.261512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:54:10.270068 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 15 23:54:10.411158 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jul 15 23:54:10.570505 kernel: scsi host0: Virtio SCSI HBA Jul 15 23:54:10.590621 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 15 23:54:10.591537 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 23:54:10.596494 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 23:54:10.625504 kernel: AES CTR mode by8 optimization enabled Jul 15 23:54:10.644383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 23:54:10.644646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:54:10.648046 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:54:10.668859 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 15 23:54:10.669267 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 15 23:54:10.673591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:54:10.686626 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 15 23:54:10.686956 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 15 23:54:10.687196 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 15 23:54:10.683433 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 23:54:10.701197 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 23:54:10.701291 kernel: GPT:17805311 != 25165823 Jul 15 23:54:10.701316 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 23:54:10.701338 kernel: GPT:17805311 != 25165823 Jul 15 23:54:10.701358 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 15 23:54:10.701378 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:54:10.705345 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 15 23:54:10.724017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:54:10.818519 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jul 15 23:54:10.819576 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 23:54:10.841410 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jul 15 23:54:10.856410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 15 23:54:10.867521 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jul 15 23:54:10.867835 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jul 15 23:54:10.876765 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:54:10.884771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:54:10.884971 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:54:10.895022 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:54:10.911710 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:54:10.926184 disk-uuid[609]: Primary Header is updated. Jul 15 23:54:10.926184 disk-uuid[609]: Secondary Entries is updated. Jul 15 23:54:10.926184 disk-uuid[609]: Secondary Header is updated. Jul 15 23:54:10.942572 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 15 23:54:10.951808 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:54:10.980497 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:54:11.997212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 15 23:54:11.997323 disk-uuid[610]: The operation has completed successfully. Jul 15 23:54:12.086983 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:54:12.087181 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:54:12.150676 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:54:12.185207 sh[631]: Success Jul 15 23:54:12.212842 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:54:12.212994 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:54:12.214875 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:54:12.229511 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 15 23:54:12.353696 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:54:12.359601 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 23:54:12.377692 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 23:54:12.404538 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:54:12.407508 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (643) Jul 15 23:54:12.411755 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e Jul 15 23:54:12.411887 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:54:12.411914 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:54:12.452004 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jul 15 23:54:12.453390 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:54:12.460054 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 23:54:12.461942 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:54:12.474344 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 23:54:12.522676 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (675) Jul 15 23:54:12.528099 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:54:12.528216 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:54:12.528244 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:54:12.543527 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:54:12.545990 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:54:12.555736 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 23:54:12.647135 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:54:12.654606 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:54:12.773799 systemd-networkd[812]: lo: Link UP Jul 15 23:54:12.773814 systemd-networkd[812]: lo: Gained carrier Jul 15 23:54:12.777683 systemd-networkd[812]: Enumeration completed Jul 15 23:54:12.777877 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:54:12.780571 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:54:12.780580 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 23:54:12.786953 systemd-networkd[812]: eth0: Link UP Jul 15 23:54:12.786962 systemd-networkd[812]: eth0: Gained carrier Jul 15 23:54:12.786989 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:54:12.789434 systemd[1]: Reached target network.target - Network. Jul 15 23:54:12.808681 systemd-networkd[812]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e.c.flatcar-212911.internal' to 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:54:12.808710 systemd-networkd[812]: eth0: DHCPv4 address 10.128.0.4/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 15 23:54:12.861650 ignition[743]: Ignition 2.21.0 Jul 15 23:54:12.861671 ignition[743]: Stage: fetch-offline Jul 15 23:54:12.866434 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:54:12.861774 ignition[743]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:54:12.874684 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 15 23:54:12.861790 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:54:12.861994 ignition[743]: parsed url from cmdline: "" Jul 15 23:54:12.862001 ignition[743]: no config URL provided Jul 15 23:54:12.862012 ignition[743]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:54:12.862026 ignition[743]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:54:12.862044 ignition[743]: failed to fetch config: resource requires networking Jul 15 23:54:12.863623 ignition[743]: Ignition finished successfully Jul 15 23:54:12.918027 ignition[822]: Ignition 2.21.0 Jul 15 23:54:12.918046 ignition[822]: Stage: fetch Jul 15 23:54:12.918346 ignition[822]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:54:12.918364 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:54:12.918557 ignition[822]: parsed url from cmdline: "" Jul 15 23:54:12.918565 ignition[822]: no config URL provided Jul 15 23:54:12.918576 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:54:12.918592 ignition[822]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:54:12.937367 unknown[822]: fetched base config from "system" Jul 15 23:54:12.918642 ignition[822]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 15 23:54:12.937381 unknown[822]: fetched base config from "system" Jul 15 23:54:12.925701 ignition[822]: GET result: OK Jul 15 23:54:12.937392 unknown[822]: fetched user config from "gcp" Jul 15 23:54:12.925897 ignition[822]: parsing config with SHA512: 8492c1af3dea3a3b3c8f0a441f00285351fdbcb531f1c75ae54c15c5bc02c9bc4bd783a5c88dd48507bc4455195f05ee2b0a17e29fad1ac57137bbae5c7bf644 Jul 15 23:54:12.941405 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 15 23:54:12.938204 ignition[822]: fetch: fetch complete Jul 15 23:54:12.952141 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 15 23:54:12.938216 ignition[822]: fetch: fetch passed Jul 15 23:54:12.938390 ignition[822]: Ignition finished successfully Jul 15 23:54:12.999656 ignition[828]: Ignition 2.21.0 Jul 15 23:54:12.999674 ignition[828]: Stage: kargs Jul 15 23:54:13.003259 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:54:12.999916 ignition[828]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:54:13.009025 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 23:54:12.999935 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:54:13.001071 ignition[828]: kargs: kargs passed Jul 15 23:54:13.001143 ignition[828]: Ignition finished successfully Jul 15 23:54:13.046995 ignition[835]: Ignition 2.21.0 Jul 15 23:54:13.047013 ignition[835]: Stage: disks Jul 15 23:54:13.047315 ignition[835]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:54:13.052279 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:54:13.047334 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:54:13.057031 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:54:13.049885 ignition[835]: disks: disks passed Jul 15 23:54:13.061586 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:54:13.050088 ignition[835]: Ignition finished successfully Jul 15 23:54:13.068837 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:54:13.072939 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:54:13.081689 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:54:13.087390 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 23:54:13.137416 systemd-fsck[845]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 15 23:54:13.147052 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jul 15 23:54:13.156444 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 23:54:13.372501 kernel: EXT4-fs (sda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none. Jul 15 23:54:13.374216 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:54:13.378328 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:54:13.388072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:54:13.404501 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:54:13.408991 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 23:54:13.409086 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:54:13.409154 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:54:13.433630 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (853) Jul 15 23:54:13.433672 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:54:13.433699 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:54:13.433724 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:54:13.434697 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:54:13.437127 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:54:13.447816 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 23:54:13.574298 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:54:13.584571 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:54:13.591055 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:54:13.598808 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:54:13.763973 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 23:54:13.766794 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 23:54:13.773561 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 23:54:13.790637 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 23:54:13.794647 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:54:13.830402 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 23:54:13.833731 ignition[965]: INFO : Ignition 2.21.0 Jul 15 23:54:13.833731 ignition[965]: INFO : Stage: mount Jul 15 23:54:13.833731 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:54:13.833731 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:54:13.850053 ignition[965]: INFO : mount: mount passed Jul 15 23:54:13.850053 ignition[965]: INFO : Ignition finished successfully Jul 15 23:54:13.839680 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 23:54:13.847446 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 23:54:13.875643 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 15 23:54:13.909535 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (977) Jul 15 23:54:13.912674 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 15 23:54:13.912775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 23:54:13.912801 kernel: BTRFS info (device sda6): using free-space-tree Jul 15 23:54:13.924366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:54:13.968598 ignition[994]: INFO : Ignition 2.21.0 Jul 15 23:54:13.968598 ignition[994]: INFO : Stage: files Jul 15 23:54:13.974673 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:54:13.974673 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 15 23:54:13.974673 ignition[994]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:54:13.974673 ignition[994]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:54:13.974673 ignition[994]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:54:13.991670 ignition[994]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:54:13.991670 ignition[994]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:54:13.991670 ignition[994]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:54:13.991670 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 23:54:13.991670 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 15 23:54:13.980533 unknown[994]: wrote ssh authorized keys file for user: core Jul 15 23:54:14.037744 systemd-networkd[812]: eth0: Gained IPv6LL Jul 15 23:54:14.115682 ignition[994]: 
INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 23:54:14.358539 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 15 23:54:14.358539 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 
23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 23:54:14.367678 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 15 23:54:14.847681 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 15 23:54:15.293375 ignition[994]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 15 23:54:15.293375 ignition[994]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 15 23:54:15.304823 ignition[994]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:54:15.304823 ignition[994]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:54:15.304823 ignition[994]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 15 23:54:15.304823 ignition[994]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 15 23:54:15.304823 ignition[994]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:54:15.304823 ignition[994]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:54:15.304823 ignition[994]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 
23:54:15.304823 ignition[994]: INFO : files: files passed Jul 15 23:54:15.304823 ignition[994]: INFO : Ignition finished successfully Jul 15 23:54:15.302671 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 23:54:15.309349 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:54:15.331064 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 23:54:15.341154 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:54:15.341323 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 23:54:15.376566 initrd-setup-root-after-ignition[1024]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:54:15.381705 initrd-setup-root-after-ignition[1024]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:54:15.381975 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:54:15.381707 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:54:15.388018 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 23:54:15.396172 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 23:54:15.469545 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 23:54:15.469731 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 23:54:15.476146 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 23:54:15.478937 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 23:54:15.483988 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 23:54:15.485747 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jul 15 23:54:15.524928 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:54:15.533647 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:54:15.563743 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:54:15.568014 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:54:15.571189 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 23:54:15.577401 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 23:54:15.578395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:54:15.590489 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 23:54:15.590956 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 23:54:15.598872 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 23:54:15.605869 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:54:15.611079 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 23:54:15.618933 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:54:15.626938 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 23:54:15.632898 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:54:15.639958 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 23:54:15.644234 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 23:54:15.649151 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 23:54:15.653978 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 23:54:15.654328 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:54:15.664191 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:54:15.668307 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:54:15.679788 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 23:54:15.679985 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:54:15.685852 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 23:54:15.686200 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:54:15.696134 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 23:54:15.696445 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:54:15.699139 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 23:54:15.699375 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 23:54:15.705928 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 23:54:15.715689 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:54:15.715979 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:54:15.727914 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 23:54:15.731965 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 23:54:15.732680 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:54:15.741761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 23:54:15.742214 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:54:15.755868 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 23:54:15.756025 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 23:54:15.762642 ignition[1048]: INFO : Ignition 2.21.0
Jul 15 23:54:15.762642 ignition[1048]: INFO : Stage: umount
Jul 15 23:54:15.762642 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:54:15.762642 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jul 15 23:54:15.781648 ignition[1048]: INFO : umount: umount passed
Jul 15 23:54:15.781648 ignition[1048]: INFO : Ignition finished successfully
Jul 15 23:54:15.771497 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 23:54:15.771672 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 23:54:15.777385 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 23:54:15.778678 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 23:54:15.778832 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 23:54:15.783783 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 23:54:15.783879 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 23:54:15.788934 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 15 23:54:15.789116 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 15 23:54:15.795719 systemd[1]: Stopped target network.target - Network.
Jul 15 23:54:15.799798 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 23:54:15.799906 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:54:15.803016 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 23:54:15.808248 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 23:54:15.808727 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:54:15.812885 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 23:54:15.818926 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 23:54:15.824062 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 23:54:15.824137 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:54:15.828153 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 23:54:15.828327 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:54:15.833055 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 23:54:15.833161 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 23:54:15.837183 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 23:54:15.837397 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 23:54:15.842510 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 23:54:15.851278 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 23:54:15.860578 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 23:54:15.860740 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 23:54:15.872379 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 23:54:15.872718 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 23:54:15.872854 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 23:54:15.880741 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 23:54:15.881065 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 23:54:15.881189 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 23:54:15.886837 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 23:54:15.890936 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 23:54:15.891030 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:54:15.895018 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 23:54:15.895149 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 23:54:15.904213 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 23:54:15.911202 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 23:54:15.911587 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:54:15.918918 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:54:15.919023 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:54:15.925927 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:54:15.926023 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:54:15.931993 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:54:15.932101 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:54:15.939111 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:54:15.948369 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:54:15.948463 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:54:15.957380 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:54:15.957616 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:54:15.958233 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:54:15.958310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:54:15.962263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:54:15.962483 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:54:15.968919 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:54:15.969008 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:54:15.980793 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:54:15.980903 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:54:15.988347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:54:15.988597 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:54:15.999536 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:54:16.006657 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:54:16.007034 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:54:16.015087 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:54:16.015209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:54:16.029089 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:54:16.117669 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:54:16.029195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:54:16.038168 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 23:54:16.038259 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 23:54:16.038309 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:54:16.038986 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:54:16.039121 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:54:16.044309 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:54:16.044428 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:54:16.050036 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:54:16.058489 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:54:16.084073 systemd[1]: Switching root.
Jul 15 23:54:16.155659 systemd-journald[207]: Journal stopped
Jul 15 23:54:18.311518 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:54:18.311588 kernel: SELinux: policy capability open_perms=1
Jul 15 23:54:18.311610 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:54:18.311626 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:54:18.311645 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:54:18.311663 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:54:18.311690 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:54:18.311710 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:54:18.311870 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:54:18.311891 kernel: audit: type=1403 audit(1752623656.731:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:54:18.311913 systemd[1]: Successfully loaded SELinux policy in 57.183ms.
Jul 15 23:54:18.311935 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.280ms.
Jul 15 23:54:18.312078 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:54:18.312105 systemd[1]: Detected virtualization google.
Jul 15 23:54:18.312126 systemd[1]: Detected architecture x86-64.
Jul 15 23:54:18.312145 systemd[1]: Detected first boot.
Jul 15 23:54:18.312168 systemd[1]: Initializing machine ID from random generator.
Jul 15 23:54:18.312189 zram_generator::config[1091]: No configuration found.
Jul 15 23:54:18.312216 kernel: Guest personality initialized and is inactive
Jul 15 23:54:18.312236 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 23:54:18.312255 kernel: Initialized host personality
Jul 15 23:54:18.312275 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:54:18.312296 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:54:18.312318 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:54:18.312338 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:54:18.312363 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:54:18.312384 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:54:18.312406 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:54:18.312427 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:54:18.312450 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:54:18.314528 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:54:18.314572 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:54:18.314604 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:54:18.314628 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:54:18.314647 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:54:18.314667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:54:18.314689 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:54:18.314708 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:54:18.314727 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:54:18.314747 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:54:18.314791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:54:18.314818 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 23:54:18.314840 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:54:18.314862 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:54:18.314883 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:54:18.314905 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:54:18.314926 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:54:18.314947 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:54:18.314974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:54:18.314996 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:54:18.315019 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:54:18.315042 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:54:18.315069 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:54:18.315094 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:54:18.315118 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:54:18.315148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:54:18.315170 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:54:18.315192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:54:18.315213 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:54:18.315237 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:54:18.315260 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:54:18.315286 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:54:18.315309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:54:18.315331 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:54:18.315353 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:54:18.315376 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:54:18.315400 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:54:18.315422 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:54:18.315444 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:54:18.315515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:54:18.315540 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:54:18.315562 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:54:18.315584 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:54:18.315606 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:54:18.315628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:54:18.315651 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:54:18.315673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:54:18.315697 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:54:18.315725 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:54:18.315747 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:54:18.315769 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:54:18.315800 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:54:18.315824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:54:18.315846 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:54:18.315868 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:54:18.315890 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:54:18.315917 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:54:18.315939 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:54:18.315962 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:54:18.315985 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:54:18.316008 systemd[1]: Stopped verity-setup.service.
Jul 15 23:54:18.316030 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:54:18.316053 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:54:18.316075 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:54:18.316102 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:54:18.316123 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:54:18.316146 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:54:18.316169 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:54:18.316191 kernel: ACPI: bus type drm_connector registered
Jul 15 23:54:18.316213 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:54:18.316235 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:54:18.316258 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:54:18.316280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:54:18.316309 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:54:18.316330 kernel: loop: module loaded
Jul 15 23:54:18.316405 systemd-journald[1162]: Collecting audit messages is disabled.
Jul 15 23:54:18.316454 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:54:18.316499 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:54:18.316523 kernel: fuse: init (API version 7.41)
Jul 15 23:54:18.316544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:54:18.316568 systemd-journald[1162]: Journal started
Jul 15 23:54:18.316612 systemd-journald[1162]: Runtime Journal (/run/log/journal/520cd1c3d66c4d25bbf9a0d1a36a5083) is 8M, max 148.9M, 140.9M free.
Jul 15 23:54:17.721323 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:54:17.736632 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 15 23:54:17.737264 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:54:18.322982 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:54:18.327619 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:54:18.334983 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:54:18.335312 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:54:18.338140 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:54:18.338446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:54:18.344573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:54:18.348161 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:54:18.356782 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:54:18.360383 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:54:18.364342 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:54:18.385429 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:54:18.392637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:54:18.399621 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:54:18.402636 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:54:18.402705 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:54:18.409674 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:54:18.427831 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:54:18.431867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:54:18.436716 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:54:18.443313 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:54:18.447841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:54:18.451362 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:54:18.454679 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:54:18.457419 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:54:18.469635 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:54:18.478871 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:54:18.484087 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:54:18.484454 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:54:18.524898 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:54:18.529184 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:54:18.547993 kernel: loop0: detected capacity change from 0 to 224512
Jul 15 23:54:18.540397 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:54:18.550441 systemd-journald[1162]: Time spent on flushing to /var/log/journal/520cd1c3d66c4d25bbf9a0d1a36a5083 is 134.842ms for 957 entries.
Jul 15 23:54:18.550441 systemd-journald[1162]: System Journal (/var/log/journal/520cd1c3d66c4d25bbf9a0d1a36a5083) is 8M, max 584.8M, 576.8M free.
Jul 15 23:54:18.716113 systemd-journald[1162]: Received client request to flush runtime journal.
Jul 15 23:54:18.716190 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:54:18.716221 kernel: loop1: detected capacity change from 0 to 146240
Jul 15 23:54:18.634542 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:54:18.662903 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:54:18.674293 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:54:18.722062 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:54:18.729323 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:54:18.738690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:54:18.742290 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:54:18.807202 systemd-tmpfiles[1231]: ACLs are not supported, ignoring.
Jul 15 23:54:18.808242 systemd-tmpfiles[1231]: ACLs are not supported, ignoring.
Jul 15 23:54:18.815589 kernel: loop2: detected capacity change from 0 to 113872
Jul 15 23:54:18.821318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:54:18.896529 kernel: loop3: detected capacity change from 0 to 52072
Jul 15 23:54:18.993545 kernel: loop4: detected capacity change from 0 to 224512
Jul 15 23:54:19.038538 kernel: loop5: detected capacity change from 0 to 146240
Jul 15 23:54:19.094518 kernel: loop6: detected capacity change from 0 to 113872
Jul 15 23:54:19.153534 kernel: loop7: detected capacity change from 0 to 52072
Jul 15 23:54:19.187214 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Jul 15 23:54:19.189554 (sd-merge)[1237]: Merged extensions into '/usr'.
Jul 15 23:54:19.208703 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:54:19.208882 systemd[1]: Reloading...
Jul 15 23:54:19.365508 zram_generator::config[1259]: No configuration found.
Jul 15 23:54:19.620412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:54:19.772505 ldconfig[1208]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:54:19.855826 systemd[1]: Reloading finished in 646 ms.
Jul 15 23:54:19.872655 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:54:19.877205 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:54:19.893712 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:54:19.899708 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:54:19.940768 systemd[1]: Reload requested from client PID 1303 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:54:19.940938 systemd[1]: Reloading...
Jul 15 23:54:19.983733 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:54:19.985077 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:54:19.986726 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:54:19.987381 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:54:19.991619 systemd-tmpfiles[1304]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:54:19.993225 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Jul 15 23:54:19.994161 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Jul 15 23:54:20.009016 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:54:20.010621 systemd-tmpfiles[1304]: Skipping /boot
Jul 15 23:54:20.056815 systemd-tmpfiles[1304]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:54:20.057016 systemd-tmpfiles[1304]: Skipping /boot
Jul 15 23:54:20.140516 zram_generator::config[1334]: No configuration found.
Jul 15 23:54:20.262753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:54:20.382590 systemd[1]: Reloading finished in 440 ms.
Jul 15 23:54:20.405251 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 23:54:20.431345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:54:20.452306 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:54:20.469448 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 23:54:20.489919 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 23:54:20.505209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:54:20.519109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:54:20.532542 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 23:54:20.553413 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:54:20.554662 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:54:20.558240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:54:20.572909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:54:20.589196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:54:20.598834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:54:20.599874 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:54:20.605889 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 23:54:20.614594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:54:20.627903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:54:20.628373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:54:20.628824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:54:20.629068 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:54:20.629334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:54:20.638857 systemd-udevd[1391]: Using default interface naming scheme 'v255'.
Jul 15 23:54:20.647719 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 23:54:20.649859 augenrules[1402]: No rules
Jul 15 23:54:20.659515 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:54:20.659906 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:54:20.671583 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 23:54:20.685260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:54:20.686263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:54:20.697522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:54:20.698643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:54:20.711303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:54:20.711665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:54:20.722083 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 23:54:20.745270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:54:20.758741 systemd[1]: Finished ensure-sysext.service.
Jul 15 23:54:20.767330 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 23:54:20.800911 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:54:20.801268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:54:20.805668 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:54:20.823614 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 15 23:54:20.831784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:54:20.831864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:54:20.839041 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:54:20.848687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:54:20.848803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:54:20.848860 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:54:20.862459 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 23:54:20.864673 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 23:54:20.864751 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:54:20.876154 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:54:20.878673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:54:20.953498 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 23:54:20.971109 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 15 23:54:20.990005 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Jul 15 23:54:21.063249 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Jul 15 23:54:21.063340 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Jul 15 23:54:21.076738 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 15 23:54:21.114098 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Jul 15 23:54:21.268516 kernel: mousedev: PS/2 mouse device common for all mice
Jul 15 23:54:21.283022 systemd-resolved[1384]: Positive Trust Anchors:
Jul 15 23:54:21.283049 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:54:21.283114 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:54:21.298850 systemd-resolved[1384]: Defaulting to hostname 'linux'.
Jul 15 23:54:21.304818 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:54:21.311605 systemd-networkd[1451]: lo: Link UP
Jul 15 23:54:21.312134 systemd-networkd[1451]: lo: Gained carrier
Jul 15 23:54:21.316395 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:54:21.317267 systemd-networkd[1451]: Enumeration completed
Jul 15 23:54:21.318046 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:54:21.318054 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:54:21.319169 systemd-networkd[1451]: eth0: Link UP
Jul 15 23:54:21.319488 systemd-networkd[1451]: eth0: Gained carrier
Jul 15 23:54:21.319517 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:54:21.332534 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 15 23:54:21.336719 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:54:21.339643 systemd-networkd[1451]: eth0: Overlong DHCP hostname received, shortened from 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e.c.flatcar-212911.internal' to 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e'
Jul 15 23:54:21.339675 systemd-networkd[1451]: eth0: DHCPv4 address 10.128.0.4/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jul 15 23:54:21.359921 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jul 15 23:54:21.360418 kernel: ACPI: button: Power Button [PWRF]
Jul 15 23:54:21.360860 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 23:54:21.372727 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 23:54:21.383670 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 15 23:54:21.395229 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 23:54:21.410539 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jul 15 23:54:21.410695 kernel: ACPI: button: Sleep Button [SLPF]
Jul 15 23:54:21.416920 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 23:54:21.428701 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 23:54:21.439721 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 23:54:21.439795 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:54:21.449340 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:54:21.460031 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 23:54:21.471946 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 23:54:21.486116 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 23:54:21.497060 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 23:54:21.508689 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 23:54:21.524773 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 23:54:21.535525 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 23:54:21.546764 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:54:21.556002 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 23:54:21.584506 kernel: EDAC MC: Ver: 3.0.0
Jul 15 23:54:21.604445 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jul 15 23:54:21.627786 systemd[1]: Reached target network.target - Network.
Jul 15 23:54:21.635668 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:54:21.644679 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:54:21.652774 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:54:21.652835 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:54:21.655769 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 23:54:21.667983 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 15 23:54:21.672826 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 23:54:21.688663 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 23:54:21.702370 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 23:54:21.729615 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 23:54:21.743698 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 23:54:21.747641 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 15 23:54:21.766821 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 23:54:21.772219 jq[1501]: false
Jul 15 23:54:21.778799 systemd[1]: Started ntpd.service - Network Time Service.
Jul 15 23:54:21.791735 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 23:54:21.806858 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 23:54:21.829433 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 23:54:21.835739 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Refreshing passwd entry cache
Jul 15 23:54:21.833895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 23:54:21.833109 oslogin_cache_refresh[1503]: Refreshing passwd entry cache
Jul 15 23:54:21.863834 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 23:54:21.876807 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 23:54:21.882556 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Failure getting users, quitting
Jul 15 23:54:21.882556 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 23:54:21.882556 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Refreshing group entry cache
Jul 15 23:54:21.880619 oslogin_cache_refresh[1503]: Failure getting users, quitting
Jul 15 23:54:21.880653 oslogin_cache_refresh[1503]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 15 23:54:21.880726 oslogin_cache_refresh[1503]: Refreshing group entry cache
Jul 15 23:54:21.887183 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Failure getting groups, quitting
Jul 15 23:54:21.889730 oslogin_cache_refresh[1503]: Failure getting groups, quitting
Jul 15 23:54:21.895635 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 23:54:21.893957 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 23:54:21.891602 oslogin_cache_refresh[1503]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 15 23:54:21.907514 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Jul 15 23:54:21.908430 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 23:54:21.912383 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 23:54:21.923555 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 23:54:21.929206 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 23:54:21.929791 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 23:54:21.930123 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 23:54:21.932052 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 15 23:54:21.933099 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 15 23:54:21.936286 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 23:54:21.936712 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 23:54:21.962961 jq[1527]: true
Jul 15 23:54:21.981660 coreos-metadata[1498]: Jul 15 23:54:21.967 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Jul 15 23:54:21.982094 extend-filesystems[1502]: Found /dev/sda6
Jul 15 23:54:21.990704 coreos-metadata[1498]: Jul 15 23:54:21.988 INFO Fetch successful
Jul 15 23:54:21.990704 coreos-metadata[1498]: Jul 15 23:54:21.988 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Jul 15 23:54:22.005970 jq[1536]: true
Jul 15 23:54:22.006222 coreos-metadata[1498]: Jul 15 23:54:21.997 INFO Fetch successful
Jul 15 23:54:22.006222 coreos-metadata[1498]: Jul 15 23:54:21.998 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Jul 15 23:54:22.006222 coreos-metadata[1498]: Jul 15 23:54:21.998 INFO Fetch successful
Jul 15 23:54:22.006222 coreos-metadata[1498]: Jul 15 23:54:21.998 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Jul 15 23:54:22.006222 coreos-metadata[1498]: Jul 15 23:54:21.999 INFO Fetch successful
Jul 15 23:54:22.019827 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 23:54:22.023971 extend-filesystems[1502]: Found /dev/sda9
Jul 15 23:54:22.049879 extend-filesystems[1502]: Checking size of /dev/sda9
Jul 15 23:54:22.047023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:54:22.061659 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 23:54:22.062569 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 23:54:22.106863 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 23:54:22.139525 extend-filesystems[1502]: Resized partition /dev/sda9
Jul 15 23:54:22.147665 extend-filesystems[1566]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 23:54:22.155925 tar[1529]: linux-amd64/LICENSE
Jul 15 23:54:22.155925 tar[1529]: linux-amd64/helm
Jul 15 23:54:22.190803 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Jul 15 23:54:22.205534 update_engine[1521]: I20250715 23:54:22.202013 1521 main.cc:92] Flatcar Update Engine starting
Jul 15 23:54:22.258103 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Jul 15 23:54:22.270144 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 15 23:54:22.280173 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 23:54:22.285405 bash[1572]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:54:22.285101 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 23:54:22.285812 extend-filesystems[1566]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 15 23:54:22.285812 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 2
Jul 15 23:54:22.285812 extend-filesystems[1566]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Jul 15 23:54:22.287602 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 23:54:22.342118 extend-filesystems[1502]: Resized filesystem in /dev/sda9
Jul 15 23:54:22.309139 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 23:54:22.338556 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 23:54:22.370600 systemd[1]: Starting sshkeys.service...
Jul 15 23:54:22.486176 systemd-networkd[1451]: eth0: Gained IPv6LL
Jul 15 23:54:22.504779 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 23:54:22.505106 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 23:54:22.510823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:54:22.520940 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 23:54:22.527925 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Jul 15 23:54:22.621081 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 15 23:54:22.631777 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 15 23:54:22.647114 dbus-daemon[1499]: [system] SELinux support is enabled
Jul 15 23:54:22.647639 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 23:54:22.655559 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 23:54:22.655800 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 23:54:22.656048 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 23:54:22.656252 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 23:54:22.720876 dbus-daemon[1499]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1451 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 15 23:54:22.731606 init.sh[1588]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Jul 15 23:54:22.731606 init.sh[1588]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Jul 15 23:54:22.735556 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 15 23:54:22.740764 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 23:54:22.745971 init.sh[1588]: + /usr/bin/google_instance_setup
Jul 15 23:54:22.746023 update_engine[1521]: I20250715 23:54:22.743685 1521 update_check_scheduler.cc:74] Next update check in 3m26s
Jul 15 23:54:22.768747 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 23:54:22.777605 ntpd[1507]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting
Jul 15 23:54:22.781205 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: ntpd 4.2.8p17@1.4004-o Tue Jul 15 21:30:22 UTC 2025 (1): Starting
Jul 15 23:54:22.781205 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 15 23:54:22.781205 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: ----------------------------------------------------
Jul 15 23:54:22.781205 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: ntp-4 is maintained by Network Time Foundation,
Jul 15 23:54:22.781205 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 15 23:54:22.781205 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: corporation. Support and training for ntp-4 are
Jul 15 23:54:22.781205 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: available at https://www.nwtime.org/support
Jul 15 23:54:22.781205 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: ----------------------------------------------------
Jul 15 23:54:22.777654 ntpd[1507]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 15 23:54:22.777670 ntpd[1507]: ----------------------------------------------------
Jul 15 23:54:22.777684 ntpd[1507]: ntp-4 is maintained by Network Time Foundation,
Jul 15 23:54:22.777697 ntpd[1507]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 15 23:54:22.777710 ntpd[1507]: corporation. Support and training for ntp-4 are
Jul 15 23:54:22.777723 ntpd[1507]: available at https://www.nwtime.org/support
Jul 15 23:54:22.777736 ntpd[1507]: ----------------------------------------------------
Jul 15 23:54:22.801094 ntpd[1507]: proto: precision = 0.076 usec (-24)
Jul 15 23:54:22.811397 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: proto: precision = 0.076 usec (-24)
Jul 15 23:54:22.812177 ntpd[1507]: basedate set to 2025-07-03
Jul 15 23:54:22.815058 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: basedate set to 2025-07-03
Jul 15 23:54:22.815058 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: gps base set to 2025-07-06 (week 2374)
Jul 15 23:54:22.812208 ntpd[1507]: gps base set to 2025-07-06 (week 2374)
Jul 15 23:54:22.825905 ntpd[1507]: Listen and drop on 0 v6wildcard [::]:123
Jul 15 23:54:22.844895 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Listen and drop on 0 v6wildcard [::]:123
Jul 15 23:54:22.844895 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 15 23:54:22.844895 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Listen normally on 2 lo 127.0.0.1:123
Jul 15 23:54:22.844895 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Listen normally on 3 eth0 10.128.0.4:123
Jul 15 23:54:22.844895 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Listen normally on 4 lo [::1]:123
Jul 15 23:54:22.844895 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4%2]:123
Jul 15 23:54:22.844895 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: Listening on routing socket on fd #22 for interface updates
Jul 15 23:54:22.843517 ntpd[1507]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 15 23:54:22.843787 ntpd[1507]: Listen normally on 2 lo 127.0.0.1:123
Jul 15 23:54:22.843846 ntpd[1507]: Listen normally on 3 eth0 10.128.0.4:123
Jul 15 23:54:22.843911 ntpd[1507]: Listen normally on 4 lo [::1]:123
Jul 15 23:54:22.843981 ntpd[1507]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4%2]:123
Jul 15 23:54:22.844031 ntpd[1507]: Listening on routing socket on fd #22 for interface updates
Jul 15 23:54:22.861914 ntpd[1507]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:54:22.865222 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:54:22.865222 ntpd[1507]: 15 Jul 23:54:22 ntpd[1507]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:54:22.861985 ntpd[1507]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 15 23:54:22.881269 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 23:54:22.887991 coreos-metadata[1593]: Jul 15 23:54:22.886 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Jul 15 23:54:22.907533 coreos-metadata[1593]: Jul 15 23:54:22.902 INFO Fetch failed with 404: resource not found
Jul 15 23:54:22.907533 coreos-metadata[1593]: Jul 15 23:54:22.902 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Jul 15 23:54:22.907533 coreos-metadata[1593]: Jul 15 23:54:22.904 INFO Fetch successful
Jul 15 23:54:22.907533 coreos-metadata[1593]: Jul 15 23:54:22.904 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Jul 15 23:54:22.906578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:54:22.911503 coreos-metadata[1593]: Jul 15 23:54:22.908 INFO Fetch failed with 404: resource not found
Jul 15 23:54:22.911503 coreos-metadata[1593]: Jul 15 23:54:22.908 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Jul 15 23:54:22.912092 coreos-metadata[1593]: Jul 15 23:54:22.911 INFO Fetch failed with 404: resource not found
Jul 15 23:54:22.912092 coreos-metadata[1593]: Jul 15 23:54:22.911 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Jul 15 23:54:22.920649 coreos-metadata[1593]: Jul 15 23:54:22.918 INFO Fetch successful
Jul 15 23:54:22.926987 unknown[1593]: wrote ssh authorized keys file for user: core
Jul 15 23:54:23.038129 update-ssh-keys[1614]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:54:23.039699 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 15 23:54:23.059481 systemd[1]: Finished sshkeys.service.
Jul 15 23:54:23.192274 systemd-logind[1512]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 15 23:54:23.192319 systemd-logind[1512]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jul 15 23:54:23.192355 systemd-logind[1512]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 15 23:54:23.192908 systemd-logind[1512]: New seat seat0.
Jul 15 23:54:23.194155 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 23:54:23.266225 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 23:54:23.329925 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 23:54:23.360325 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 15 23:54:23.366738 dbus-daemon[1499]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 15 23:54:23.375601 dbus-daemon[1499]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1601 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 15 23:54:23.385117 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 15 23:54:23.434682 containerd[1534]: time="2025-07-15T23:54:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 23:54:23.440996 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 23:54:23.448681 containerd[1534]: time="2025-07-15T23:54:23.445165588Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 15 23:54:23.461925 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 23:54:23.520351 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 23:54:23.537620 systemd[1]: Started sshd@0-10.128.0.4:22-139.178.89.65:41234.service - OpenSSH per-connection server daemon (139.178.89.65:41234).
Jul 15 23:54:23.550753 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 23:54:23.551182 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 23:54:23.573622 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 23:54:23.578398 containerd[1534]: time="2025-07-15T23:54:23.577964184Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.663µs"
Jul 15 23:54:23.578398 containerd[1534]: time="2025-07-15T23:54:23.578011333Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 23:54:23.578398 containerd[1534]: time="2025-07-15T23:54:23.578039039Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 23:54:23.578398 containerd[1534]: time="2025-07-15T23:54:23.578285196Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 23:54:23.578398 containerd[1534]: time="2025-07-15T23:54:23.578323912Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 23:54:23.580957 containerd[1534]: time="2025-07-15T23:54:23.578363222Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:54:23.580957 containerd[1534]: time="2025-07-15T23:54:23.578863724Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:54:23.580957 containerd[1534]: time="2025-07-15T23:54:23.578885087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:54:23.580957 containerd[1534]: time="2025-07-15T23:54:23.579218327Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:54:23.580957 containerd[1534]: time="2025-07-15T23:54:23.579243995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:54:23.580957 containerd[1534]: time="2025-07-15T23:54:23.579261263Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:54:23.580957 containerd[1534]: time="2025-07-15T23:54:23.579278445Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 23:54:23.580957 containerd[1534]: time="2025-07-15T23:54:23.579409351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 23:54:23.586666 containerd[1534]: time="2025-07-15T23:54:23.586613027Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:54:23.591126 containerd[1534]: time="2025-07-15T23:54:23.588638352Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:54:23.591126 containerd[1534]: time="2025-07-15T23:54:23.588772252Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 23:54:23.591126 containerd[1534]: time="2025-07-15T23:54:23.588833124Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 23:54:23.591126 containerd[1534]: time="2025-07-15T23:54:23.589154244Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 23:54:23.591126 containerd[1534]: time="2025-07-15T23:54:23.589278766Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 23:54:23.602612 containerd[1534]: time="2025-07-15T23:54:23.602457637Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 23:54:23.602875 containerd[1534]: time="2025-07-15T23:54:23.602845994Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605025067Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605097977Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605121606Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605163765Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605182234Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605200718Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605234666Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605251730Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605267220Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605308905Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605584339Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605624587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 23:54:23.605813 containerd[1534]: time="2025-07-15T23:54:23.605693587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 23:54:23.607996 containerd[1534]: time="2025-07-15T23:54:23.606336961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 23:54:23.607996 containerd[1534]: time="2025-07-15T23:54:23.606721191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 23:54:23.607996 containerd[1534]: time="2025-07-15T23:54:23.607165979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 23:54:23.607996 containerd[1534]: time="2025-07-15T23:54:23.607661581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 23:54:23.615759 containerd[1534]: time="2025-07-15T23:54:23.612758212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 23:54:23.615759 containerd[1534]: time="2025-07-15T23:54:23.612850716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 23:54:23.615759 containerd[1534]: time="2025-07-15T23:54:23.612882067Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 23:54:23.615759 containerd[1534]: time="2025-07-15T23:54:23.612900807Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 23:54:23.615759 containerd[1534]: time="2025-07-15T23:54:23.613010918Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 23:54:23.615759 containerd[1534]: time="2025-07-15T23:54:23.613035598Z" level=info msg="Start snapshots syncer"
Jul 15 23:54:23.615759 containerd[1534]: time="2025-07-15T23:54:23.613072078Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 23:54:23.617591 containerd[1534]: time="2025-07-15T23:54:23.615708820Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 23:54:23.617591 containerd[1534]: time="2025-07-15T23:54:23.616257885Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.619633769Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.619887647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.619932377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.619960191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.619977205Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.619996192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.620013931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.620030982Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.620089147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.620112788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 23:54:23.620350 containerd[1534]: time="2025-07-15T23:54:23.620140856Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629646076Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629722759Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629742188Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629761180Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629775539Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629791859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629809201Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629833946Z" level=info msg="runtime interface created"
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629842070Z" level=info msg="created NRI interface"
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629855782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629881629Z" level=info msg="Connect containerd service"
Jul 15 23:54:23.633520 containerd[1534]: time="2025-07-15T23:54:23.629949522Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 23:54:23.640622 containerd[1534]: time="2025-07-15T23:54:23.638066229Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:54:23.683245 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 23:54:23.703233 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 23:54:23.714020 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 15 23:54:23.725296 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 23:54:23.974942 polkitd[1628]: Started polkitd version 126
Jul 15 23:54:24.020445 polkitd[1628]: Loading rules from directory /etc/polkit-1/rules.d
Jul 15 23:54:24.022153 polkitd[1628]: Loading rules from directory /run/polkit-1/rules.d
Jul 15 23:54:24.022222 polkitd[1628]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 23:54:24.023651 polkitd[1628]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jul 15 23:54:24.023713 polkitd[1628]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 15 23:54:24.023781 polkitd[1628]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 15 23:54:24.027606 polkitd[1628]: Finished loading, compiling and executing 2 rules
Jul 15 23:54:24.034049 systemd[1]: Started polkit.service - Authorization Manager.
Jul 15 23:54:24.035735 dbus-daemon[1499]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 15 23:54:24.039708 polkitd[1628]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 15 23:54:24.120310 systemd-hostnamed[1601]: Hostname set to (transient)
Jul 15 23:54:24.121985 systemd-resolved[1384]: System hostname changed to 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e'.
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170438765Z" level=info msg="Start subscribing containerd event"
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170557516Z" level=info msg="Start recovering state"
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170738388Z" level=info msg="Start event monitor"
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170763660Z" level=info msg="Start cni network conf syncer for default"
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170779517Z" level=info msg="Start streaming server"
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170829402Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170844066Z" level=info msg="runtime interface starting up..."
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170855506Z" level=info msg="starting plugins..."
Jul 15 23:54:24.171762 containerd[1534]: time="2025-07-15T23:54:24.170882649Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 23:54:24.172382 containerd[1534]: time="2025-07-15T23:54:24.172344971Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 23:54:24.172698 containerd[1534]: time="2025-07-15T23:54:24.172669878Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 23:54:24.173045 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 23:54:24.173516 containerd[1534]: time="2025-07-15T23:54:24.173491179Z" level=info msg="containerd successfully booted in 0.745566s"
Jul 15 23:54:24.229028 sshd[1636]: Accepted publickey for core from 139.178.89.65 port 41234 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:54:24.234813 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:24.260106 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 23:54:24.273929 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 23:54:24.314714 systemd-logind[1512]: New session 1 of user core.
Jul 15 23:54:24.346701 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 23:54:24.363800 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 23:54:24.401606 (systemd)[1669]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 23:54:24.407834 systemd-logind[1512]: New session c1 of user core.
Jul 15 23:54:24.484782 tar[1529]: linux-amd64/README.md
Jul 15 23:54:24.526390 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 23:54:24.636408 instance-setup[1605]: INFO Running google_set_multiqueue.
Jul 15 23:54:24.660763 instance-setup[1605]: INFO Set channels for eth0 to 2.
Jul 15 23:54:24.665672 instance-setup[1605]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1.
Jul 15 23:54:24.668308 instance-setup[1605]: INFO /proc/irq/27/smp_affinity_list: real affinity 0
Jul 15 23:54:24.669164 instance-setup[1605]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1.
Jul 15 23:54:24.671617 instance-setup[1605]: INFO /proc/irq/28/smp_affinity_list: real affinity 0
Jul 15 23:54:24.672666 instance-setup[1605]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1.
Jul 15 23:54:24.674917 instance-setup[1605]: INFO /proc/irq/29/smp_affinity_list: real affinity 1
Jul 15 23:54:24.675822 instance-setup[1605]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1.
Jul 15 23:54:24.678151 instance-setup[1605]: INFO /proc/irq/30/smp_affinity_list: real affinity 1
Jul 15 23:54:24.690040 instance-setup[1605]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jul 15 23:54:24.695632 instance-setup[1605]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jul 15 23:54:24.698380 instance-setup[1605]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Jul 15 23:54:24.698433 instance-setup[1605]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Jul 15 23:54:24.728145 init.sh[1588]: + /usr/bin/google_metadata_script_runner --script-type startup
Jul 15 23:54:24.796311 systemd[1669]: Queued start job for default target default.target.
Jul 15 23:54:24.805398 systemd[1669]: Created slice app.slice - User Application Slice.
Jul 15 23:54:24.806699 systemd[1669]: Reached target paths.target - Paths.
Jul 15 23:54:24.807139 systemd[1669]: Reached target timers.target - Timers.
Jul 15 23:54:24.811760 systemd[1669]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 23:54:24.844910 systemd[1669]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 23:54:24.845591 systemd[1669]: Reached target sockets.target - Sockets.
Jul 15 23:54:24.845677 systemd[1669]: Reached target basic.target - Basic System.
Jul 15 23:54:24.845760 systemd[1669]: Reached target default.target - Main User Target.
Jul 15 23:54:24.845811 systemd[1669]: Startup finished in 422ms.
Jul 15 23:54:24.846739 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 23:54:24.864774 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 23:54:24.947206 startup-script[1706]: INFO Starting startup scripts.
Jul 15 23:54:24.954002 startup-script[1706]: INFO No startup scripts found in metadata.
Jul 15 23:54:24.954086 startup-script[1706]: INFO Finished running startup scripts.
Jul 15 23:54:24.980937 init.sh[1588]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Jul 15 23:54:24.980937 init.sh[1588]: + daemon_pids=()
Jul 15 23:54:24.981122 init.sh[1588]: + for d in accounts clock_skew network
Jul 15 23:54:24.981303 init.sh[1588]: + daemon_pids+=($!)
Jul 15 23:54:24.981505 init.sh[1588]: + for d in accounts clock_skew network
Jul 15 23:54:24.981871 init.sh[1712]: + /usr/bin/google_accounts_daemon
Jul 15 23:54:24.982248 init.sh[1588]: + daemon_pids+=($!)
Jul 15 23:54:24.982248 init.sh[1588]: + for d in accounts clock_skew network
Jul 15 23:54:24.982248 init.sh[1588]: + daemon_pids+=($!)
Jul 15 23:54:24.982248 init.sh[1588]: + NOTIFY_SOCKET=/run/systemd/notify
Jul 15 23:54:24.982248 init.sh[1588]: + /usr/bin/systemd-notify --ready
Jul 15 23:54:24.983728 init.sh[1713]: + /usr/bin/google_clock_skew_daemon
Jul 15 23:54:24.984069 init.sh[1714]: + /usr/bin/google_network_daemon
Jul 15 23:54:25.002455 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Jul 15 23:54:25.020503 init.sh[1588]: + wait -n 1712 1713 1714
Jul 15 23:54:25.121601 systemd[1]: Started sshd@1-10.128.0.4:22-139.178.89.65:41246.service - OpenSSH per-connection server daemon (139.178.89.65:41246).
Jul 15 23:54:25.482912 google-clock-skew[1713]: INFO Starting Google Clock Skew daemon.
Jul 15 23:54:25.496433 sshd[1718]: Accepted publickey for core from 139.178.89.65 port 41246 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:54:25.501381 google-clock-skew[1713]: INFO Clock drift token has changed: 0.
Jul 15 23:54:25.501791 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:25.519777 systemd-logind[1512]: New session 2 of user core.
Jul 15 23:54:25.521968 google-networking[1714]: INFO Starting Google Networking daemon.
Jul 15 23:54:25.522917 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 23:54:25.588176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:54:25.600705 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 23:54:25.605123 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:54:25.610961 systemd[1]: Startup finished in 4.423s (kernel) + 7.893s (initrd) + 8.934s (userspace) = 21.252s.
Jul 15 23:54:25.673262 groupadd[1733]: group added to /etc/group: name=google-sudoers, GID=1000
Jul 15 23:54:25.678726 groupadd[1733]: group added to /etc/gshadow: name=google-sudoers
Jul 15 23:54:25.736043 sshd[1730]: Connection closed by 139.178.89.65 port 41246
Jul 15 23:54:25.738727 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:25.746737 systemd[1]: sshd@1-10.128.0.4:22-139.178.89.65:41246.service: Deactivated successfully.
Jul 15 23:54:25.749799 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 23:54:25.751886 systemd-logind[1512]: Session 2 logged out. Waiting for processes to exit.
Jul 15 23:54:25.754823 systemd-logind[1512]: Removed session 2.
Jul 15 23:54:25.755649 groupadd[1733]: new group: name=google-sudoers, GID=1000
Jul 15 23:54:25.790587 systemd[1]: Started sshd@2-10.128.0.4:22-139.178.89.65:41250.service - OpenSSH per-connection server daemon (139.178.89.65:41250).
Jul 15 23:54:25.804589 google-accounts[1712]: INFO Starting Google Accounts daemon.
Jul 15 23:54:25.827193 google-accounts[1712]: WARNING OS Login not installed.
Jul 15 23:54:25.831569 google-accounts[1712]: INFO Creating a new user account for 0.
Jul 15 23:54:25.841323 init.sh[1753]: useradd: invalid user name '0': use --badname to ignore
Jul 15 23:54:25.841895 google-accounts[1712]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Jul 15 23:54:26.001123 systemd-resolved[1384]: Clock change detected. Flushing caches.
Jul 15 23:54:26.001632 google-clock-skew[1713]: INFO Synced system time with hardware clock.
Jul 15 23:54:26.224069 sshd[1751]: Accepted publickey for core from 139.178.89.65 port 41250 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:54:26.226481 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:26.235391 systemd-logind[1512]: New session 3 of user core.
Jul 15 23:54:26.240616 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 23:54:26.433649 sshd[1760]: Connection closed by 139.178.89.65 port 41250
Jul 15 23:54:26.434609 sshd-session[1751]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:26.442024 systemd[1]: sshd@2-10.128.0.4:22-139.178.89.65:41250.service: Deactivated successfully.
Jul 15 23:54:26.445426 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 23:54:26.449730 systemd-logind[1512]: Session 3 logged out. Waiting for processes to exit.
Jul 15 23:54:26.451657 systemd-logind[1512]: Removed session 3.
Jul 15 23:54:26.488093 systemd[1]: Started sshd@3-10.128.0.4:22-139.178.89.65:41260.service - OpenSSH per-connection server daemon (139.178.89.65:41260).
Jul 15 23:54:26.800768 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 41260 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:54:26.802749 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:26.805876 kubelet[1735]: E0715 23:54:26.805808 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:54:26.810486 systemd-logind[1512]: New session 4 of user core.
Jul 15 23:54:26.811906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:54:26.812416 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:54:26.813084 systemd[1]: kubelet.service: Consumed 1.395s CPU time, 266.2M memory peak.
Jul 15 23:54:26.821819 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 23:54:27.013216 sshd[1770]: Connection closed by 139.178.89.65 port 41260
Jul 15 23:54:27.014105 sshd-session[1766]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:27.020136 systemd[1]: sshd@3-10.128.0.4:22-139.178.89.65:41260.service: Deactivated successfully.
Jul 15 23:54:27.022797 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 23:54:27.024032 systemd-logind[1512]: Session 4 logged out. Waiting for processes to exit.
Jul 15 23:54:27.026515 systemd-logind[1512]: Removed session 4.
Jul 15 23:54:27.067202 systemd[1]: Started sshd@4-10.128.0.4:22-139.178.89.65:41276.service - OpenSSH per-connection server daemon (139.178.89.65:41276).
Jul 15 23:54:27.370228 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 41276 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:54:27.372223 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:27.380388 systemd-logind[1512]: New session 5 of user core.
Jul 15 23:54:27.389686 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 23:54:27.568834 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 23:54:27.569395 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:54:27.587930 sudo[1779]: pam_unix(sudo:session): session closed for user root
Jul 15 23:54:27.631161 sshd[1778]: Connection closed by 139.178.89.65 port 41276
Jul 15 23:54:27.632632 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:27.638272 systemd[1]: sshd@4-10.128.0.4:22-139.178.89.65:41276.service: Deactivated successfully.
Jul 15 23:54:27.640840 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 23:54:27.643978 systemd-logind[1512]: Session 5 logged out. Waiting for processes to exit.
Jul 15 23:54:27.646382 systemd-logind[1512]: Removed session 5.
Jul 15 23:54:27.698559 systemd[1]: Started sshd@5-10.128.0.4:22-139.178.89.65:41284.service - OpenSSH per-connection server daemon (139.178.89.65:41284).
Jul 15 23:54:28.010696 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 41284 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:54:28.012529 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:28.020292 systemd-logind[1512]: New session 6 of user core.
Jul 15 23:54:28.026661 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 23:54:28.226520 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 23:54:28.227030 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:54:28.235081 sudo[1789]: pam_unix(sudo:session): session closed for user root
Jul 15 23:54:28.249826 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 23:54:28.250377 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:54:28.263866 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:54:28.323221 augenrules[1811]: No rules
Jul 15 23:54:28.325111 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:54:28.325571 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:54:28.327075 sudo[1788]: pam_unix(sudo:session): session closed for user root
Jul 15 23:54:28.370654 sshd[1787]: Connection closed by 139.178.89.65 port 41284
Jul 15 23:54:28.371660 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:28.378305 systemd[1]: sshd@5-10.128.0.4:22-139.178.89.65:41284.service: Deactivated successfully.
Jul 15 23:54:28.380805 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 23:54:28.382043 systemd-logind[1512]: Session 6 logged out. Waiting for processes to exit.
Jul 15 23:54:28.384704 systemd-logind[1512]: Removed session 6.
Jul 15 23:54:28.426234 systemd[1]: Started sshd@6-10.128.0.4:22-139.178.89.65:41292.service - OpenSSH per-connection server daemon (139.178.89.65:41292).
Jul 15 23:54:28.740616 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 41292 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:54:28.742417 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:28.750631 systemd-logind[1512]: New session 7 of user core.
Jul 15 23:54:28.761665 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 23:54:28.921963 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 23:54:28.922509 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:54:29.452646 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 23:54:29.483100 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 23:54:29.827407 dockerd[1841]: time="2025-07-15T23:54:29.827191379Z" level=info msg="Starting up"
Jul 15 23:54:29.829111 dockerd[1841]: time="2025-07-15T23:54:29.829046412Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 23:54:29.878280 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3904563794-merged.mount: Deactivated successfully.
Jul 15 23:54:30.033115 dockerd[1841]: time="2025-07-15T23:54:30.032826732Z" level=info msg="Loading containers: start."
Jul 15 23:54:30.054383 kernel: Initializing XFRM netlink socket
Jul 15 23:54:30.437715 systemd-networkd[1451]: docker0: Link UP
Jul 15 23:54:30.446356 dockerd[1841]: time="2025-07-15T23:54:30.446281719Z" level=info msg="Loading containers: done."
Jul 15 23:54:30.468637 dockerd[1841]: time="2025-07-15T23:54:30.468566277Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 23:54:30.468871 dockerd[1841]: time="2025-07-15T23:54:30.468707642Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 15 23:54:30.468938 dockerd[1841]: time="2025-07-15T23:54:30.468911613Z" level=info msg="Initializing buildkit"
Jul 15 23:54:30.469015 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1488129827-merged.mount: Deactivated successfully.
Jul 15 23:54:30.508398 dockerd[1841]: time="2025-07-15T23:54:30.508332752Z" level=info msg="Completed buildkit initialization"
Jul 15 23:54:30.519224 dockerd[1841]: time="2025-07-15T23:54:30.519131327Z" level=info msg="Daemon has completed initialization"
Jul 15 23:54:30.520187 dockerd[1841]: time="2025-07-15T23:54:30.519409626Z" level=info msg="API listen on /run/docker.sock"
Jul 15 23:54:30.519624 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 23:54:31.550329 containerd[1534]: time="2025-07-15T23:54:31.550233537Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Jul 15 23:54:32.156691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255833458.mount: Deactivated successfully.
Jul 15 23:54:33.755547 containerd[1534]: time="2025-07-15T23:54:33.755467778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:33.757127 containerd[1534]: time="2025-07-15T23:54:33.757064715Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28806622"
Jul 15 23:54:33.758946 containerd[1534]: time="2025-07-15T23:54:33.758863026Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:33.764531 containerd[1534]: time="2025-07-15T23:54:33.764337757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:33.765553 containerd[1534]: time="2025-07-15T23:54:33.765483579Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.21519106s"
Jul 15 23:54:33.766069 containerd[1534]: time="2025-07-15T23:54:33.765830277Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\""
Jul 15 23:54:33.766956 containerd[1534]: time="2025-07-15T23:54:33.766885379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Jul 15 23:54:35.360245 containerd[1534]: time="2025-07-15T23:54:35.360113224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:35.361710 containerd[1534]: time="2025-07-15T23:54:35.361644363Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24785570"
Jul 15 23:54:35.363626 containerd[1534]: time="2025-07-15T23:54:35.363570900Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:35.369341 containerd[1534]: time="2025-07-15T23:54:35.369022135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:35.370031 containerd[1534]: time="2025-07-15T23:54:35.369966137Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.603039196s"
Jul 15 23:54:35.370031 containerd[1534]: time="2025-07-15T23:54:35.370026418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\""
Jul 15 23:54:35.371055 containerd[1534]: time="2025-07-15T23:54:35.371000661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Jul 15 23:54:36.769936 containerd[1534]: time="2025-07-15T23:54:36.769848229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:36.771485 containerd[1534]: time="2025-07-15T23:54:36.771422201Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19178837"
Jul 15 23:54:36.773663 containerd[1534]: time="2025-07-15T23:54:36.773581061Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:36.778337 containerd[1534]: time="2025-07-15T23:54:36.778110248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:36.782291 containerd[1534]: time="2025-07-15T23:54:36.782218315Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.411175982s"
Jul 15 23:54:36.782291 containerd[1534]: time="2025-07-15T23:54:36.782289994Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\""
Jul 15 23:54:36.783477 containerd[1534]: time="2025-07-15T23:54:36.783414753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Jul 15 23:54:37.062789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:54:37.066100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:54:37.524246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:54:37.540191 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:54:37.654106 kubelet[2117]: E0715 23:54:37.653978 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:54:37.663835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:54:37.664111 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:54:37.665940 systemd[1]: kubelet.service: Consumed 260ms CPU time, 107.8M memory peak.
Jul 15 23:54:38.141038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510178671.mount: Deactivated successfully.
Jul 15 23:54:38.827209 containerd[1534]: time="2025-07-15T23:54:38.827129724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:38.829088 containerd[1534]: time="2025-07-15T23:54:38.829020093Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30897275"
Jul 15 23:54:38.830829 containerd[1534]: time="2025-07-15T23:54:38.830746494Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:38.834420 containerd[1534]: time="2025-07-15T23:54:38.834307343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:38.835505 containerd[1534]: time="2025-07-15T23:54:38.835269858Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 2.051799359s"
Jul 15 23:54:38.835505 containerd[1534]: time="2025-07-15T23:54:38.835335108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\""
Jul 15 23:54:38.836255 containerd[1534]: time="2025-07-15T23:54:38.836107447Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 23:54:39.302292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128714670.mount: Deactivated successfully.
Jul 15 23:54:40.511450 containerd[1534]: time="2025-07-15T23:54:40.511241348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:40.512996 containerd[1534]: time="2025-07-15T23:54:40.512934403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883"
Jul 15 23:54:40.515346 containerd[1534]: time="2025-07-15T23:54:40.515022619Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:40.520800 containerd[1534]: time="2025-07-15T23:54:40.520047299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:40.521739 containerd[1534]: time="2025-07-15T23:54:40.521684732Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.685508963s"
Jul 15 23:54:40.521873 containerd[1534]: time="2025-07-15T23:54:40.521743352Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 15 23:54:40.522642 containerd[1534]: time="2025-07-15T23:54:40.522555443Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 23:54:40.973151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802648777.mount: Deactivated successfully.
Jul 15 23:54:40.982087 containerd[1534]: time="2025-07-15T23:54:40.982023708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:54:40.983240 containerd[1534]: time="2025-07-15T23:54:40.983162718Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Jul 15 23:54:40.985573 containerd[1534]: time="2025-07-15T23:54:40.985372833Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:54:40.990606 containerd[1534]: time="2025-07-15T23:54:40.989175212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:54:40.990908 containerd[1534]: time="2025-07-15T23:54:40.990865938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.243584ms"
Jul 15 23:54:40.991073 containerd[1534]: time="2025-07-15T23:54:40.991047191Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 15 23:54:40.991809 containerd[1534]: time="2025-07-15T23:54:40.991770939Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 15 23:54:41.528205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100185809.mount: Deactivated successfully.
Jul 15 23:54:43.830023 containerd[1534]: time="2025-07-15T23:54:43.829949343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:43.831334 containerd[1534]: time="2025-07-15T23:54:43.831104698Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557924"
Jul 15 23:54:43.833464 containerd[1534]: time="2025-07-15T23:54:43.833388136Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:43.838844 containerd[1534]: time="2025-07-15T23:54:43.838729409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:54:43.840501 containerd[1534]: time="2025-07-15T23:54:43.840386566Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.84857311s"
Jul 15 23:54:43.840501 containerd[1534]: time="2025-07-15T23:54:43.840443245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 15 23:54:46.796204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:54:46.796536 systemd[1]: kubelet.service: Consumed 260ms CPU time, 107.8M memory peak.
Jul 15 23:54:46.800347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:54:46.847106 systemd[1]: Reload requested from client PID 2266 ('systemctl') (unit session-7.scope)...
Jul 15 23:54:46.847131 systemd[1]: Reloading...
Jul 15 23:54:47.027367 zram_generator::config[2311]: No configuration found.
Jul 15 23:54:47.187786 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:54:47.361703 systemd[1]: Reloading finished in 513 ms.
Jul 15 23:54:47.441838 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 23:54:47.441980 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 23:54:47.442440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:54:47.442515 systemd[1]: kubelet.service: Consumed 171ms CPU time, 98.3M memory peak.
Jul 15 23:54:47.445409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:54:47.983145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:54:47.998054 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:54:48.071619 kubelet[2362]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:54:48.072457 kubelet[2362]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:54:48.072457 kubelet[2362]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:54:48.074339 kubelet[2362]: I0715 23:54:48.072678 2362 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:54:48.765635 kubelet[2362]: I0715 23:54:48.765571 2362 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 15 23:54:48.765635 kubelet[2362]: I0715 23:54:48.765612 2362 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:54:48.766093 kubelet[2362]: I0715 23:54:48.766056 2362 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 15 23:54:48.818037 kubelet[2362]: E0715 23:54:48.817965 2362 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:54:48.818518 kubelet[2362]: I0715 23:54:48.818114 2362 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:54:48.835866 kubelet[2362]: I0715 23:54:48.835808 2362 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:54:48.840877 kubelet[2362]: I0715 23:54:48.840837 2362 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:54:48.841825 kubelet[2362]: I0715 23:54:48.841251 2362 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:54:48.841825 kubelet[2362]: I0715 23:54:48.841296 2362 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:54:48.841825 kubelet[2362]: I0715 23:54:48.841599 2362 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:54:48.841825 kubelet[2362]: I0715 23:54:48.841618 2362 container_manager_linux.go:304] "Creating device plugin manager"
Jul 15 23:54:48.844938 kubelet[2362]: I0715 23:54:48.844797 2362 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:54:48.853089 kubelet[2362]: I0715 23:54:48.852935 2362 kubelet.go:446] "Attempting to sync node with API server"
Jul 15 23:54:48.853089 kubelet[2362]: I0715 23:54:48.853016 2362 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:54:48.853089 kubelet[2362]: I0715 23:54:48.853058 2362 kubelet.go:352] "Adding apiserver pod source"
Jul 15 23:54:48.853409 kubelet[2362]: I0715 23:54:48.853117 2362 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:54:48.862254 kubelet[2362]: W0715 23:54:48.862181 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused
Jul 15 23:54:48.862464 kubelet[2362]: E0715 23:54:48.862275 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:54:48.863458 kubelet[2362]: W0715 23:54:48.862880 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused
Jul 15 23:54:48.863458 kubelet[2362]: E0715 23:54:48.862955 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:54:48.863458 kubelet[2362]: I0715 23:54:48.863457 2362 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:54:48.864253 kubelet[2362]: I0715 23:54:48.864159 2362 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:54:48.866837 kubelet[2362]: W0715 23:54:48.866185 2362 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 23:54:48.870101 kubelet[2362]: I0715 23:54:48.870065 2362 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 23:54:48.870327 kubelet[2362]: I0715 23:54:48.870291 2362 server.go:1287] "Started kubelet"
Jul 15 23:54:48.871268 kubelet[2362]: I0715 23:54:48.871205 2362 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:54:48.872713 kubelet[2362]: I0715 23:54:48.872663 2362 server.go:479] "Adding debug handlers to kubelet server"
Jul 15 23:54:48.877864 kubelet[2362]: I0715 23:54:48.877808 2362 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:54:48.879335 kubelet[2362]: I0715 23:54:48.878808 2362 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:54:48.879335 kubelet[2362]: I0715 23:54:48.879151 2362 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:54:48.883028 kubelet[2362]: E0715 23:54:48.880449 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e.185291f5900b9e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,UID:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,},FirstTimestamp:2025-07-15 23:54:48.870239843 +0000 UTC m=+0.865754804,LastTimestamp:2025-07-15 23:54:48.870239843 +0000 UTC m=+0.865754804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,}"
Jul 15 23:54:48.886622 kubelet[2362]: I0715 23:54:48.886591 2362 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:54:48.888436 kubelet[2362]: I0715 23:54:48.888409 2362 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 23:54:48.888794 kubelet[2362]: E0715 23:54:48.888763 2362 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found"
Jul 15 23:54:48.889252 kubelet[2362]: I0715 23:54:48.889228 2362 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 23:54:48.889364 kubelet[2362]: I0715 23:54:48.889304 2362 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:54:48.891514 kubelet[2362]: W0715 23:54:48.891423 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused
Jul 15 23:54:48.891640 kubelet[2362]: E0715 23:54:48.891534 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:54:48.892202 kubelet[2362]: I0715 23:54:48.892108 2362 factory.go:221] Registration of the systemd container factory successfully
Jul 15 23:54:48.892440 kubelet[2362]: I0715 23:54:48.892294 2362 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:54:48.893455 kubelet[2362]: E0715 23:54:48.892458 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="200ms"
Jul 15 23:54:48.893455 kubelet[2362]: E0715 23:54:48.892999 2362 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:54:48.895969 kubelet[2362]: I0715 23:54:48.895948 2362 factory.go:221] Registration of the containerd container factory successfully
Jul 15 23:54:48.927087 kubelet[2362]: I0715 23:54:48.926838 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:54:48.932384 kubelet[2362]: I0715 23:54:48.932216 2362 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 23:54:48.932384 kubelet[2362]: I0715 23:54:48.932369 2362 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 23:54:48.932384 kubelet[2362]: I0715 23:54:48.932396 2362 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:54:48.933121 kubelet[2362]: I0715 23:54:48.932767 2362 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:54:48.933121 kubelet[2362]: I0715 23:54:48.932796 2362 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 15 23:54:48.933121 kubelet[2362]: I0715 23:54:48.932826 2362 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 23:54:48.933121 kubelet[2362]: I0715 23:54:48.932850 2362 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 15 23:54:48.933121 kubelet[2362]: E0715 23:54:48.932918 2362 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:54:48.935163 kubelet[2362]: W0715 23:54:48.934977 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused
Jul 15 23:54:48.935493 kubelet[2362]: E0715 23:54:48.935217 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:54:48.936917 kubelet[2362]: I0715 23:54:48.936881 2362 policy_none.go:49] "None policy: Start"
Jul 15 23:54:48.937358 kubelet[2362]: I0715 23:54:48.937072 2362 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 23:54:48.937358 kubelet[2362]: I0715 23:54:48.937104 2362 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:54:48.950353 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 15 23:54:48.966401 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 15 23:54:48.971910 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 15 23:54:48.983343 kubelet[2362]: I0715 23:54:48.982971 2362 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:54:48.983696 kubelet[2362]: I0715 23:54:48.983667 2362 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:54:48.983805 kubelet[2362]: I0715 23:54:48.983695 2362 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:54:48.984435 kubelet[2362]: I0715 23:54:48.984299 2362 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:54:48.986454 kubelet[2362]: E0715 23:54:48.985834 2362 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 23:54:48.986454 kubelet[2362]: E0715 23:54:48.985905 2362 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found"
Jul 15 23:54:49.003710 kubelet[2362]: E0715 23:54:49.003562 2362 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e.185291f5900b9e63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,UID:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,},FirstTimestamp:2025-07-15 23:54:48.870239843 +0000 UTC m=+0.865754804,LastTimestamp:2025-07-15 23:54:48.870239843 +0000 UTC m=+0.865754804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,}"
Jul 15 23:54:49.058467 systemd[1]: Created slice kubepods-burstable-podc3f4a4af78b4ddb03f81542ed6372814.slice - libcontainer container kubepods-burstable-podc3f4a4af78b4ddb03f81542ed6372814.slice.
Jul 15 23:54:49.078956 kubelet[2362]: E0715 23:54:49.078868 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:49.088619 systemd[1]: Created slice kubepods-burstable-poda6f4b7868c46c9d60527a262acdec21f.slice - libcontainer container kubepods-burstable-poda6f4b7868c46c9d60527a262acdec21f.slice.
Jul 15 23:54:49.091098 kubelet[2362]: I0715 23:54:49.089963 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-ca-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:49.091098 kubelet[2362]: I0715 23:54:49.090020 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:49.091098 kubelet[2362]: I0715 23:54:49.090050 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:49.091098 kubelet[2362]: I0715 23:54:49.090127 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c8e1bb6a4ec3c29f5e16fad0eacd565-kubeconfig\") pod \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"2c8e1bb6a4ec3c29f5e16fad0eacd565\") " pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:49.091464 kubelet[2362]: I0715 23:54:49.090206 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3f4a4af78b4ddb03f81542ed6372814-ca-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"c3f4a4af78b4ddb03f81542ed6372814\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:49.091464 kubelet[2362]: I0715 23:54:49.090279 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3f4a4af78b4ddb03f81542ed6372814-k8s-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"c3f4a4af78b4ddb03f81542ed6372814\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:49.091464 kubelet[2362]: I0715 23:54:49.090540 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3f4a4af78b4ddb03f81542ed6372814-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"c3f4a4af78b4ddb03f81542ed6372814\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.091464 kubelet[2362]: I0715 23:54:49.090962 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.091665 kubelet[2362]: I0715 23:54:49.091054 2362 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.094118 kubelet[2362]: I0715 23:54:49.093048 2362 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.094262 kubelet[2362]: E0715 23:54:49.094105 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="400ms" Jul 15 23:54:49.094262 kubelet[2362]: E0715 23:54:49.094181 2362 kubelet_node_status.go:107] 
"Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.094788 kubelet[2362]: E0715 23:54:49.094759 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.108912 systemd[1]: Created slice kubepods-burstable-pod2c8e1bb6a4ec3c29f5e16fad0eacd565.slice - libcontainer container kubepods-burstable-pod2c8e1bb6a4ec3c29f5e16fad0eacd565.slice. Jul 15 23:54:49.112554 kubelet[2362]: E0715 23:54:49.112510 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.299884 kubelet[2362]: I0715 23:54:49.299809 2362 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.300282 kubelet[2362]: E0715 23:54:49.300242 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.381937 containerd[1534]: time="2025-07-15T23:54:49.381756944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,Uid:c3f4a4af78b4ddb03f81542ed6372814,Namespace:kube-system,Attempt:0,}" Jul 15 23:54:49.396555 containerd[1534]: time="2025-07-15T23:54:49.396485778Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,Uid:a6f4b7868c46c9d60527a262acdec21f,Namespace:kube-system,Attempt:0,}" Jul 15 23:54:49.429365 containerd[1534]: time="2025-07-15T23:54:49.429011048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,Uid:2c8e1bb6a4ec3c29f5e16fad0eacd565,Namespace:kube-system,Attempt:0,}" Jul 15 23:54:49.432619 containerd[1534]: time="2025-07-15T23:54:49.432556387Z" level=info msg="connecting to shim d54a455c5f6eb9bd1206753fd78bbd3a4de7dd76923857fe90956f432fe55089" address="unix:///run/containerd/s/0aa7eaf3aed5ecb6cbce88e0d23db47d18ede4d1f277f8f1179ffeadfd121919" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:54:49.469459 containerd[1534]: time="2025-07-15T23:54:49.469396012Z" level=info msg="connecting to shim 713d81e9b798ce99419645f9f0ff7fbbe8acb32d0401f0fcd8bcd0ffb94f5c2c" address="unix:///run/containerd/s/f896c1446f186aa1799b9b3238472e368e7ab9e868a5dbef5c310b001a86d8af" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:54:49.500998 kubelet[2362]: E0715 23:54:49.500902 2362 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="800ms" Jul 15 23:54:49.504102 containerd[1534]: time="2025-07-15T23:54:49.503935790Z" level=info msg="connecting to shim 56de5d5d8a9fcfedf1cbe763c171df852c8da6af26cdbda27dbdeb18335cb111" address="unix:///run/containerd/s/a4cc3abaa72b3d94bf1b8d4ae43c21f1a92556e7458b7248c828e36b4f9bc732" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:54:49.521700 systemd[1]: Started cri-containerd-d54a455c5f6eb9bd1206753fd78bbd3a4de7dd76923857fe90956f432fe55089.scope - libcontainer container 
d54a455c5f6eb9bd1206753fd78bbd3a4de7dd76923857fe90956f432fe55089. Jul 15 23:54:49.559860 systemd[1]: Started cri-containerd-713d81e9b798ce99419645f9f0ff7fbbe8acb32d0401f0fcd8bcd0ffb94f5c2c.scope - libcontainer container 713d81e9b798ce99419645f9f0ff7fbbe8acb32d0401f0fcd8bcd0ffb94f5c2c. Jul 15 23:54:49.577952 systemd[1]: Started cri-containerd-56de5d5d8a9fcfedf1cbe763c171df852c8da6af26cdbda27dbdeb18335cb111.scope - libcontainer container 56de5d5d8a9fcfedf1cbe763c171df852c8da6af26cdbda27dbdeb18335cb111. Jul 15 23:54:49.671389 containerd[1534]: time="2025-07-15T23:54:49.671211993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,Uid:c3f4a4af78b4ddb03f81542ed6372814,Namespace:kube-system,Attempt:0,} returns sandbox id \"d54a455c5f6eb9bd1206753fd78bbd3a4de7dd76923857fe90956f432fe55089\"" Jul 15 23:54:49.676337 kubelet[2362]: E0715 23:54:49.675633 2362 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22" Jul 15 23:54:49.679666 containerd[1534]: time="2025-07-15T23:54:49.679600356Z" level=info msg="CreateContainer within sandbox \"d54a455c5f6eb9bd1206753fd78bbd3a4de7dd76923857fe90956f432fe55089\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:54:49.696281 containerd[1534]: time="2025-07-15T23:54:49.696227245Z" level=info msg="Container f7221ae62859cc52efbd688a34ee6ae86f876f5c26f153d13cae5d941ef3fd7d: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:54:49.706766 kubelet[2362]: I0715 23:54:49.706188 2362 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.706766 kubelet[2362]: E0715 23:54:49.706675 2362 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:49.726193 containerd[1534]: time="2025-07-15T23:54:49.725925185Z" level=info msg="CreateContainer within sandbox \"d54a455c5f6eb9bd1206753fd78bbd3a4de7dd76923857fe90956f432fe55089\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f7221ae62859cc52efbd688a34ee6ae86f876f5c26f153d13cae5d941ef3fd7d\"" Jul 15 23:54:49.729288 containerd[1534]: time="2025-07-15T23:54:49.729200389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,Uid:a6f4b7868c46c9d60527a262acdec21f,Namespace:kube-system,Attempt:0,} returns sandbox id \"713d81e9b798ce99419645f9f0ff7fbbe8acb32d0401f0fcd8bcd0ffb94f5c2c\"" Jul 15 23:54:49.730294 containerd[1534]: time="2025-07-15T23:54:49.729978193Z" level=info msg="StartContainer for \"f7221ae62859cc52efbd688a34ee6ae86f876f5c26f153d13cae5d941ef3fd7d\"" Jul 15 23:54:49.733162 containerd[1534]: time="2025-07-15T23:54:49.733114745Z" level=info msg="connecting to shim f7221ae62859cc52efbd688a34ee6ae86f876f5c26f153d13cae5d941ef3fd7d" address="unix:///run/containerd/s/0aa7eaf3aed5ecb6cbce88e0d23db47d18ede4d1f277f8f1179ffeadfd121919" protocol=ttrpc version=3 Jul 15 23:54:49.733510 containerd[1534]: time="2025-07-15T23:54:49.733473694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e,Uid:2c8e1bb6a4ec3c29f5e16fad0eacd565,Namespace:kube-system,Attempt:0,} returns sandbox id \"56de5d5d8a9fcfedf1cbe763c171df852c8da6af26cdbda27dbdeb18335cb111\"" Jul 15 23:54:49.737072 kubelet[2362]: E0715 23:54:49.736607 2362 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" hostnameMaxLen=63 
truncatedHostname="kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb15" Jul 15 23:54:49.738231 kubelet[2362]: E0715 23:54:49.738190 2362 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22" Jul 15 23:54:49.742112 containerd[1534]: time="2025-07-15T23:54:49.739987891Z" level=info msg="CreateContainer within sandbox \"713d81e9b798ce99419645f9f0ff7fbbe8acb32d0401f0fcd8bcd0ffb94f5c2c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:54:49.742959 containerd[1534]: time="2025-07-15T23:54:49.742779283Z" level=info msg="CreateContainer within sandbox \"56de5d5d8a9fcfedf1cbe763c171df852c8da6af26cdbda27dbdeb18335cb111\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:54:49.755919 containerd[1534]: time="2025-07-15T23:54:49.755857080Z" level=info msg="Container d0815c65c6bb46b7c5ff4eaf5815c6fd0845d8308deaaaf967b2eb6554c94074: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:54:49.768631 systemd[1]: Started cri-containerd-f7221ae62859cc52efbd688a34ee6ae86f876f5c26f153d13cae5d941ef3fd7d.scope - libcontainer container f7221ae62859cc52efbd688a34ee6ae86f876f5c26f153d13cae5d941ef3fd7d. 
Jul 15 23:54:49.772927 containerd[1534]: time="2025-07-15T23:54:49.772864675Z" level=info msg="Container 8ebecde35e340e5989247a134807d4983a600dec1fbfb48ea38a7a92eb12929a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:54:49.779680 containerd[1534]: time="2025-07-15T23:54:49.779619232Z" level=info msg="CreateContainer within sandbox \"713d81e9b798ce99419645f9f0ff7fbbe8acb32d0401f0fcd8bcd0ffb94f5c2c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d0815c65c6bb46b7c5ff4eaf5815c6fd0845d8308deaaaf967b2eb6554c94074\"" Jul 15 23:54:49.780719 containerd[1534]: time="2025-07-15T23:54:49.780679297Z" level=info msg="StartContainer for \"d0815c65c6bb46b7c5ff4eaf5815c6fd0845d8308deaaaf967b2eb6554c94074\"" Jul 15 23:54:49.784397 containerd[1534]: time="2025-07-15T23:54:49.784270785Z" level=info msg="connecting to shim d0815c65c6bb46b7c5ff4eaf5815c6fd0845d8308deaaaf967b2eb6554c94074" address="unix:///run/containerd/s/f896c1446f186aa1799b9b3238472e368e7ab9e868a5dbef5c310b001a86d8af" protocol=ttrpc version=3 Jul 15 23:54:49.786185 kubelet[2362]: W0715 23:54:49.786020 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Jul 15 23:54:49.786185 kubelet[2362]: E0715 23:54:49.786136 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:54:49.792279 containerd[1534]: time="2025-07-15T23:54:49.792209203Z" level=info msg="CreateContainer within sandbox \"56de5d5d8a9fcfedf1cbe763c171df852c8da6af26cdbda27dbdeb18335cb111\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8ebecde35e340e5989247a134807d4983a600dec1fbfb48ea38a7a92eb12929a\"" Jul 15 23:54:49.794161 containerd[1534]: time="2025-07-15T23:54:49.794091519Z" level=info msg="StartContainer for \"8ebecde35e340e5989247a134807d4983a600dec1fbfb48ea38a7a92eb12929a\"" Jul 15 23:54:49.802737 containerd[1534]: time="2025-07-15T23:54:49.801303085Z" level=info msg="connecting to shim 8ebecde35e340e5989247a134807d4983a600dec1fbfb48ea38a7a92eb12929a" address="unix:///run/containerd/s/a4cc3abaa72b3d94bf1b8d4ae43c21f1a92556e7458b7248c828e36b4f9bc732" protocol=ttrpc version=3 Jul 15 23:54:49.834599 systemd[1]: Started cri-containerd-8ebecde35e340e5989247a134807d4983a600dec1fbfb48ea38a7a92eb12929a.scope - libcontainer container 8ebecde35e340e5989247a134807d4983a600dec1fbfb48ea38a7a92eb12929a. Jul 15 23:54:49.845264 systemd[1]: Started cri-containerd-d0815c65c6bb46b7c5ff4eaf5815c6fd0845d8308deaaaf967b2eb6554c94074.scope - libcontainer container d0815c65c6bb46b7c5ff4eaf5815c6fd0845d8308deaaaf967b2eb6554c94074. 
Jul 15 23:54:49.926725 containerd[1534]: time="2025-07-15T23:54:49.925632617Z" level=info msg="StartContainer for \"f7221ae62859cc52efbd688a34ee6ae86f876f5c26f153d13cae5d941ef3fd7d\" returns successfully" Jul 15 23:54:49.958763 kubelet[2362]: W0715 23:54:49.958560 2362 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Jul 15 23:54:49.959174 kubelet[2362]: E0715 23:54:49.959032 2362 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e&limit=500&resourceVersion=0\": dial tcp 10.128.0.4:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:54:49.979042 kubelet[2362]: E0715 23:54:49.978718 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:50.001280 containerd[1534]: time="2025-07-15T23:54:50.001221299Z" level=info msg="StartContainer for \"8ebecde35e340e5989247a134807d4983a600dec1fbfb48ea38a7a92eb12929a\" returns successfully" Jul 15 23:54:50.059248 containerd[1534]: time="2025-07-15T23:54:50.059190606Z" level=info msg="StartContainer for \"d0815c65c6bb46b7c5ff4eaf5815c6fd0845d8308deaaaf967b2eb6554c94074\" returns successfully" Jul 15 23:54:50.524060 kubelet[2362]: I0715 23:54:50.523563 2362 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:50.990092 kubelet[2362]: E0715 23:54:50.989953 2362 kubelet.go:3190] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:50.991852 kubelet[2362]: E0715 23:54:50.990797 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:50.993426 kubelet[2362]: E0715 23:54:50.993194 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:51.994098 kubelet[2362]: E0715 23:54:51.994047 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:51.994822 kubelet[2362]: E0715 23:54:51.994462 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:52.995611 kubelet[2362]: E0715 23:54:52.995538 2362 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:53.865921 kubelet[2362]: I0715 23:54:53.865867 2362 apiserver.go:52] "Watching apiserver" Jul 15 23:54:53.989976 kubelet[2362]: I0715 23:54:53.989931 2362 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" 
Jul 15 23:54:53.999137 kubelet[2362]: E0715 23:54:53.997372 2362 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:54.114650 kubelet[2362]: I0715 23:54:54.114598 2362 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:54.114650 kubelet[2362]: E0715 23:54:54.114663 2362 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\": node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found" Jul 15 23:54:54.190284 kubelet[2362]: I0715 23:54:54.189653 2362 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:54.212912 kubelet[2362]: E0715 23:54:54.212852 2362 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:54.213332 kubelet[2362]: I0715 23:54:54.213195 2362 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:54.226307 kubelet[2362]: E0715 23:54:54.226230 2362 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:54.226697 kubelet[2362]: I0715 23:54:54.226275 2362 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:54.232046 kubelet[2362]: E0715 23:54:54.231993 2362 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:54.270905 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 15 23:54:55.989814 systemd[1]: Reload requested from client PID 2641 ('systemctl') (unit session-7.scope)... Jul 15 23:54:55.989838 systemd[1]: Reloading... Jul 15 23:54:56.162341 zram_generator::config[2685]: No configuration found. Jul 15 23:54:56.281047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:54:56.336966 kubelet[2362]: I0715 23:54:56.336403 2362 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:54:56.346345 kubelet[2362]: W0715 23:54:56.346200 2362 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Jul 15 23:54:56.484649 systemd[1]: Reloading finished in 494 ms. Jul 15 23:54:56.530095 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:54:56.545156 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:54:56.545573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:54:56.545685 systemd[1]: kubelet.service: Consumed 1.438s CPU time, 130.9M memory peak. 
Jul 15 23:54:56.548540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:54:56.857234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:54:56.872100 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:54:56.954486 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:54:56.954486 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:54:56.954486 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:54:56.954486 kubelet[2733]: I0715 23:54:56.954242 2733 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:54:56.966610 kubelet[2733]: I0715 23:54:56.965839 2733 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:54:56.966610 kubelet[2733]: I0715 23:54:56.965880 2733 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:54:56.968627 kubelet[2733]: I0715 23:54:56.968586 2733 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:54:56.976363 kubelet[2733]: I0715 23:54:56.975637 2733 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 15 23:54:56.982246 kubelet[2733]: I0715 23:54:56.982206 2733 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:54:56.990856 kubelet[2733]: I0715 23:54:56.990816 2733 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:54:56.996068 kubelet[2733]: I0715 23:54:56.996004 2733 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 23:54:56.997931 kubelet[2733]: I0715 23:54:56.996512 2733 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:54:56.997931 kubelet[2733]: I0715 23:54:56.996587 2733 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSRes
erved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:54:56.997931 kubelet[2733]: I0715 23:54:56.996877 2733 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:54:56.997931 kubelet[2733]: I0715 23:54:56.996895 2733 container_manager_linux.go:304] "Creating device plugin manager"
Jul 15 23:54:56.998394 kubelet[2733]: I0715 23:54:56.996966 2733 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:54:56.998394 kubelet[2733]: I0715 23:54:56.997216 2733 kubelet.go:446] "Attempting to sync node with API server"
Jul 15 23:54:56.998394 kubelet[2733]: I0715 23:54:56.997835 2733 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:54:56.998394 kubelet[2733]: I0715 23:54:56.997891 2733 kubelet.go:352] "Adding apiserver pod source"
Jul 15 23:54:56.998394 kubelet[2733]: I0715 23:54:56.997910 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:54:57.003489 kubelet[2733]: I0715 23:54:57.001390 2733 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:54:57.003489 kubelet[2733]: I0715 23:54:57.002016 2733 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:54:57.003489 kubelet[2733]: I0715 23:54:57.002747 2733 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 23:54:57.003489 kubelet[2733]: I0715 23:54:57.002784 2733 server.go:1287] "Started kubelet"
Jul 15 23:54:57.007193 kubelet[2733]: I0715 23:54:57.005724 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:54:57.016235 kubelet[2733]: I0715 23:54:57.015862 2733 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:54:57.018652 kubelet[2733]: I0715 23:54:57.017930 2733 server.go:479] "Adding debug handlers to kubelet server"
Jul 15 23:54:57.021588 kubelet[2733]: I0715 23:54:57.019453 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:54:57.021588 kubelet[2733]: I0715 23:54:57.019740 2733 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:54:57.021588 kubelet[2733]: I0715 23:54:57.020050 2733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:54:57.023992 kubelet[2733]: I0715 23:54:57.022554 2733 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 23:54:57.023992 kubelet[2733]: E0715 23:54:57.022886 2733 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" not found"
Jul 15 23:54:57.027595 kubelet[2733]: I0715 23:54:57.025943 2733 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 23:54:57.027595 kubelet[2733]: I0715 23:54:57.026275 2733 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:54:57.042405 kubelet[2733]: I0715 23:54:57.042363 2733 factory.go:221] Registration of the systemd container factory successfully
Jul 15 23:54:57.045156 kubelet[2733]: I0715 23:54:57.042669 2733 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:54:57.078851 kubelet[2733]: E0715 23:54:57.078632 2733 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:54:57.080304 kubelet[2733]: I0715 23:54:57.079510 2733 factory.go:221] Registration of the containerd container factory successfully
Jul 15 23:54:57.086345 kubelet[2733]: I0715 23:54:57.086225 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:54:57.090632 kubelet[2733]: I0715 23:54:57.090489 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:54:57.090632 kubelet[2733]: I0715 23:54:57.090536 2733 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 15 23:54:57.090632 kubelet[2733]: I0715 23:54:57.090570 2733 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 23:54:57.090632 kubelet[2733]: I0715 23:54:57.090584 2733 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 15 23:54:57.091038 kubelet[2733]: E0715 23:54:57.090675 2733 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:54:57.177876 kubelet[2733]: I0715 23:54:57.177730 2733 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 23:54:57.177876 kubelet[2733]: I0715 23:54:57.177753 2733 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 23:54:57.177876 kubelet[2733]: I0715 23:54:57.177782 2733 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:54:57.179690 kubelet[2733]: I0715 23:54:57.178762 2733 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 15 23:54:57.179690 kubelet[2733]: I0715 23:54:57.178784 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 15 23:54:57.179690 kubelet[2733]: I0715 23:54:57.178812 2733 policy_none.go:49] "None policy: Start"
Jul 15 23:54:57.179690 kubelet[2733]: I0715 23:54:57.178828 2733 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 23:54:57.179690 kubelet[2733]: I0715 23:54:57.178845 2733 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:54:57.179690 kubelet[2733]: I0715 23:54:57.179010 2733 state_mem.go:75] "Updated machine memory state"
Jul 15 23:54:57.193996 kubelet[2733]: E0715 23:54:57.193945 2733 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 23:54:57.196624 kubelet[2733]: I0715 23:54:57.195725 2733 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:54:57.196624 kubelet[2733]: I0715 23:54:57.195977 2733 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:54:57.196624 kubelet[2733]: I0715 23:54:57.195992 2733 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:54:57.197543 kubelet[2733]: I0715 23:54:57.197481 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:54:57.201429 kubelet[2733]: E0715 23:54:57.201394 2733 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 23:54:57.319186 kubelet[2733]: I0715 23:54:57.317911 2733 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.331969 kubelet[2733]: I0715 23:54:57.331910 2733 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.332151 kubelet[2733]: I0715 23:54:57.332026 2733 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.395377 kubelet[2733]: I0715 23:54:57.394946 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.395972 kubelet[2733]: I0715 23:54:57.395804 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.395972 kubelet[2733]: I0715 23:54:57.395867 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.404034 kubelet[2733]: W0715 23:54:57.403983 2733 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jul 15 23:54:57.409037 kubelet[2733]: W0715 23:54:57.408200 2733 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jul 15 23:54:57.409037 kubelet[2733]: E0715 23:54:57.408380 2733 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" already exists" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.409037 kubelet[2733]: W0715 23:54:57.408436 2733 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jul 15 23:54:57.435956 kubelet[2733]: I0715 23:54:57.435403 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3f4a4af78b4ddb03f81542ed6372814-ca-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"c3f4a4af78b4ddb03f81542ed6372814\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.435956 kubelet[2733]: I0715 23:54:57.435518 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3f4a4af78b4ddb03f81542ed6372814-k8s-certs\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"c3f4a4af78b4ddb03f81542ed6372814\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.435956 kubelet[2733]: I0715 23:54:57.435565 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-ca-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.435956 kubelet[2733]: I0715 23:54:57.435599 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.436260 kubelet[2733]: I0715 23:54:57.435629 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c8e1bb6a4ec3c29f5e16fad0eacd565-kubeconfig\") pod \"kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"2c8e1bb6a4ec3c29f5e16fad0eacd565\") " pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.436260 kubelet[2733]: I0715 23:54:57.435658 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3f4a4af78b4ddb03f81542ed6372814-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"c3f4a4af78b4ddb03f81542ed6372814\") " pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.436260 kubelet[2733]: I0715 23:54:57.435689 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.436260 kubelet[2733]: I0715 23:54:57.435718 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:57.436476 kubelet[2733]: I0715 23:54:57.435746 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6f4b7868c46c9d60527a262acdec21f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" (UID: \"a6f4b7868c46c9d60527a262acdec21f\") " pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:58.000820 kubelet[2733]: I0715 23:54:58.000757 2733 apiserver.go:52] "Watching apiserver"
Jul 15 23:54:58.026488 kubelet[2733]: I0715 23:54:58.026418 2733 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 15 23:54:58.143552 kubelet[2733]: I0715 23:54:58.143489 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:58.155799 kubelet[2733]: W0715 23:54:58.155626 2733 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]
Jul 15 23:54:58.156003 kubelet[2733]: E0715 23:54:58.155877 2733 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" already exists" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e"
Jul 15 23:54:58.184160 kubelet[2733]: I0715 23:54:58.184050 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" podStartSLOduration=1.183971168 podStartE2EDuration="1.183971168s" podCreationTimestamp="2025-07-15 23:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:54:58.182641619 +0000 UTC m=+1.303225873" watchObservedRunningTime="2025-07-15 23:54:58.183971168 +0000 UTC m=+1.304555422"
Jul 15 23:54:58.217839 kubelet[2733]: I0715 23:54:58.217725 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" podStartSLOduration=1.217696805 podStartE2EDuration="1.217696805s" podCreationTimestamp="2025-07-15 23:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:54:58.200927467 +0000 UTC m=+1.321511813" watchObservedRunningTime="2025-07-15 23:54:58.217696805 +0000 UTC m=+1.338281061"
Jul 15 23:55:02.189651 kubelet[2733]: I0715 23:55:02.189610 2733 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 15 23:55:02.191179 kubelet[2733]: I0715 23:55:02.190570 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 15 23:55:02.191246 containerd[1534]: time="2025-07-15T23:55:02.190087958Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 15 23:55:03.072475 kubelet[2733]: I0715 23:55:03.072372 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" podStartSLOduration=7.072291186 podStartE2EDuration="7.072291186s" podCreationTimestamp="2025-07-15 23:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:54:58.219582683 +0000 UTC m=+1.340166927" watchObservedRunningTime="2025-07-15 23:55:03.072291186 +0000 UTC m=+6.192875440"
Jul 15 23:55:03.087379 systemd[1]: Created slice kubepods-besteffort-podca037c08_793f_45a5_a04b_224b130370aa.slice - libcontainer container kubepods-besteffort-podca037c08_793f_45a5_a04b_224b130370aa.slice.
Jul 15 23:55:03.173241 kubelet[2733]: I0715 23:55:03.173127 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca037c08-793f-45a5-a04b-224b130370aa-xtables-lock\") pod \"kube-proxy-zr24k\" (UID: \"ca037c08-793f-45a5-a04b-224b130370aa\") " pod="kube-system/kube-proxy-zr24k"
Jul 15 23:55:03.173241 kubelet[2733]: I0715 23:55:03.173197 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca037c08-793f-45a5-a04b-224b130370aa-lib-modules\") pod \"kube-proxy-zr24k\" (UID: \"ca037c08-793f-45a5-a04b-224b130370aa\") " pod="kube-system/kube-proxy-zr24k"
Jul 15 23:55:03.173241 kubelet[2733]: I0715 23:55:03.173230 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca037c08-793f-45a5-a04b-224b130370aa-kube-proxy\") pod \"kube-proxy-zr24k\" (UID: \"ca037c08-793f-45a5-a04b-224b130370aa\") " pod="kube-system/kube-proxy-zr24k"
Jul 15 23:55:03.173552 kubelet[2733]: I0715 23:55:03.173265 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bn8s\" (UniqueName: \"kubernetes.io/projected/ca037c08-793f-45a5-a04b-224b130370aa-kube-api-access-7bn8s\") pod \"kube-proxy-zr24k\" (UID: \"ca037c08-793f-45a5-a04b-224b130370aa\") " pod="kube-system/kube-proxy-zr24k"
Jul 15 23:55:03.258602 systemd[1]: Created slice kubepods-besteffort-pod34a3f5be_3498_4eb4_8dcd_dd84c7b46d78.slice - libcontainer container kubepods-besteffort-pod34a3f5be_3498_4eb4_8dcd_dd84c7b46d78.slice.
Jul 15 23:55:03.274043 kubelet[2733]: I0715 23:55:03.273967 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34a3f5be-3498-4eb4-8dcd-dd84c7b46d78-var-lib-calico\") pod \"tigera-operator-747864d56d-6v9db\" (UID: \"34a3f5be-3498-4eb4-8dcd-dd84c7b46d78\") " pod="tigera-operator/tigera-operator-747864d56d-6v9db"
Jul 15 23:55:03.274955 kubelet[2733]: I0715 23:55:03.274872 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ptb\" (UniqueName: \"kubernetes.io/projected/34a3f5be-3498-4eb4-8dcd-dd84c7b46d78-kube-api-access-d9ptb\") pod \"tigera-operator-747864d56d-6v9db\" (UID: \"34a3f5be-3498-4eb4-8dcd-dd84c7b46d78\") " pod="tigera-operator/tigera-operator-747864d56d-6v9db"
Jul 15 23:55:03.401468 containerd[1534]: time="2025-07-15T23:55:03.400795548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zr24k,Uid:ca037c08-793f-45a5-a04b-224b130370aa,Namespace:kube-system,Attempt:0,}"
Jul 15 23:55:03.433977 containerd[1534]: time="2025-07-15T23:55:03.433919694Z" level=info msg="connecting to shim 8650bcae6f04c17cacd142538d6176c32026e10a056d1a7e50aa32a861c21666" address="unix:///run/containerd/s/917c3a3cfb7e41e696d0d7df64a57263c61600d840278a783309061aec85416d" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:55:03.476673 systemd[1]: Started cri-containerd-8650bcae6f04c17cacd142538d6176c32026e10a056d1a7e50aa32a861c21666.scope - libcontainer container 8650bcae6f04c17cacd142538d6176c32026e10a056d1a7e50aa32a861c21666.
Jul 15 23:55:03.526190 containerd[1534]: time="2025-07-15T23:55:03.526097461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zr24k,Uid:ca037c08-793f-45a5-a04b-224b130370aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"8650bcae6f04c17cacd142538d6176c32026e10a056d1a7e50aa32a861c21666\""
Jul 15 23:55:03.532094 containerd[1534]: time="2025-07-15T23:55:03.532007276Z" level=info msg="CreateContainer within sandbox \"8650bcae6f04c17cacd142538d6176c32026e10a056d1a7e50aa32a861c21666\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 15 23:55:03.551389 containerd[1534]: time="2025-07-15T23:55:03.550484508Z" level=info msg="Container 6b5cca059c277a628faeee1cbe1a4b8ec07b0505192c4e223f81100812b9be2c: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:55:03.564204 containerd[1534]: time="2025-07-15T23:55:03.564126945Z" level=info msg="CreateContainer within sandbox \"8650bcae6f04c17cacd142538d6176c32026e10a056d1a7e50aa32a861c21666\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b5cca059c277a628faeee1cbe1a4b8ec07b0505192c4e223f81100812b9be2c\""
Jul 15 23:55:03.565634 containerd[1534]: time="2025-07-15T23:55:03.565180271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6v9db,Uid:34a3f5be-3498-4eb4-8dcd-dd84c7b46d78,Namespace:tigera-operator,Attempt:0,}"
Jul 15 23:55:03.565634 containerd[1534]: time="2025-07-15T23:55:03.565386356Z" level=info msg="StartContainer for \"6b5cca059c277a628faeee1cbe1a4b8ec07b0505192c4e223f81100812b9be2c\""
Jul 15 23:55:03.568745 containerd[1534]: time="2025-07-15T23:55:03.568691909Z" level=info msg="connecting to shim 6b5cca059c277a628faeee1cbe1a4b8ec07b0505192c4e223f81100812b9be2c" address="unix:///run/containerd/s/917c3a3cfb7e41e696d0d7df64a57263c61600d840278a783309061aec85416d" protocol=ttrpc version=3
Jul 15 23:55:03.606158 systemd[1]: Started cri-containerd-6b5cca059c277a628faeee1cbe1a4b8ec07b0505192c4e223f81100812b9be2c.scope - libcontainer container 6b5cca059c277a628faeee1cbe1a4b8ec07b0505192c4e223f81100812b9be2c.
Jul 15 23:55:03.614464 containerd[1534]: time="2025-07-15T23:55:03.614372153Z" level=info msg="connecting to shim a86dcdc151efdf17a780d05f869ac76f908322c42b332cdc0c9d66bb26b1798b" address="unix:///run/containerd/s/2495f7ecb7397a50be480d7a88e72e48b9643f06fd1c3ba0c5a9b26ba106099c" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:55:03.661693 systemd[1]: Started cri-containerd-a86dcdc151efdf17a780d05f869ac76f908322c42b332cdc0c9d66bb26b1798b.scope - libcontainer container a86dcdc151efdf17a780d05f869ac76f908322c42b332cdc0c9d66bb26b1798b.
Jul 15 23:55:03.732337 containerd[1534]: time="2025-07-15T23:55:03.732232029Z" level=info msg="StartContainer for \"6b5cca059c277a628faeee1cbe1a4b8ec07b0505192c4e223f81100812b9be2c\" returns successfully"
Jul 15 23:55:03.795803 containerd[1534]: time="2025-07-15T23:55:03.795645960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-6v9db,Uid:34a3f5be-3498-4eb4-8dcd-dd84c7b46d78,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a86dcdc151efdf17a780d05f869ac76f908322c42b332cdc0c9d66bb26b1798b\""
Jul 15 23:55:03.802175 containerd[1534]: time="2025-07-15T23:55:03.801724903Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 15 23:55:04.185108 kubelet[2733]: I0715 23:55:04.185024 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zr24k" podStartSLOduration=1.184987972 podStartE2EDuration="1.184987972s" podCreationTimestamp="2025-07-15 23:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:55:04.18440662 +0000 UTC m=+7.304990876" watchObservedRunningTime="2025-07-15 23:55:04.184987972 +0000 UTC m=+7.305572224"
Jul 15 23:55:04.899720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214475881.mount: Deactivated successfully.
Jul 15 23:55:05.922438 containerd[1534]: time="2025-07-15T23:55:05.922370337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:55:05.923958 containerd[1534]: time="2025-07-15T23:55:05.923898346Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 15 23:55:05.925847 containerd[1534]: time="2025-07-15T23:55:05.925785273Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:55:05.931347 containerd[1534]: time="2025-07-15T23:55:05.930016060Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:55:05.931347 containerd[1534]: time="2025-07-15T23:55:05.931161717Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.129375379s"
Jul 15 23:55:05.931347 containerd[1534]: time="2025-07-15T23:55:05.931208033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 15 23:55:05.935848 containerd[1534]: time="2025-07-15T23:55:05.935798344Z" level=info msg="CreateContainer within sandbox \"a86dcdc151efdf17a780d05f869ac76f908322c42b332cdc0c9d66bb26b1798b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 15 23:55:05.951355 containerd[1534]: time="2025-07-15T23:55:05.950117592Z" level=info msg="Container a4cca8ab835f16912549d6d706311c06374e92fc840520b36f2549cc77567e64: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:55:05.959034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678602824.mount: Deactivated successfully.
Jul 15 23:55:05.965244 containerd[1534]: time="2025-07-15T23:55:05.965166149Z" level=info msg="CreateContainer within sandbox \"a86dcdc151efdf17a780d05f869ac76f908322c42b332cdc0c9d66bb26b1798b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a4cca8ab835f16912549d6d706311c06374e92fc840520b36f2549cc77567e64\""
Jul 15 23:55:05.966177 containerd[1534]: time="2025-07-15T23:55:05.966109603Z" level=info msg="StartContainer for \"a4cca8ab835f16912549d6d706311c06374e92fc840520b36f2549cc77567e64\""
Jul 15 23:55:05.967555 containerd[1534]: time="2025-07-15T23:55:05.967463673Z" level=info msg="connecting to shim a4cca8ab835f16912549d6d706311c06374e92fc840520b36f2549cc77567e64" address="unix:///run/containerd/s/2495f7ecb7397a50be480d7a88e72e48b9643f06fd1c3ba0c5a9b26ba106099c" protocol=ttrpc version=3
Jul 15 23:55:06.007680 systemd[1]: Started cri-containerd-a4cca8ab835f16912549d6d706311c06374e92fc840520b36f2549cc77567e64.scope - libcontainer container a4cca8ab835f16912549d6d706311c06374e92fc840520b36f2549cc77567e64.
Jul 15 23:55:06.067599 containerd[1534]: time="2025-07-15T23:55:06.067525357Z" level=info msg="StartContainer for \"a4cca8ab835f16912549d6d706311c06374e92fc840520b36f2549cc77567e64\" returns successfully"
Jul 15 23:55:06.190454 kubelet[2733]: I0715 23:55:06.190241 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-6v9db" podStartSLOduration=1.057190187 podStartE2EDuration="3.190213292s" podCreationTimestamp="2025-07-15 23:55:03 +0000 UTC" firstStartedPulling="2025-07-15 23:55:03.799536266 +0000 UTC m=+6.920120513" lastFinishedPulling="2025-07-15 23:55:05.932559374 +0000 UTC m=+9.053143618" observedRunningTime="2025-07-15 23:55:06.190022367 +0000 UTC m=+9.310606620" watchObservedRunningTime="2025-07-15 23:55:06.190213292 +0000 UTC m=+9.310797545"
Jul 15 23:55:08.470483 update_engine[1521]: I20250715 23:55:08.470371 1521 update_attempter.cc:509] Updating boot flags...
Jul 15 23:55:14.101428 sudo[1823]: pam_unix(sudo:session): session closed for user root
Jul 15 23:55:14.145725 sshd[1822]: Connection closed by 139.178.89.65 port 41292
Jul 15 23:55:14.148623 sshd-session[1820]: pam_unix(sshd:session): session closed for user core
Jul 15 23:55:14.161493 systemd[1]: sshd@6-10.128.0.4:22-139.178.89.65:41292.service: Deactivated successfully.
Jul 15 23:55:14.170957 systemd[1]: session-7.scope: Deactivated successfully.
Jul 15 23:55:14.172176 systemd[1]: session-7.scope: Consumed 6.166s CPU time, 228.3M memory peak.
Jul 15 23:55:14.178668 systemd-logind[1512]: Session 7 logged out. Waiting for processes to exit.
Jul 15 23:55:14.183886 systemd-logind[1512]: Removed session 7.
Jul 15 23:55:20.126697 systemd[1]: Created slice kubepods-besteffort-podef597237_dc40_4645_8182_cc9fb72c1e3d.slice - libcontainer container kubepods-besteffort-podef597237_dc40_4645_8182_cc9fb72c1e3d.slice.
Jul 15 23:55:20.200257 kubelet[2733]: I0715 23:55:20.200059 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef597237-dc40-4645-8182-cc9fb72c1e3d-typha-certs\") pod \"calico-typha-787b6ff7dc-jbvh9\" (UID: \"ef597237-dc40-4645-8182-cc9fb72c1e3d\") " pod="calico-system/calico-typha-787b6ff7dc-jbvh9"
Jul 15 23:55:20.200257 kubelet[2733]: I0715 23:55:20.200131 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef597237-dc40-4645-8182-cc9fb72c1e3d-tigera-ca-bundle\") pod \"calico-typha-787b6ff7dc-jbvh9\" (UID: \"ef597237-dc40-4645-8182-cc9fb72c1e3d\") " pod="calico-system/calico-typha-787b6ff7dc-jbvh9"
Jul 15 23:55:20.200257 kubelet[2733]: I0715 23:55:20.200171 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4g8\" (UniqueName: \"kubernetes.io/projected/ef597237-dc40-4645-8182-cc9fb72c1e3d-kube-api-access-xb4g8\") pod \"calico-typha-787b6ff7dc-jbvh9\" (UID: \"ef597237-dc40-4645-8182-cc9fb72c1e3d\") " pod="calico-system/calico-typha-787b6ff7dc-jbvh9"
Jul 15 23:55:20.437375 containerd[1534]: time="2025-07-15T23:55:20.436741912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-787b6ff7dc-jbvh9,Uid:ef597237-dc40-4645-8182-cc9fb72c1e3d,Namespace:calico-system,Attempt:0,}"
Jul 15 23:55:20.459961 systemd[1]: Created slice kubepods-besteffort-podd5ad0a85_4fbe_42cf_a471_9213f84935de.slice - libcontainer container kubepods-besteffort-podd5ad0a85_4fbe_42cf_a471_9213f84935de.slice.
Jul 15 23:55:20.493734 containerd[1534]: time="2025-07-15T23:55:20.493670308Z" level=info msg="connecting to shim d20c01474126a5d387f0b0f43fe13dfb766b1978380e4b653bb45369f541254a" address="unix:///run/containerd/s/296f599016a39a8c7bb2145ecd0d00cc473126f8ea0f79019e567ee6801b387f" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:55:20.503341 kubelet[2733]: I0715 23:55:20.503270 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-cni-net-dir\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503547 kubelet[2733]: I0715 23:55:20.503361 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-lib-modules\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503547 kubelet[2733]: I0715 23:55:20.503390 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-var-lib-calico\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503547 kubelet[2733]: I0715 23:55:20.503417 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-policysync\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503547 kubelet[2733]: I0715 23:55:20.503443 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-cni-log-dir\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503547 kubelet[2733]: I0715 23:55:20.503470 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-xtables-lock\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503811 kubelet[2733]: I0715 23:55:20.503499 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-cni-bin-dir\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503811 kubelet[2733]: I0715 23:55:20.503530 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-flexvol-driver-host\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503811 kubelet[2733]: I0715 23:55:20.503555 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spq2h\" (UniqueName: \"kubernetes.io/projected/d5ad0a85-4fbe-42cf-a471-9213f84935de-kube-api-access-spq2h\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503811 kubelet[2733]: I0715 23:55:20.503585 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ad0a85-4fbe-42cf-a471-9213f84935de-tigera-ca-bundle\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.503811 kubelet[2733]: I0715 23:55:20.503615 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d5ad0a85-4fbe-42cf-a471-9213f84935de-node-certs\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.504065 kubelet[2733]: I0715 23:55:20.503652 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d5ad0a85-4fbe-42cf-a471-9213f84935de-var-run-calico\") pod \"calico-node-srjgh\" (UID: \"d5ad0a85-4fbe-42cf-a471-9213f84935de\") " pod="calico-system/calico-node-srjgh"
Jul 15 23:55:20.561739 systemd[1]: Started cri-containerd-d20c01474126a5d387f0b0f43fe13dfb766b1978380e4b653bb45369f541254a.scope - libcontainer container d20c01474126a5d387f0b0f43fe13dfb766b1978380e4b653bb45369f541254a.
Jul 15 23:55:20.606530 kubelet[2733]: E0715 23:55:20.606464 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:55:20.607039 kubelet[2733]: W0715 23:55:20.606831 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:55:20.607039 kubelet[2733]: E0715 23:55:20.606932 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:55:20.608282 kubelet[2733]: E0715 23:55:20.608221 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:55:20.609389 kubelet[2733]: W0715 23:55:20.609355 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:55:20.609508 kubelet[2733]: E0715 23:55:20.609397 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:55:20.609795 kubelet[2733]: E0715 23:55:20.609756 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:55:20.609795 kubelet[2733]: W0715 23:55:20.609781 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:55:20.610271 kubelet[2733]: E0715 23:55:20.610083 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:55:20.610271 kubelet[2733]: W0715 23:55:20.610100 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:55:20.610271 kubelet[2733]: E0715 23:55:20.610117 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:55:20.610733 kubelet[2733]: E0715 23:55:20.609824 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:55:20.611671 kubelet[2733]: E0715 23:55:20.611560 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:55:20.611671 kubelet[2733]: W0715 23:55:20.611593 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:55:20.611671 kubelet[2733]: E0715 23:55:20.611624 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 15 23:55:20.612224 kubelet[2733]: E0715 23:55:20.611984 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 15 23:55:20.612224 kubelet[2733]: W0715 23:55:20.612001 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 15 23:55:20.612224 kubelet[2733]: E0715 23:55:20.612021 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 15 23:55:20.612416 kubelet[2733]: E0715 23:55:20.612366 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.612416 kubelet[2733]: W0715 23:55:20.612382 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.612416 kubelet[2733]: E0715 23:55:20.612401 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.614080 kubelet[2733]: E0715 23:55:20.613861 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.614602 kubelet[2733]: W0715 23:55:20.614456 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.614977 kubelet[2733]: E0715 23:55:20.614795 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.620925 kubelet[2733]: E0715 23:55:20.620774 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.621390 kubelet[2733]: W0715 23:55:20.620987 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.621390 kubelet[2733]: E0715 23:55:20.621030 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.630490 kubelet[2733]: E0715 23:55:20.630449 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.630490 kubelet[2733]: W0715 23:55:20.630487 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.630736 kubelet[2733]: E0715 23:55:20.630521 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.660048 kubelet[2733]: E0715 23:55:20.660008 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.660048 kubelet[2733]: W0715 23:55:20.660039 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.660279 kubelet[2733]: E0715 23:55:20.660071 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.730449 kubelet[2733]: E0715 23:55:20.729056 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4nkh" podUID="780f5aa2-cfc2-4e2b-9f15-499cb6a49a94" Jul 15 23:55:20.735515 kubelet[2733]: I0715 23:55:20.735442 2733 status_manager.go:890] "Failed to get status for pod" podUID="780f5aa2-cfc2-4e2b-9f15-499cb6a49a94" pod="calico-system/csi-node-driver-p4nkh" err="pods \"csi-node-driver-p4nkh\" is forbidden: User \"system:node:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' and this object" Jul 15 23:55:20.785293 kubelet[2733]: E0715 23:55:20.785220 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.785833 kubelet[2733]: W0715 23:55:20.785463 2733 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.785833 kubelet[2733]: E0715 23:55:20.785624 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.787805 kubelet[2733]: E0715 23:55:20.787576 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.787805 kubelet[2733]: W0715 23:55:20.787609 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.787805 kubelet[2733]: E0715 23:55:20.787641 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.788978 kubelet[2733]: E0715 23:55:20.788813 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.788978 kubelet[2733]: W0715 23:55:20.788846 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.788978 kubelet[2733]: E0715 23:55:20.788875 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.790107 kubelet[2733]: E0715 23:55:20.790023 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.790446 kubelet[2733]: W0715 23:55:20.790050 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.790446 kubelet[2733]: E0715 23:55:20.790204 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.791907 kubelet[2733]: E0715 23:55:20.791461 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.791907 kubelet[2733]: W0715 23:55:20.791525 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.792255 kubelet[2733]: E0715 23:55:20.792116 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.793147 kubelet[2733]: E0715 23:55:20.793108 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.793303 kubelet[2733]: W0715 23:55:20.793279 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.793590 kubelet[2733]: E0715 23:55:20.793561 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.794350 kubelet[2733]: E0715 23:55:20.794243 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.794350 kubelet[2733]: W0715 23:55:20.794269 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.794939 kubelet[2733]: E0715 23:55:20.794305 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.795643 kubelet[2733]: E0715 23:55:20.795360 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.795782 kubelet[2733]: W0715 23:55:20.795759 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.796138 kubelet[2733]: E0715 23:55:20.796074 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.798193 kubelet[2733]: E0715 23:55:20.798088 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.798607 kubelet[2733]: W0715 23:55:20.798268 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.798607 kubelet[2733]: E0715 23:55:20.798297 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.799169 containerd[1534]: time="2025-07-15T23:55:20.799123692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-srjgh,Uid:d5ad0a85-4fbe-42cf-a471-9213f84935de,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:20.799767 kubelet[2733]: E0715 23:55:20.799740 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.800038 kubelet[2733]: W0715 23:55:20.799920 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.800279 kubelet[2733]: E0715 23:55:20.800164 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.801525 kubelet[2733]: E0715 23:55:20.801270 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.801525 kubelet[2733]: W0715 23:55:20.801294 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.801525 kubelet[2733]: E0715 23:55:20.801429 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.804674 kubelet[2733]: E0715 23:55:20.804619 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.804674 kubelet[2733]: W0715 23:55:20.804675 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.805052 kubelet[2733]: E0715 23:55:20.804704 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.806432 kubelet[2733]: E0715 23:55:20.806392 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.806432 kubelet[2733]: W0715 23:55:20.806421 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.806629 kubelet[2733]: E0715 23:55:20.806447 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.806808 kubelet[2733]: E0715 23:55:20.806752 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.806808 kubelet[2733]: W0715 23:55:20.806771 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.807110 kubelet[2733]: E0715 23:55:20.806829 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.807564 kubelet[2733]: E0715 23:55:20.807490 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.807564 kubelet[2733]: W0715 23:55:20.807509 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.807564 kubelet[2733]: E0715 23:55:20.807528 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.808740 kubelet[2733]: E0715 23:55:20.808708 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.808740 kubelet[2733]: W0715 23:55:20.808735 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.809288 kubelet[2733]: E0715 23:55:20.808754 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.811642 kubelet[2733]: E0715 23:55:20.811584 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.811642 kubelet[2733]: W0715 23:55:20.811614 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.811642 kubelet[2733]: E0715 23:55:20.811638 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.812362 kubelet[2733]: E0715 23:55:20.812291 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.812556 kubelet[2733]: W0715 23:55:20.812437 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.812556 kubelet[2733]: E0715 23:55:20.812463 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.813439 kubelet[2733]: E0715 23:55:20.813372 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.813439 kubelet[2733]: W0715 23:55:20.813394 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.814362 kubelet[2733]: E0715 23:55:20.814009 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.814554 kubelet[2733]: E0715 23:55:20.814535 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.814768 kubelet[2733]: W0715 23:55:20.814677 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.815205 kubelet[2733]: E0715 23:55:20.814928 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.818256 kubelet[2733]: E0715 23:55:20.818230 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.818707 kubelet[2733]: W0715 23:55:20.818500 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.820232 kubelet[2733]: E0715 23:55:20.819575 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.820232 kubelet[2733]: I0715 23:55:20.819629 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cq4r\" (UniqueName: \"kubernetes.io/projected/780f5aa2-cfc2-4e2b-9f15-499cb6a49a94-kube-api-access-4cq4r\") pod \"csi-node-driver-p4nkh\" (UID: \"780f5aa2-cfc2-4e2b-9f15-499cb6a49a94\") " pod="calico-system/csi-node-driver-p4nkh" Jul 15 23:55:20.822433 kubelet[2733]: E0715 23:55:20.822295 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.822433 kubelet[2733]: W0715 23:55:20.822386 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.823446 kubelet[2733]: E0715 23:55:20.822613 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.823446 kubelet[2733]: I0715 23:55:20.822729 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/780f5aa2-cfc2-4e2b-9f15-499cb6a49a94-registration-dir\") pod \"csi-node-driver-p4nkh\" (UID: \"780f5aa2-cfc2-4e2b-9f15-499cb6a49a94\") " pod="calico-system/csi-node-driver-p4nkh" Jul 15 23:55:20.823948 kubelet[2733]: E0715 23:55:20.823779 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.823948 kubelet[2733]: W0715 23:55:20.823804 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.825166 kubelet[2733]: E0715 23:55:20.823829 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.825166 kubelet[2733]: I0715 23:55:20.824290 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/780f5aa2-cfc2-4e2b-9f15-499cb6a49a94-kubelet-dir\") pod \"csi-node-driver-p4nkh\" (UID: \"780f5aa2-cfc2-4e2b-9f15-499cb6a49a94\") " pod="calico-system/csi-node-driver-p4nkh" Jul 15 23:55:20.829341 containerd[1534]: time="2025-07-15T23:55:20.828094630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-787b6ff7dc-jbvh9,Uid:ef597237-dc40-4645-8182-cc9fb72c1e3d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d20c01474126a5d387f0b0f43fe13dfb766b1978380e4b653bb45369f541254a\"" Jul 15 23:55:20.829502 kubelet[2733]: E0715 23:55:20.828853 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.829917 kubelet[2733]: W0715 23:55:20.828879 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.829917 kubelet[2733]: E0715 23:55:20.829756 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.831943 kubelet[2733]: E0715 23:55:20.831637 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.831943 kubelet[2733]: W0715 23:55:20.831663 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.832704 kubelet[2733]: E0715 23:55:20.832181 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.834408 kubelet[2733]: E0715 23:55:20.834346 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.834408 kubelet[2733]: W0715 23:55:20.834371 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.837089 kubelet[2733]: E0715 23:55:20.837019 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.837089 kubelet[2733]: W0715 23:55:20.837048 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.839056 kubelet[2733]: E0715 23:55:20.839028 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.839593 kubelet[2733]: W0715 23:55:20.839565 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.839801 kubelet[2733]: E0715 23:55:20.839738 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.840075 kubelet[2733]: I0715 23:55:20.839785 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/780f5aa2-cfc2-4e2b-9f15-499cb6a49a94-varrun\") pod \"csi-node-driver-p4nkh\" (UID: \"780f5aa2-cfc2-4e2b-9f15-499cb6a49a94\") " pod="calico-system/csi-node-driver-p4nkh" Jul 15 23:55:20.840713 kubelet[2733]: E0715 23:55:20.839520 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.843272 kubelet[2733]: E0715 23:55:20.839540 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.843272 kubelet[2733]: E0715 23:55:20.842864 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.843272 kubelet[2733]: W0715 23:55:20.842916 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.843272 kubelet[2733]: E0715 23:55:20.842945 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.843272 kubelet[2733]: I0715 23:55:20.842990 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/780f5aa2-cfc2-4e2b-9f15-499cb6a49a94-socket-dir\") pod \"csi-node-driver-p4nkh\" (UID: \"780f5aa2-cfc2-4e2b-9f15-499cb6a49a94\") " pod="calico-system/csi-node-driver-p4nkh" Jul 15 23:55:20.847549 kubelet[2733]: E0715 23:55:20.847375 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.847549 kubelet[2733]: W0715 23:55:20.847410 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.848347 kubelet[2733]: E0715 23:55:20.848190 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.850753 kubelet[2733]: E0715 23:55:20.850697 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.851184 kubelet[2733]: W0715 23:55:20.851029 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.851465 kubelet[2733]: E0715 23:55:20.851283 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.854885 kubelet[2733]: E0715 23:55:20.854845 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.856679 kubelet[2733]: W0715 23:55:20.856442 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.856679 kubelet[2733]: E0715 23:55:20.856499 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.857648 kubelet[2733]: E0715 23:55:20.857571 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.857648 kubelet[2733]: W0715 23:55:20.857599 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.860386 kubelet[2733]: E0715 23:55:20.857625 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.861596 kubelet[2733]: E0715 23:55:20.861440 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.861596 kubelet[2733]: W0715 23:55:20.861470 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.861596 kubelet[2733]: E0715 23:55:20.861501 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.863037 containerd[1534]: time="2025-07-15T23:55:20.862950034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 15 23:55:20.863518 kubelet[2733]: E0715 23:55:20.863446 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.863518 kubelet[2733]: W0715 23:55:20.863470 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.863518 kubelet[2733]: E0715 23:55:20.863499 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.886336 containerd[1534]: time="2025-07-15T23:55:20.886069366Z" level=info msg="connecting to shim 82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520" address="unix:///run/containerd/s/1f461db81c09d3fa915a94b79aeb7d4e01a1552358d4fba4d866da6494d6e2c4" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:20.946647 systemd[1]: Started cri-containerd-82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520.scope - libcontainer container 82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520. Jul 15 23:55:20.955494 kubelet[2733]: E0715 23:55:20.955454 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.956122 kubelet[2733]: W0715 23:55:20.955725 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.956122 kubelet[2733]: E0715 23:55:20.955771 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.960342 kubelet[2733]: E0715 23:55:20.959432 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.960342 kubelet[2733]: W0715 23:55:20.959463 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.960342 kubelet[2733]: E0715 23:55:20.959998 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.960622 kubelet[2733]: E0715 23:55:20.960463 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.960622 kubelet[2733]: W0715 23:55:20.960481 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.960622 kubelet[2733]: E0715 23:55:20.960554 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.962264 kubelet[2733]: E0715 23:55:20.960994 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.962264 kubelet[2733]: W0715 23:55:20.961042 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.962264 kubelet[2733]: E0715 23:55:20.961063 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.962264 kubelet[2733]: E0715 23:55:20.961478 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.962264 kubelet[2733]: W0715 23:55:20.961521 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.962264 kubelet[2733]: E0715 23:55:20.961540 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.962264 kubelet[2733]: E0715 23:55:20.962139 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.962264 kubelet[2733]: W0715 23:55:20.962158 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.962264 kubelet[2733]: E0715 23:55:20.962203 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.962831 kubelet[2733]: E0715 23:55:20.962701 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.962831 kubelet[2733]: W0715 23:55:20.962750 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.962957 kubelet[2733]: E0715 23:55:20.962785 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.963574 kubelet[2733]: E0715 23:55:20.963446 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.963574 kubelet[2733]: W0715 23:55:20.963465 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.963574 kubelet[2733]: E0715 23:55:20.963489 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.964278 kubelet[2733]: E0715 23:55:20.964180 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.964278 kubelet[2733]: W0715 23:55:20.964213 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.964278 kubelet[2733]: E0715 23:55:20.964251 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.965494 kubelet[2733]: E0715 23:55:20.964653 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.965494 kubelet[2733]: W0715 23:55:20.964668 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.965494 kubelet[2733]: E0715 23:55:20.964703 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.965494 kubelet[2733]: E0715 23:55:20.964980 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.965494 kubelet[2733]: W0715 23:55:20.964994 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.965494 kubelet[2733]: E0715 23:55:20.965213 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.965494 kubelet[2733]: E0715 23:55:20.965394 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.965494 kubelet[2733]: W0715 23:55:20.965407 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.965494 kubelet[2733]: E0715 23:55:20.965440 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.967493 kubelet[2733]: E0715 23:55:20.965983 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.967493 kubelet[2733]: W0715 23:55:20.966004 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.967493 kubelet[2733]: E0715 23:55:20.966027 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.967493 kubelet[2733]: E0715 23:55:20.966438 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.967493 kubelet[2733]: W0715 23:55:20.966461 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.967493 kubelet[2733]: E0715 23:55:20.966477 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.967493 kubelet[2733]: E0715 23:55:20.966908 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.967493 kubelet[2733]: W0715 23:55:20.966925 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.967493 kubelet[2733]: E0715 23:55:20.966942 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.967493 kubelet[2733]: E0715 23:55:20.967275 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.969525 kubelet[2733]: W0715 23:55:20.967301 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.969525 kubelet[2733]: E0715 23:55:20.967330 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.969525 kubelet[2733]: E0715 23:55:20.967680 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.969525 kubelet[2733]: W0715 23:55:20.967695 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.969525 kubelet[2733]: E0715 23:55:20.967718 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.971102 kubelet[2733]: E0715 23:55:20.970788 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.971102 kubelet[2733]: W0715 23:55:20.970824 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.971102 kubelet[2733]: E0715 23:55:20.970882 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.972445 kubelet[2733]: E0715 23:55:20.972422 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.972600 kubelet[2733]: W0715 23:55:20.972576 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.972763 kubelet[2733]: E0715 23:55:20.972718 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.973199 kubelet[2733]: E0715 23:55:20.973141 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.973199 kubelet[2733]: W0715 23:55:20.973163 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.973637 kubelet[2733]: E0715 23:55:20.973553 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.974161 kubelet[2733]: E0715 23:55:20.974066 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.974161 kubelet[2733]: W0715 23:55:20.974097 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.974933 kubelet[2733]: E0715 23:55:20.974356 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.975491 kubelet[2733]: E0715 23:55:20.975447 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.975491 kubelet[2733]: W0715 23:55:20.975466 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.975830 kubelet[2733]: E0715 23:55:20.975734 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.976573 kubelet[2733]: E0715 23:55:20.976517 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.976573 kubelet[2733]: W0715 23:55:20.976538 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.977193 kubelet[2733]: E0715 23:55:20.976829 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:20.977649 kubelet[2733]: E0715 23:55:20.977630 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.977780 kubelet[2733]: W0715 23:55:20.977752 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.978039 kubelet[2733]: E0715 23:55:20.977987 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:20.978520 kubelet[2733]: E0715 23:55:20.978433 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:20.978520 kubelet[2733]: W0715 23:55:20.978452 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:20.978520 kubelet[2733]: E0715 23:55:20.978470 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:21.024404 kubelet[2733]: E0715 23:55:21.022709 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:21.024404 kubelet[2733]: W0715 23:55:21.022745 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:21.024404 kubelet[2733]: E0715 23:55:21.022778 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:21.099286 containerd[1534]: time="2025-07-15T23:55:21.099147714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-srjgh,Uid:d5ad0a85-4fbe-42cf-a471-9213f84935de,Namespace:calico-system,Attempt:0,} returns sandbox id \"82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520\"" Jul 15 23:55:21.943767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862410152.mount: Deactivated successfully. 
Jul 15 23:55:22.092623 kubelet[2733]: E0715 23:55:22.091937 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4nkh" podUID="780f5aa2-cfc2-4e2b-9f15-499cb6a49a94" Jul 15 23:55:23.250933 containerd[1534]: time="2025-07-15T23:55:23.250851000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:23.252741 containerd[1534]: time="2025-07-15T23:55:23.252670617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 15 23:55:23.254820 containerd[1534]: time="2025-07-15T23:55:23.254738579Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:23.258987 containerd[1534]: time="2025-07-15T23:55:23.258502266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:23.263264 containerd[1534]: time="2025-07-15T23:55:23.263191774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.400183766s" Jul 15 23:55:23.263264 containerd[1534]: time="2025-07-15T23:55:23.263246757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 15 23:55:23.269728 containerd[1534]: time="2025-07-15T23:55:23.269584834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 15 23:55:23.311604 containerd[1534]: time="2025-07-15T23:55:23.311538419Z" level=info msg="CreateContainer within sandbox \"d20c01474126a5d387f0b0f43fe13dfb766b1978380e4b653bb45369f541254a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 15 23:55:23.332386 containerd[1534]: time="2025-07-15T23:55:23.329201923Z" level=info msg="Container f26720340e3e34ffcd847847f4245ea1fce2e3b8b27ff8397e91bb6e63add77c: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:23.339663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652083733.mount: Deactivated successfully. Jul 15 23:55:23.362857 containerd[1534]: time="2025-07-15T23:55:23.362604384Z" level=info msg="CreateContainer within sandbox \"d20c01474126a5d387f0b0f43fe13dfb766b1978380e4b653bb45369f541254a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f26720340e3e34ffcd847847f4245ea1fce2e3b8b27ff8397e91bb6e63add77c\"" Jul 15 23:55:23.363857 containerd[1534]: time="2025-07-15T23:55:23.363802663Z" level=info msg="StartContainer for \"f26720340e3e34ffcd847847f4245ea1fce2e3b8b27ff8397e91bb6e63add77c\"" Jul 15 23:55:23.365229 containerd[1534]: time="2025-07-15T23:55:23.365162879Z" level=info msg="connecting to shim f26720340e3e34ffcd847847f4245ea1fce2e3b8b27ff8397e91bb6e63add77c" address="unix:///run/containerd/s/296f599016a39a8c7bb2145ecd0d00cc473126f8ea0f79019e567ee6801b387f" protocol=ttrpc version=3 Jul 15 23:55:23.413719 systemd[1]: Started cri-containerd-f26720340e3e34ffcd847847f4245ea1fce2e3b8b27ff8397e91bb6e63add77c.scope - libcontainer container f26720340e3e34ffcd847847f4245ea1fce2e3b8b27ff8397e91bb6e63add77c. 
Jul 15 23:55:23.510911 containerd[1534]: time="2025-07-15T23:55:23.510666827Z" level=info msg="StartContainer for \"f26720340e3e34ffcd847847f4245ea1fce2e3b8b27ff8397e91bb6e63add77c\" returns successfully" Jul 15 23:55:24.091886 kubelet[2733]: E0715 23:55:24.091818 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4nkh" podUID="780f5aa2-cfc2-4e2b-9f15-499cb6a49a94" Jul 15 23:55:24.310305 kubelet[2733]: I0715 23:55:24.310186 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-787b6ff7dc-jbvh9" podStartSLOduration=1.897929812 podStartE2EDuration="4.310158652s" podCreationTimestamp="2025-07-15 23:55:20 +0000 UTC" firstStartedPulling="2025-07-15 23:55:20.854688754 +0000 UTC m=+23.975273000" lastFinishedPulling="2025-07-15 23:55:23.266917589 +0000 UTC m=+26.387501840" observedRunningTime="2025-07-15 23:55:24.285067369 +0000 UTC m=+27.405651623" watchObservedRunningTime="2025-07-15 23:55:24.310158652 +0000 UTC m=+27.430742905" Jul 15 23:55:24.314962 containerd[1534]: time="2025-07-15T23:55:24.314496478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:24.317267 containerd[1534]: time="2025-07-15T23:55:24.317217334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 15 23:55:24.318825 containerd[1534]: time="2025-07-15T23:55:24.318736262Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:24.324260 containerd[1534]: time="2025-07-15T23:55:24.322678745Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:24.325027 containerd[1534]: time="2025-07-15T23:55:24.324728516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.054682864s" Jul 15 23:55:24.325477 containerd[1534]: time="2025-07-15T23:55:24.325433860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 15 23:55:24.330970 containerd[1534]: time="2025-07-15T23:55:24.330902068Z" level=info msg="CreateContainer within sandbox \"82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 15 23:55:24.342193 kubelet[2733]: E0715 23:55:24.342060 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.342465 kubelet[2733]: W0715 23:55:24.342432 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.342808 kubelet[2733]: E0715 23:55:24.342625 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.343265 kubelet[2733]: E0715 23:55:24.343243 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.343628 kubelet[2733]: W0715 23:55:24.343413 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.343628 kubelet[2733]: E0715 23:55:24.343446 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.343950 kubelet[2733]: E0715 23:55:24.343930 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.344240 kubelet[2733]: W0715 23:55:24.344055 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.344240 kubelet[2733]: E0715 23:55:24.344085 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.346128 kubelet[2733]: E0715 23:55:24.345732 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.346128 kubelet[2733]: W0715 23:55:24.345754 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.346128 kubelet[2733]: E0715 23:55:24.345815 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.350336 kubelet[2733]: E0715 23:55:24.347873 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.350473 containerd[1534]: time="2025-07-15T23:55:24.348019794Z" level=info msg="Container 84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:24.350551 kubelet[2733]: W0715 23:55:24.347899 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.350551 kubelet[2733]: E0715 23:55:24.350404 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.350884 kubelet[2733]: E0715 23:55:24.350861 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.350983 kubelet[2733]: W0715 23:55:24.350884 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.350983 kubelet[2733]: E0715 23:55:24.350904 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.351587 kubelet[2733]: E0715 23:55:24.351563 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.351587 kubelet[2733]: W0715 23:55:24.351586 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.351745 kubelet[2733]: E0715 23:55:24.351605 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.353557 kubelet[2733]: E0715 23:55:24.353497 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.353557 kubelet[2733]: W0715 23:55:24.353533 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.353557 kubelet[2733]: E0715 23:55:24.353557 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.359164 kubelet[2733]: E0715 23:55:24.358242 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.359164 kubelet[2733]: W0715 23:55:24.358280 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.359412 kubelet[2733]: E0715 23:55:24.359365 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.360992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621716055.mount: Deactivated successfully. 
Jul 15 23:55:24.361504 kubelet[2733]: E0715 23:55:24.361472 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.361504 kubelet[2733]: W0715 23:55:24.361495 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.361642 kubelet[2733]: E0715 23:55:24.361523 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.364748 kubelet[2733]: E0715 23:55:24.364394 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.364748 kubelet[2733]: W0715 23:55:24.364420 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.364748 kubelet[2733]: E0715 23:55:24.364447 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.365002 kubelet[2733]: E0715 23:55:24.364834 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.365002 kubelet[2733]: W0715 23:55:24.364849 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.365002 kubelet[2733]: E0715 23:55:24.364870 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.366638 kubelet[2733]: E0715 23:55:24.366596 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.366638 kubelet[2733]: W0715 23:55:24.366618 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.366813 kubelet[2733]: E0715 23:55:24.366641 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.367949 kubelet[2733]: E0715 23:55:24.367712 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.367949 kubelet[2733]: W0715 23:55:24.367734 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.367949 kubelet[2733]: E0715 23:55:24.367753 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.368328 kubelet[2733]: E0715 23:55:24.368189 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.368328 kubelet[2733]: W0715 23:55:24.368207 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.368328 kubelet[2733]: E0715 23:55:24.368225 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.379747 containerd[1534]: time="2025-07-15T23:55:24.379521782Z" level=info msg="CreateContainer within sandbox \"82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08\"" Jul 15 23:55:24.380555 containerd[1534]: time="2025-07-15T23:55:24.380395186Z" level=info msg="StartContainer for \"84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08\"" Jul 15 23:55:24.383195 containerd[1534]: time="2025-07-15T23:55:24.383148711Z" level=info msg="connecting to shim 84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08" address="unix:///run/containerd/s/1f461db81c09d3fa915a94b79aeb7d4e01a1552358d4fba4d866da6494d6e2c4" protocol=ttrpc version=3 Jul 15 23:55:24.396245 kubelet[2733]: E0715 23:55:24.396190 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.396245 kubelet[2733]: W0715 23:55:24.396243 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 
23:55:24.397434 kubelet[2733]: E0715 23:55:24.396279 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.397434 kubelet[2733]: E0715 23:55:24.396998 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.397434 kubelet[2733]: W0715 23:55:24.397015 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.397434 kubelet[2733]: E0715 23:55:24.397240 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.397434 kubelet[2733]: E0715 23:55:24.397433 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.397778 kubelet[2733]: W0715 23:55:24.397447 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.397778 kubelet[2733]: E0715 23:55:24.397469 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.397890 kubelet[2733]: E0715 23:55:24.397868 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.397890 kubelet[2733]: W0715 23:55:24.397883 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.397995 kubelet[2733]: E0715 23:55:24.397930 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.399634 kubelet[2733]: E0715 23:55:24.399579 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.399634 kubelet[2733]: W0715 23:55:24.399600 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.400470 kubelet[2733]: E0715 23:55:24.400404 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.400934 kubelet[2733]: E0715 23:55:24.400906 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.400934 kubelet[2733]: W0715 23:55:24.400932 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.401281 kubelet[2733]: E0715 23:55:24.401187 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.401474 kubelet[2733]: E0715 23:55:24.401302 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.401474 kubelet[2733]: W0715 23:55:24.401342 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.401474 kubelet[2733]: E0715 23:55:24.401434 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.402139 kubelet[2733]: E0715 23:55:24.401697 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.402139 kubelet[2733]: W0715 23:55:24.401712 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.402139 kubelet[2733]: E0715 23:55:24.401882 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.402494 kubelet[2733]: E0715 23:55:24.402475 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.402676 kubelet[2733]: W0715 23:55:24.402608 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.402676 kubelet[2733]: E0715 23:55:24.402645 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.403585 kubelet[2733]: E0715 23:55:24.403527 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.403585 kubelet[2733]: W0715 23:55:24.403547 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.403829 kubelet[2733]: E0715 23:55:24.403761 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.404337 kubelet[2733]: E0715 23:55:24.404233 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.404337 kubelet[2733]: W0715 23:55:24.404270 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.404674 kubelet[2733]: E0715 23:55:24.404628 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.404979 kubelet[2733]: E0715 23:55:24.404936 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.404979 kubelet[2733]: W0715 23:55:24.404957 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.405287 kubelet[2733]: E0715 23:55:24.405254 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.405626 kubelet[2733]: E0715 23:55:24.405608 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.405803 kubelet[2733]: W0715 23:55:24.405753 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.406110 kubelet[2733]: E0715 23:55:24.406061 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.406975 kubelet[2733]: E0715 23:55:24.406955 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.407238 kubelet[2733]: W0715 23:55:24.407122 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.407728 kubelet[2733]: E0715 23:55:24.407582 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.410372 kubelet[2733]: E0715 23:55:24.410076 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.410372 kubelet[2733]: W0715 23:55:24.410101 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.410372 kubelet[2733]: E0715 23:55:24.410125 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.411547 kubelet[2733]: E0715 23:55:24.410859 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.411547 kubelet[2733]: W0715 23:55:24.410886 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.411547 kubelet[2733]: E0715 23:55:24.410906 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.415333 kubelet[2733]: E0715 23:55:24.415207 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.415333 kubelet[2733]: W0715 23:55:24.415237 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.417942 kubelet[2733]: E0715 23:55:24.417733 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 15 23:55:24.422166 kubelet[2733]: E0715 23:55:24.422095 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 15 23:55:24.423328 kubelet[2733]: W0715 23:55:24.423222 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 15 23:55:24.423567 kubelet[2733]: E0715 23:55:24.423273 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 15 23:55:24.434628 systemd[1]: Started cri-containerd-84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08.scope - libcontainer container 84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08. Jul 15 23:55:24.510073 containerd[1534]: time="2025-07-15T23:55:24.510018314Z" level=info msg="StartContainer for \"84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08\" returns successfully" Jul 15 23:55:24.528025 systemd[1]: cri-containerd-84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08.scope: Deactivated successfully. 
Jul 15 23:55:24.533515 containerd[1534]: time="2025-07-15T23:55:24.533447935Z" level=info msg="received exit event container_id:\"84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08\" id:\"84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08\" pid:3432 exited_at:{seconds:1752623724 nanos:532786539}" Jul 15 23:55:24.533680 containerd[1534]: time="2025-07-15T23:55:24.533475054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08\" id:\"84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08\" pid:3432 exited_at:{seconds:1752623724 nanos:532786539}" Jul 15 23:55:24.572713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84ee6ba9281b7983f79860ab6ccf3bee71c9ad808e847e56874ce447d21d0a08-rootfs.mount: Deactivated successfully. Jul 15 23:55:26.091085 kubelet[2733]: E0715 23:55:26.091021 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4nkh" podUID="780f5aa2-cfc2-4e2b-9f15-499cb6a49a94" Jul 15 23:55:26.275487 containerd[1534]: time="2025-07-15T23:55:26.275434094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 15 23:55:28.091976 kubelet[2733]: E0715 23:55:28.091903 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4nkh" podUID="780f5aa2-cfc2-4e2b-9f15-499cb6a49a94" Jul 15 23:55:29.780525 containerd[1534]: time="2025-07-15T23:55:29.780440939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 
23:55:29.782137 containerd[1534]: time="2025-07-15T23:55:29.782064345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 15 23:55:29.784565 containerd[1534]: time="2025-07-15T23:55:29.784475695Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:29.789090 containerd[1534]: time="2025-07-15T23:55:29.788998492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:29.790710 containerd[1534]: time="2025-07-15T23:55:29.789611370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.514120055s" Jul 15 23:55:29.790710 containerd[1534]: time="2025-07-15T23:55:29.789660260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 15 23:55:29.795825 containerd[1534]: time="2025-07-15T23:55:29.795769913Z" level=info msg="CreateContainer within sandbox \"82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 15 23:55:29.811347 containerd[1534]: time="2025-07-15T23:55:29.808008062Z" level=info msg="Container d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:29.822582 containerd[1534]: time="2025-07-15T23:55:29.822510894Z" level=info msg="CreateContainer within sandbox 
\"82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7\"" Jul 15 23:55:29.823427 containerd[1534]: time="2025-07-15T23:55:29.823386672Z" level=info msg="StartContainer for \"d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7\"" Jul 15 23:55:29.826108 containerd[1534]: time="2025-07-15T23:55:29.826028811Z" level=info msg="connecting to shim d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7" address="unix:///run/containerd/s/1f461db81c09d3fa915a94b79aeb7d4e01a1552358d4fba4d866da6494d6e2c4" protocol=ttrpc version=3 Jul 15 23:55:29.865643 systemd[1]: Started cri-containerd-d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7.scope - libcontainer container d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7. Jul 15 23:55:29.934787 containerd[1534]: time="2025-07-15T23:55:29.934733501Z" level=info msg="StartContainer for \"d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7\" returns successfully" Jul 15 23:55:30.091741 kubelet[2733]: E0715 23:55:30.091633 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p4nkh" podUID="780f5aa2-cfc2-4e2b-9f15-499cb6a49a94" Jul 15 23:55:31.004189 containerd[1534]: time="2025-07-15T23:55:31.004122083Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:55:31.007655 systemd[1]: cri-containerd-d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7.scope: Deactivated successfully. 
Jul 15 23:55:31.008306 systemd[1]: cri-containerd-d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7.scope: Consumed 712ms CPU time, 191.4M memory peak, 171.2M written to disk. Jul 15 23:55:31.010893 containerd[1534]: time="2025-07-15T23:55:31.010631607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7\" id:\"d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7\" pid:3493 exited_at:{seconds:1752623731 nanos:10148025}" Jul 15 23:55:31.010893 containerd[1534]: time="2025-07-15T23:55:31.010738289Z" level=info msg="received exit event container_id:\"d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7\" id:\"d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7\" pid:3493 exited_at:{seconds:1752623731 nanos:10148025}" Jul 15 23:55:31.051834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2a790e8d6943da527c5ae06207d1da79446f57c1963c3692e61587c0e6260e7-rootfs.mount: Deactivated successfully. Jul 15 23:55:31.063794 kubelet[2733]: I0715 23:55:31.063692 2733 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 23:55:31.139726 systemd[1]: Created slice kubepods-burstable-pod94013cd9_0fca_45b1_8726_fe5dcdfdb4c8.slice - libcontainer container kubepods-burstable-pod94013cd9_0fca_45b1_8726_fe5dcdfdb4c8.slice. 
Jul 15 23:55:31.144259 kubelet[2733]: I0715 23:55:31.144165 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94013cd9-0fca-45b1-8726-fe5dcdfdb4c8-config-volume\") pod \"coredns-668d6bf9bc-tv2j7\" (UID: \"94013cd9-0fca-45b1-8726-fe5dcdfdb4c8\") " pod="kube-system/coredns-668d6bf9bc-tv2j7" Jul 15 23:55:31.144259 kubelet[2733]: I0715 23:55:31.144221 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shq2j\" (UniqueName: \"kubernetes.io/projected/94013cd9-0fca-45b1-8726-fe5dcdfdb4c8-kube-api-access-shq2j\") pod \"coredns-668d6bf9bc-tv2j7\" (UID: \"94013cd9-0fca-45b1-8726-fe5dcdfdb4c8\") " pod="kube-system/coredns-668d6bf9bc-tv2j7" Jul 15 23:55:31.163608 kubelet[2733]: W0715 23:55:31.163487 2733 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' and this object Jul 15 23:55:31.163608 kubelet[2733]: E0715 23:55:31.163558 2733 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' and this object" logger="UnhandledError" Jul 15 23:55:31.165813 kubelet[2733]: W0715 23:55:31.163620 2733 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps 
"goldmane-ca-bundle" is forbidden: User "system:node:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' and this object Jul 15 23:55:31.165813 kubelet[2733]: E0715 23:55:31.163638 2733 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' and this object" logger="UnhandledError" Jul 15 23:55:31.170678 systemd[1]: Created slice kubepods-besteffort-podbce2aed8_84c2_485d_9f19_a7c86b1ca9ca.slice - libcontainer container kubepods-besteffort-podbce2aed8_84c2_485d_9f19_a7c86b1ca9ca.slice. 
Jul 15 23:55:31.186577 kubelet[2733]: W0715 23:55:31.186529 2733 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' and this object Jul 15 23:55:31.187512 kubelet[2733]: E0715 23:55:31.186601 2733 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' and this object" logger="UnhandledError" Jul 15 23:55:31.189693 systemd[1]: Created slice kubepods-besteffort-pod2a95d55b_d55f_4165_b092_4987c1ed994a.slice - libcontainer container kubepods-besteffort-pod2a95d55b_d55f_4165_b092_4987c1ed994a.slice. Jul 15 23:55:31.217327 systemd[1]: Created slice kubepods-besteffort-podfaca8f11_9007_4d01_9d45_1928fcf59378.slice - libcontainer container kubepods-besteffort-podfaca8f11_9007_4d01_9d45_1928fcf59378.slice. Jul 15 23:55:31.242258 systemd[1]: Created slice kubepods-burstable-podf4502152_60dc_474c_8c1a_6ab9eb29ce8c.slice - libcontainer container kubepods-burstable-podf4502152_60dc_474c_8c1a_6ab9eb29ce8c.slice. 
Jul 15 23:55:31.247419 kubelet[2733]: I0715 23:55:31.247371 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8lbc\" (UniqueName: \"kubernetes.io/projected/faca8f11-9007-4d01-9d45-1928fcf59378-kube-api-access-s8lbc\") pod \"calico-apiserver-5ddbdf8fc-5k246\" (UID: \"faca8f11-9007-4d01-9d45-1928fcf59378\") " pod="calico-apiserver/calico-apiserver-5ddbdf8fc-5k246" Jul 15 23:55:31.247585 kubelet[2733]: I0715 23:55:31.247457 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjv5h\" (UniqueName: \"kubernetes.io/projected/2a95d55b-d55f-4165-b092-4987c1ed994a-kube-api-access-xjv5h\") pod \"whisker-6c4f9df49c-rfcbl\" (UID: \"2a95d55b-d55f-4165-b092-4987c1ed994a\") " pod="calico-system/whisker-6c4f9df49c-rfcbl" Jul 15 23:55:31.247585 kubelet[2733]: I0715 23:55:31.247488 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c4d384c-6618-4b09-b899-3709c4eb3554-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-vbx59\" (UID: \"7c4d384c-6618-4b09-b899-3709c4eb3554\") " pod="calico-system/goldmane-768f4c5c69-vbx59" Jul 15 23:55:31.247585 kubelet[2733]: I0715 23:55:31.247513 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7c4d384c-6618-4b09-b899-3709c4eb3554-goldmane-key-pair\") pod \"goldmane-768f4c5c69-vbx59\" (UID: \"7c4d384c-6618-4b09-b899-3709c4eb3554\") " pod="calico-system/goldmane-768f4c5c69-vbx59" Jul 15 23:55:31.247585 kubelet[2733]: I0715 23:55:31.247544 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a95d55b-d55f-4165-b092-4987c1ed994a-whisker-ca-bundle\") pod \"whisker-6c4f9df49c-rfcbl\" (UID: 
\"2a95d55b-d55f-4165-b092-4987c1ed994a\") " pod="calico-system/whisker-6c4f9df49c-rfcbl" Jul 15 23:55:31.247585 kubelet[2733]: I0715 23:55:31.247569 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszhw\" (UniqueName: \"kubernetes.io/projected/bce2aed8-84c2-485d-9f19-a7c86b1ca9ca-kube-api-access-kszhw\") pod \"calico-kube-controllers-699f8bbf89-4pvf7\" (UID: \"bce2aed8-84c2-485d-9f19-a7c86b1ca9ca\") " pod="calico-system/calico-kube-controllers-699f8bbf89-4pvf7" Jul 15 23:55:31.247875 kubelet[2733]: I0715 23:55:31.247597 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4502152-60dc-474c-8c1a-6ab9eb29ce8c-config-volume\") pod \"coredns-668d6bf9bc-bzlwj\" (UID: \"f4502152-60dc-474c-8c1a-6ab9eb29ce8c\") " pod="kube-system/coredns-668d6bf9bc-bzlwj" Jul 15 23:55:31.247875 kubelet[2733]: I0715 23:55:31.247633 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a95d55b-d55f-4165-b092-4987c1ed994a-whisker-backend-key-pair\") pod \"whisker-6c4f9df49c-rfcbl\" (UID: \"2a95d55b-d55f-4165-b092-4987c1ed994a\") " pod="calico-system/whisker-6c4f9df49c-rfcbl" Jul 15 23:55:31.247875 kubelet[2733]: I0715 23:55:31.247661 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bce2aed8-84c2-485d-9f19-a7c86b1ca9ca-tigera-ca-bundle\") pod \"calico-kube-controllers-699f8bbf89-4pvf7\" (UID: \"bce2aed8-84c2-485d-9f19-a7c86b1ca9ca\") " pod="calico-system/calico-kube-controllers-699f8bbf89-4pvf7" Jul 15 23:55:31.247875 kubelet[2733]: I0715 23:55:31.247709 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfcq\" (UniqueName: 
\"kubernetes.io/projected/f4502152-60dc-474c-8c1a-6ab9eb29ce8c-kube-api-access-sqfcq\") pod \"coredns-668d6bf9bc-bzlwj\" (UID: \"f4502152-60dc-474c-8c1a-6ab9eb29ce8c\") " pod="kube-system/coredns-668d6bf9bc-bzlwj" Jul 15 23:55:31.247875 kubelet[2733]: I0715 23:55:31.247739 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8khv\" (UniqueName: \"kubernetes.io/projected/d172420e-cfb9-4d22-b2f9-122d65ad0fd7-kube-api-access-q8khv\") pod \"calico-apiserver-5ddbdf8fc-dj5r2\" (UID: \"d172420e-cfb9-4d22-b2f9-122d65ad0fd7\") " pod="calico-apiserver/calico-apiserver-5ddbdf8fc-dj5r2" Jul 15 23:55:31.248141 kubelet[2733]: I0715 23:55:31.247828 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/faca8f11-9007-4d01-9d45-1928fcf59378-calico-apiserver-certs\") pod \"calico-apiserver-5ddbdf8fc-5k246\" (UID: \"faca8f11-9007-4d01-9d45-1928fcf59378\") " pod="calico-apiserver/calico-apiserver-5ddbdf8fc-5k246" Jul 15 23:55:31.248141 kubelet[2733]: I0715 23:55:31.247864 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcrg\" (UniqueName: \"kubernetes.io/projected/7c4d384c-6618-4b09-b899-3709c4eb3554-kube-api-access-ffcrg\") pod \"goldmane-768f4c5c69-vbx59\" (UID: \"7c4d384c-6618-4b09-b899-3709c4eb3554\") " pod="calico-system/goldmane-768f4c5c69-vbx59" Jul 15 23:55:31.250361 kubelet[2733]: I0715 23:55:31.249153 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d172420e-cfb9-4d22-b2f9-122d65ad0fd7-calico-apiserver-certs\") pod \"calico-apiserver-5ddbdf8fc-dj5r2\" (UID: \"d172420e-cfb9-4d22-b2f9-122d65ad0fd7\") " pod="calico-apiserver/calico-apiserver-5ddbdf8fc-dj5r2" Jul 15 23:55:31.250361 kubelet[2733]: I0715 23:55:31.249202 2733 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4d384c-6618-4b09-b899-3709c4eb3554-config\") pod \"goldmane-768f4c5c69-vbx59\" (UID: \"7c4d384c-6618-4b09-b899-3709c4eb3554\") " pod="calico-system/goldmane-768f4c5c69-vbx59" Jul 15 23:55:31.262384 systemd[1]: Created slice kubepods-besteffort-pod7c4d384c_6618_4b09_b899_3709c4eb3554.slice - libcontainer container kubepods-besteffort-pod7c4d384c_6618_4b09_b899_3709c4eb3554.slice. Jul 15 23:55:31.283725 systemd[1]: Created slice kubepods-besteffort-podd172420e_cfb9_4d22_b2f9_122d65ad0fd7.slice - libcontainer container kubepods-besteffort-podd172420e_cfb9_4d22_b2f9_122d65ad0fd7.slice. Jul 15 23:55:31.464134 containerd[1534]: time="2025-07-15T23:55:31.463945723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tv2j7,Uid:94013cd9-0fca-45b1-8726-fe5dcdfdb4c8,Namespace:kube-system,Attempt:0,}" Jul 15 23:55:31.482834 containerd[1534]: time="2025-07-15T23:55:31.482688149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699f8bbf89-4pvf7,Uid:bce2aed8-84c2-485d-9f19-a7c86b1ca9ca,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:31.530296 containerd[1534]: time="2025-07-15T23:55:31.530131920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddbdf8fc-5k246,Uid:faca8f11-9007-4d01-9d45-1928fcf59378,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:55:31.534472 containerd[1534]: time="2025-07-15T23:55:31.534413535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4f9df49c-rfcbl,Uid:2a95d55b-d55f-4165-b092-4987c1ed994a,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:31.553763 containerd[1534]: time="2025-07-15T23:55:31.553695422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bzlwj,Uid:f4502152-60dc-474c-8c1a-6ab9eb29ce8c,Namespace:kube-system,Attempt:0,}" Jul 15 23:55:31.619364 
containerd[1534]: time="2025-07-15T23:55:31.618848847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddbdf8fc-dj5r2,Uid:d172420e-cfb9-4d22-b2f9-122d65ad0fd7,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:55:31.893239 containerd[1534]: time="2025-07-15T23:55:31.893035029Z" level=error msg="Failed to destroy network for sandbox \"422b23ed8117e53dfc09977f6646bb73de17272609d87b5244099a4180a161d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.896982 containerd[1534]: time="2025-07-15T23:55:31.896533782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddbdf8fc-5k246,Uid:faca8f11-9007-4d01-9d45-1928fcf59378,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"422b23ed8117e53dfc09977f6646bb73de17272609d87b5244099a4180a161d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.899368 kubelet[2733]: E0715 23:55:31.898969 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"422b23ed8117e53dfc09977f6646bb73de17272609d87b5244099a4180a161d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.899368 kubelet[2733]: E0715 23:55:31.899110 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"422b23ed8117e53dfc09977f6646bb73de17272609d87b5244099a4180a161d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddbdf8fc-5k246" Jul 15 23:55:31.899368 kubelet[2733]: E0715 23:55:31.899145 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"422b23ed8117e53dfc09977f6646bb73de17272609d87b5244099a4180a161d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddbdf8fc-5k246" Jul 15 23:55:31.899691 kubelet[2733]: E0715 23:55:31.899207 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddbdf8fc-5k246_calico-apiserver(faca8f11-9007-4d01-9d45-1928fcf59378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddbdf8fc-5k246_calico-apiserver(faca8f11-9007-4d01-9d45-1928fcf59378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"422b23ed8117e53dfc09977f6646bb73de17272609d87b5244099a4180a161d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddbdf8fc-5k246" podUID="faca8f11-9007-4d01-9d45-1928fcf59378" Jul 15 23:55:31.918654 containerd[1534]: time="2025-07-15T23:55:31.918431953Z" level=error msg="Failed to destroy network for sandbox \"4531995cab9c4fa791cd2f7ed2352d92dd7889bc423df8f97136367ef6ff75c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.925605 containerd[1534]: time="2025-07-15T23:55:31.925422239Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-699f8bbf89-4pvf7,Uid:bce2aed8-84c2-485d-9f19-a7c86b1ca9ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4531995cab9c4fa791cd2f7ed2352d92dd7889bc423df8f97136367ef6ff75c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.925959 kubelet[2733]: E0715 23:55:31.925904 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4531995cab9c4fa791cd2f7ed2352d92dd7889bc423df8f97136367ef6ff75c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.926086 kubelet[2733]: E0715 23:55:31.925991 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4531995cab9c4fa791cd2f7ed2352d92dd7889bc423df8f97136367ef6ff75c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699f8bbf89-4pvf7" Jul 15 23:55:31.926086 kubelet[2733]: E0715 23:55:31.926028 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4531995cab9c4fa791cd2f7ed2352d92dd7889bc423df8f97136367ef6ff75c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699f8bbf89-4pvf7" Jul 15 23:55:31.926203 kubelet[2733]: E0715 
23:55:31.926108 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-699f8bbf89-4pvf7_calico-system(bce2aed8-84c2-485d-9f19-a7c86b1ca9ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-699f8bbf89-4pvf7_calico-system(bce2aed8-84c2-485d-9f19-a7c86b1ca9ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4531995cab9c4fa791cd2f7ed2352d92dd7889bc423df8f97136367ef6ff75c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-699f8bbf89-4pvf7" podUID="bce2aed8-84c2-485d-9f19-a7c86b1ca9ca" Jul 15 23:55:31.949343 containerd[1534]: time="2025-07-15T23:55:31.948269179Z" level=error msg="Failed to destroy network for sandbox \"73e694b053eac7380b9df181ebd35b2b449e738ed3bedc9fb6c07ef0aeca3f55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.952567 containerd[1534]: time="2025-07-15T23:55:31.952494465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tv2j7,Uid:94013cd9-0fca-45b1-8726-fe5dcdfdb4c8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e694b053eac7380b9df181ebd35b2b449e738ed3bedc9fb6c07ef0aeca3f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.953444 kubelet[2733]: E0715 23:55:31.953385 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"73e694b053eac7380b9df181ebd35b2b449e738ed3bedc9fb6c07ef0aeca3f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.953775 kubelet[2733]: E0715 23:55:31.953735 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e694b053eac7380b9df181ebd35b2b449e738ed3bedc9fb6c07ef0aeca3f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tv2j7" Jul 15 23:55:31.955344 kubelet[2733]: E0715 23:55:31.954359 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73e694b053eac7380b9df181ebd35b2b449e738ed3bedc9fb6c07ef0aeca3f55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tv2j7" Jul 15 23:55:31.955344 kubelet[2733]: E0715 23:55:31.954452 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tv2j7_kube-system(94013cd9-0fca-45b1-8726-fe5dcdfdb4c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tv2j7_kube-system(94013cd9-0fca-45b1-8726-fe5dcdfdb4c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73e694b053eac7380b9df181ebd35b2b449e738ed3bedc9fb6c07ef0aeca3f55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tv2j7" 
podUID="94013cd9-0fca-45b1-8726-fe5dcdfdb4c8" Jul 15 23:55:31.984429 containerd[1534]: time="2025-07-15T23:55:31.984368548Z" level=error msg="Failed to destroy network for sandbox \"5f1a7e7f026980f45bc979dc5f1a421080def8f0d331898d0400729c9ccefdc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.986242 containerd[1534]: time="2025-07-15T23:55:31.986185851Z" level=error msg="Failed to destroy network for sandbox \"3db33d0afb9294a91dbae3dd84f9b504931e1bf2340a67f1b6d8e27bf663c4e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.987153 containerd[1534]: time="2025-07-15T23:55:31.987102251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4f9df49c-rfcbl,Uid:2a95d55b-d55f-4165-b092-4987c1ed994a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1a7e7f026980f45bc979dc5f1a421080def8f0d331898d0400729c9ccefdc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.987951 kubelet[2733]: E0715 23:55:31.987899 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1a7e7f026980f45bc979dc5f1a421080def8f0d331898d0400729c9ccefdc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.988066 kubelet[2733]: E0715 23:55:31.987982 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"5f1a7e7f026980f45bc979dc5f1a421080def8f0d331898d0400729c9ccefdc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c4f9df49c-rfcbl" Jul 15 23:55:31.988066 kubelet[2733]: E0715 23:55:31.988035 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1a7e7f026980f45bc979dc5f1a421080def8f0d331898d0400729c9ccefdc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c4f9df49c-rfcbl" Jul 15 23:55:31.988210 kubelet[2733]: E0715 23:55:31.988109 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c4f9df49c-rfcbl_calico-system(2a95d55b-d55f-4165-b092-4987c1ed994a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c4f9df49c-rfcbl_calico-system(2a95d55b-d55f-4165-b092-4987c1ed994a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f1a7e7f026980f45bc979dc5f1a421080def8f0d331898d0400729c9ccefdc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c4f9df49c-rfcbl" podUID="2a95d55b-d55f-4165-b092-4987c1ed994a" Jul 15 23:55:31.989523 containerd[1534]: time="2025-07-15T23:55:31.989246753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddbdf8fc-dj5r2,Uid:d172420e-cfb9-4d22-b2f9-122d65ad0fd7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3db33d0afb9294a91dbae3dd84f9b504931e1bf2340a67f1b6d8e27bf663c4e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.990422 kubelet[2733]: E0715 23:55:31.990350 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db33d0afb9294a91dbae3dd84f9b504931e1bf2340a67f1b6d8e27bf663c4e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:31.990605 kubelet[2733]: E0715 23:55:31.990430 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db33d0afb9294a91dbae3dd84f9b504931e1bf2340a67f1b6d8e27bf663c4e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddbdf8fc-dj5r2" Jul 15 23:55:31.990605 kubelet[2733]: E0715 23:55:31.990466 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db33d0afb9294a91dbae3dd84f9b504931e1bf2340a67f1b6d8e27bf663c4e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddbdf8fc-dj5r2" Jul 15 23:55:31.990605 kubelet[2733]: E0715 23:55:31.990538 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddbdf8fc-dj5r2_calico-apiserver(d172420e-cfb9-4d22-b2f9-122d65ad0fd7)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-5ddbdf8fc-dj5r2_calico-apiserver(d172420e-cfb9-4d22-b2f9-122d65ad0fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3db33d0afb9294a91dbae3dd84f9b504931e1bf2340a67f1b6d8e27bf663c4e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddbdf8fc-dj5r2" podUID="d172420e-cfb9-4d22-b2f9-122d65ad0fd7" Jul 15 23:55:31.998183 containerd[1534]: time="2025-07-15T23:55:31.998102093Z" level=error msg="Failed to destroy network for sandbox \"9a4470d405b6b8d1370a0a2e10f835d9e25deae6d0c6fce49b19d29f04ec05ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:32.000839 containerd[1534]: time="2025-07-15T23:55:32.000743155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bzlwj,Uid:f4502152-60dc-474c-8c1a-6ab9eb29ce8c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4470d405b6b8d1370a0a2e10f835d9e25deae6d0c6fce49b19d29f04ec05ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:32.001447 kubelet[2733]: E0715 23:55:32.001356 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4470d405b6b8d1370a0a2e10f835d9e25deae6d0c6fce49b19d29f04ec05ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:32.001447 kubelet[2733]: E0715 
23:55:32.001434 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4470d405b6b8d1370a0a2e10f835d9e25deae6d0c6fce49b19d29f04ec05ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bzlwj" Jul 15 23:55:32.001641 kubelet[2733]: E0715 23:55:32.001468 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4470d405b6b8d1370a0a2e10f835d9e25deae6d0c6fce49b19d29f04ec05ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bzlwj" Jul 15 23:55:32.001641 kubelet[2733]: E0715 23:55:32.001535 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bzlwj_kube-system(f4502152-60dc-474c-8c1a-6ab9eb29ce8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bzlwj_kube-system(f4502152-60dc-474c-8c1a-6ab9eb29ce8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a4470d405b6b8d1370a0a2e10f835d9e25deae6d0c6fce49b19d29f04ec05ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bzlwj" podUID="f4502152-60dc-474c-8c1a-6ab9eb29ce8c" Jul 15 23:55:32.103549 systemd[1]: Created slice kubepods-besteffort-pod780f5aa2_cfc2_4e2b_9f15_499cb6a49a94.slice - libcontainer container kubepods-besteffort-pod780f5aa2_cfc2_4e2b_9f15_499cb6a49a94.slice. 
Jul 15 23:55:32.109942 containerd[1534]: time="2025-07-15T23:55:32.109877815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4nkh,Uid:780f5aa2-cfc2-4e2b-9f15-499cb6a49a94,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:32.247362 containerd[1534]: time="2025-07-15T23:55:32.247027339Z" level=error msg="Failed to destroy network for sandbox \"4196e7073920d56b1b1980187d75e3497a94c20627769962bc3fbc549fc611a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:32.252713 containerd[1534]: time="2025-07-15T23:55:32.252569275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4nkh,Uid:780f5aa2-cfc2-4e2b-9f15-499cb6a49a94,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4196e7073920d56b1b1980187d75e3497a94c20627769962bc3fbc549fc611a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:32.254472 systemd[1]: run-netns-cni\x2d4128a673\x2da23e\x2deca4\x2da990\x2d61c83562ef25.mount: Deactivated successfully. 
Jul 15 23:55:32.258339 kubelet[2733]: E0715 23:55:32.258121 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4196e7073920d56b1b1980187d75e3497a94c20627769962bc3fbc549fc611a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:32.258339 kubelet[2733]: E0715 23:55:32.258204 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4196e7073920d56b1b1980187d75e3497a94c20627769962bc3fbc549fc611a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p4nkh" Jul 15 23:55:32.258339 kubelet[2733]: E0715 23:55:32.258237 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4196e7073920d56b1b1980187d75e3497a94c20627769962bc3fbc549fc611a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p4nkh" Jul 15 23:55:32.259468 kubelet[2733]: E0715 23:55:32.258296 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p4nkh_calico-system(780f5aa2-cfc2-4e2b-9f15-499cb6a49a94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p4nkh_calico-system(780f5aa2-cfc2-4e2b-9f15-499cb6a49a94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4196e7073920d56b1b1980187d75e3497a94c20627769962bc3fbc549fc611a8\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p4nkh" podUID="780f5aa2-cfc2-4e2b-9f15-499cb6a49a94" Jul 15 23:55:32.313874 containerd[1534]: time="2025-07-15T23:55:32.313790433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 15 23:55:32.382267 kubelet[2733]: E0715 23:55:32.382193 2733 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 15 23:55:32.382511 kubelet[2733]: E0715 23:55:32.382399 2733 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c4d384c-6618-4b09-b899-3709c4eb3554-config podName:7c4d384c-6618-4b09-b899-3709c4eb3554 nodeName:}" failed. No retries permitted until 2025-07-15 23:55:32.882365582 +0000 UTC m=+36.002949833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7c4d384c-6618-4b09-b899-3709c4eb3554-config") pod "goldmane-768f4c5c69-vbx59" (UID: "7c4d384c-6618-4b09-b899-3709c4eb3554") : failed to sync configmap cache: timed out waiting for the condition Jul 15 23:55:33.088341 containerd[1534]: time="2025-07-15T23:55:33.088098547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vbx59,Uid:7c4d384c-6618-4b09-b899-3709c4eb3554,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:33.228362 containerd[1534]: time="2025-07-15T23:55:33.227162174Z" level=error msg="Failed to destroy network for sandbox \"b28ba7126880a779fb815743a22a4e7042474ec047b2e5786b005f6307869977\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:33.232041 containerd[1534]: time="2025-07-15T23:55:33.230635055Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-768f4c5c69-vbx59,Uid:7c4d384c-6618-4b09-b899-3709c4eb3554,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28ba7126880a779fb815743a22a4e7042474ec047b2e5786b005f6307869977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:33.234361 kubelet[2733]: E0715 23:55:33.232526 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28ba7126880a779fb815743a22a4e7042474ec047b2e5786b005f6307869977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 15 23:55:33.234361 kubelet[2733]: E0715 23:55:33.232652 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28ba7126880a779fb815743a22a4e7042474ec047b2e5786b005f6307869977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vbx59" Jul 15 23:55:33.234361 kubelet[2733]: E0715 23:55:33.232686 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28ba7126880a779fb815743a22a4e7042474ec047b2e5786b005f6307869977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-vbx59" Jul 15 23:55:33.233027 systemd[1]: 
run-netns-cni\x2d6f8fd4a4\x2da337\x2d8958\x2ddf22\x2d01cddab9da1c.mount: Deactivated successfully. Jul 15 23:55:33.239166 kubelet[2733]: E0715 23:55:33.232769 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-vbx59_calico-system(7c4d384c-6618-4b09-b899-3709c4eb3554)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-vbx59_calico-system(7c4d384c-6618-4b09-b899-3709c4eb3554)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b28ba7126880a779fb815743a22a4e7042474ec047b2e5786b005f6307869977\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-vbx59" podUID="7c4d384c-6618-4b09-b899-3709c4eb3554" Jul 15 23:55:39.582456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780818029.mount: Deactivated successfully. 
Jul 15 23:55:39.621194 containerd[1534]: time="2025-07-15T23:55:39.621110204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:39.623402 containerd[1534]: time="2025-07-15T23:55:39.623333194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 15 23:55:39.625066 containerd[1534]: time="2025-07-15T23:55:39.624931518Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:39.629916 containerd[1534]: time="2025-07-15T23:55:39.628885900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:39.629916 containerd[1534]: time="2025-07-15T23:55:39.629759258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 7.315907009s" Jul 15 23:55:39.629916 containerd[1534]: time="2025-07-15T23:55:39.629803018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 15 23:55:39.653087 containerd[1534]: time="2025-07-15T23:55:39.653026818Z" level=info msg="CreateContainer within sandbox \"82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 15 23:55:39.671164 containerd[1534]: time="2025-07-15T23:55:39.669650969Z" level=info msg="Container 
9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:39.685105 containerd[1534]: time="2025-07-15T23:55:39.685033025Z" level=info msg="CreateContainer within sandbox \"82e5ab2bec0a8fc7235e12890de9c351bd580bff18c594f8d4653c4327fa0520\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36\"" Jul 15 23:55:39.686370 containerd[1534]: time="2025-07-15T23:55:39.685830742Z" level=info msg="StartContainer for \"9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36\"" Jul 15 23:55:39.688480 containerd[1534]: time="2025-07-15T23:55:39.688414114Z" level=info msg="connecting to shim 9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36" address="unix:///run/containerd/s/1f461db81c09d3fa915a94b79aeb7d4e01a1552358d4fba4d866da6494d6e2c4" protocol=ttrpc version=3 Jul 15 23:55:39.719767 systemd[1]: Started cri-containerd-9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36.scope - libcontainer container 9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36. Jul 15 23:55:39.791104 containerd[1534]: time="2025-07-15T23:55:39.791015230Z" level=info msg="StartContainer for \"9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36\" returns successfully" Jul 15 23:55:39.933336 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 15 23:55:39.933535 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 15 23:55:40.125203 kubelet[2733]: I0715 23:55:40.125144 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjv5h\" (UniqueName: \"kubernetes.io/projected/2a95d55b-d55f-4165-b092-4987c1ed994a-kube-api-access-xjv5h\") pod \"2a95d55b-d55f-4165-b092-4987c1ed994a\" (UID: \"2a95d55b-d55f-4165-b092-4987c1ed994a\") " Jul 15 23:55:40.126544 kubelet[2733]: I0715 23:55:40.125937 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a95d55b-d55f-4165-b092-4987c1ed994a-whisker-backend-key-pair\") pod \"2a95d55b-d55f-4165-b092-4987c1ed994a\" (UID: \"2a95d55b-d55f-4165-b092-4987c1ed994a\") " Jul 15 23:55:40.128333 kubelet[2733]: I0715 23:55:40.128205 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a95d55b-d55f-4165-b092-4987c1ed994a-whisker-ca-bundle\") pod \"2a95d55b-d55f-4165-b092-4987c1ed994a\" (UID: \"2a95d55b-d55f-4165-b092-4987c1ed994a\") " Jul 15 23:55:40.129334 kubelet[2733]: I0715 23:55:40.128962 2733 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a95d55b-d55f-4165-b092-4987c1ed994a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2a95d55b-d55f-4165-b092-4987c1ed994a" (UID: "2a95d55b-d55f-4165-b092-4987c1ed994a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:55:40.135009 kubelet[2733]: I0715 23:55:40.134925 2733 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a95d55b-d55f-4165-b092-4987c1ed994a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2a95d55b-d55f-4165-b092-4987c1ed994a" (UID: "2a95d55b-d55f-4165-b092-4987c1ed994a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:55:40.135672 kubelet[2733]: I0715 23:55:40.135634 2733 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a95d55b-d55f-4165-b092-4987c1ed994a-kube-api-access-xjv5h" (OuterVolumeSpecName: "kube-api-access-xjv5h") pod "2a95d55b-d55f-4165-b092-4987c1ed994a" (UID: "2a95d55b-d55f-4165-b092-4987c1ed994a"). InnerVolumeSpecName "kube-api-access-xjv5h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:55:40.229559 kubelet[2733]: I0715 23:55:40.229509 2733 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xjv5h\" (UniqueName: \"kubernetes.io/projected/2a95d55b-d55f-4165-b092-4987c1ed994a-kube-api-access-xjv5h\") on node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" DevicePath \"\"" Jul 15 23:55:40.229559 kubelet[2733]: I0715 23:55:40.229555 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a95d55b-d55f-4165-b092-4987c1ed994a-whisker-backend-key-pair\") on node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" DevicePath \"\"" Jul 15 23:55:40.229559 kubelet[2733]: I0715 23:55:40.229573 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a95d55b-d55f-4165-b092-4987c1ed994a-whisker-ca-bundle\") on node \"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e\" DevicePath \"\"" Jul 15 23:55:40.363154 systemd[1]: Removed slice kubepods-besteffort-pod2a95d55b_d55f_4165_b092_4987c1ed994a.slice - libcontainer container kubepods-besteffort-pod2a95d55b_d55f_4165_b092_4987c1ed994a.slice. 
Jul 15 23:55:40.396928 kubelet[2733]: I0715 23:55:40.394299 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-srjgh" podStartSLOduration=1.8644740789999998 podStartE2EDuration="20.393858511s" podCreationTimestamp="2025-07-15 23:55:20 +0000 UTC" firstStartedPulling="2025-07-15 23:55:21.102088604 +0000 UTC m=+24.222672832" lastFinishedPulling="2025-07-15 23:55:39.631473029 +0000 UTC m=+42.752057264" observedRunningTime="2025-07-15 23:55:40.388030052 +0000 UTC m=+43.508614316" watchObservedRunningTime="2025-07-15 23:55:40.393858511 +0000 UTC m=+43.514442764" Jul 15 23:55:40.500914 systemd[1]: Created slice kubepods-besteffort-pod97619884_455c_4287_b0d8_b033da266798.slice - libcontainer container kubepods-besteffort-pod97619884_455c_4287_b0d8_b033da266798.slice. Jul 15 23:55:40.530862 kubelet[2733]: I0715 23:55:40.530802 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97619884-455c-4287-b0d8-b033da266798-whisker-backend-key-pair\") pod \"whisker-6f9bd8b799-b7ntn\" (UID: \"97619884-455c-4287-b0d8-b033da266798\") " pod="calico-system/whisker-6f9bd8b799-b7ntn" Jul 15 23:55:40.531182 kubelet[2733]: I0715 23:55:40.530877 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97619884-455c-4287-b0d8-b033da266798-whisker-ca-bundle\") pod \"whisker-6f9bd8b799-b7ntn\" (UID: \"97619884-455c-4287-b0d8-b033da266798\") " pod="calico-system/whisker-6f9bd8b799-b7ntn" Jul 15 23:55:40.531182 kubelet[2733]: I0715 23:55:40.530910 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7p7\" (UniqueName: \"kubernetes.io/projected/97619884-455c-4287-b0d8-b033da266798-kube-api-access-5q7p7\") pod \"whisker-6f9bd8b799-b7ntn\" (UID: 
\"97619884-455c-4287-b0d8-b033da266798\") " pod="calico-system/whisker-6f9bd8b799-b7ntn" Jul 15 23:55:40.583168 systemd[1]: var-lib-kubelet-pods-2a95d55b\x2dd55f\x2d4165\x2db092\x2d4987c1ed994a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxjv5h.mount: Deactivated successfully. Jul 15 23:55:40.583364 systemd[1]: var-lib-kubelet-pods-2a95d55b\x2dd55f\x2d4165\x2db092\x2d4987c1ed994a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 15 23:55:40.649458 containerd[1534]: time="2025-07-15T23:55:40.648382165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36\" id:\"b59235f9cd8d0c61545ee235acea791ab3395e66e34e3bb4875c66c69ded39b9\" pid:3818 exit_status:1 exited_at:{seconds:1752623740 nanos:647537146}" Jul 15 23:55:40.812065 containerd[1534]: time="2025-07-15T23:55:40.811658636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9bd8b799-b7ntn,Uid:97619884-455c-4287-b0d8-b033da266798,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:40.970769 systemd-networkd[1451]: calie487bf02349: Link UP Jul 15 23:55:40.971081 systemd-networkd[1451]: calie487bf02349: Gained carrier Jul 15 23:55:40.997030 containerd[1534]: 2025-07-15 23:55:40.852 [INFO][3842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 15 23:55:40.997030 containerd[1534]: 2025-07-15 23:55:40.867 [INFO][3842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0 whisker-6f9bd8b799- calico-system 97619884-455c-4287-b0d8-b033da266798 872 0 2025-07-15 23:55:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f9bd8b799 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 
ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e whisker-6f9bd8b799-b7ntn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie487bf02349 [] [] }} ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Namespace="calico-system" Pod="whisker-6f9bd8b799-b7ntn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-" Jul 15 23:55:40.997030 containerd[1534]: 2025-07-15 23:55:40.868 [INFO][3842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Namespace="calico-system" Pod="whisker-6f9bd8b799-b7ntn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" Jul 15 23:55:40.997030 containerd[1534]: 2025-07-15 23:55:40.902 [INFO][3854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" HandleID="k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" Jul 15 23:55:40.997716 containerd[1534]: 2025-07-15 23:55:40.902 [INFO][3854] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" HandleID="k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", "pod":"whisker-6f9bd8b799-b7ntn", "timestamp":"2025-07-15 23:55:40.902690729 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:55:40.997716 containerd[1534]: 2025-07-15 23:55:40.903 [INFO][3854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:55:40.997716 containerd[1534]: 2025-07-15 23:55:40.903 [INFO][3854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:55:40.997716 containerd[1534]: 2025-07-15 23:55:40.903 [INFO][3854] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:55:40.997716 containerd[1534]: 2025-07-15 23:55:40.911 [INFO][3854] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.997716 containerd[1534]: 2025-07-15 23:55:40.920 [INFO][3854] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.997716 containerd[1534]: 2025-07-15 23:55:40.928 [INFO][3854] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.997716 containerd[1534]: 2025-07-15 23:55:40.932 [INFO][3854] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.998117 containerd[1534]: 2025-07-15 23:55:40.936 [INFO][3854] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.998117 containerd[1534]: 2025-07-15 23:55:40.936 [INFO][3854] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 
handle="k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.998117 containerd[1534]: 2025-07-15 23:55:40.939 [INFO][3854] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c Jul 15 23:55:40.998117 containerd[1534]: 2025-07-15 23:55:40.945 [INFO][3854] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.998117 containerd[1534]: 2025-07-15 23:55:40.955 [INFO][3854] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.1/26] block=192.168.59.0/26 handle="k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.998117 containerd[1534]: 2025-07-15 23:55:40.955 [INFO][3854] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.1/26] handle="k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:40.998117 containerd[1534]: 2025-07-15 23:55:40.955 [INFO][3854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:55:40.998117 containerd[1534]: 2025-07-15 23:55:40.955 [INFO][3854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.1/26] IPv6=[] ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" HandleID="k8s-pod-network.236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" Jul 15 23:55:40.998528 containerd[1534]: 2025-07-15 23:55:40.959 [INFO][3842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Namespace="calico-system" Pod="whisker-6f9bd8b799-b7ntn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0", GenerateName:"whisker-6f9bd8b799-", Namespace:"calico-system", SelfLink:"", UID:"97619884-455c-4287-b0d8-b033da266798", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9bd8b799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"", Pod:"whisker-6f9bd8b799-b7ntn", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie487bf02349", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:40.998660 containerd[1534]: 2025-07-15 23:55:40.959 [INFO][3842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.1/32] ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Namespace="calico-system" Pod="whisker-6f9bd8b799-b7ntn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" Jul 15 23:55:40.998660 containerd[1534]: 2025-07-15 23:55:40.959 [INFO][3842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie487bf02349 ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Namespace="calico-system" Pod="whisker-6f9bd8b799-b7ntn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" Jul 15 23:55:40.998660 containerd[1534]: 2025-07-15 23:55:40.970 [INFO][3842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Namespace="calico-system" Pod="whisker-6f9bd8b799-b7ntn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" Jul 15 23:55:40.998803 containerd[1534]: 2025-07-15 23:55:40.971 [INFO][3842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Namespace="calico-system" Pod="whisker-6f9bd8b799-b7ntn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0", GenerateName:"whisker-6f9bd8b799-", Namespace:"calico-system", SelfLink:"", UID:"97619884-455c-4287-b0d8-b033da266798", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9bd8b799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c", Pod:"whisker-6f9bd8b799-b7ntn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie487bf02349", MAC:"2e:b1:5a:37:6f:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:40.998916 containerd[1534]: 2025-07-15 23:55:40.992 [INFO][3842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" Namespace="calico-system" Pod="whisker-6f9bd8b799-b7ntn" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-whisker--6f9bd8b799--b7ntn-eth0" Jul 15 23:55:41.041251 containerd[1534]: 
time="2025-07-15T23:55:41.039531316Z" level=info msg="connecting to shim 236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c" address="unix:///run/containerd/s/9cacac9eb3e003f763ab7d39e8507b48e089ced04f430b9c9bfa77cb831a5c23" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:41.081645 systemd[1]: Started cri-containerd-236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c.scope - libcontainer container 236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c. Jul 15 23:55:41.097507 kubelet[2733]: I0715 23:55:41.097303 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a95d55b-d55f-4165-b092-4987c1ed994a" path="/var/lib/kubelet/pods/2a95d55b-d55f-4165-b092-4987c1ed994a/volumes" Jul 15 23:55:41.152695 containerd[1534]: time="2025-07-15T23:55:41.152643546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9bd8b799-b7ntn,Uid:97619884-455c-4287-b0d8-b033da266798,Namespace:calico-system,Attempt:0,} returns sandbox id \"236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c\"" Jul 15 23:55:41.155701 containerd[1534]: time="2025-07-15T23:55:41.155520500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 15 23:55:41.454215 containerd[1534]: time="2025-07-15T23:55:41.454150844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36\" id:\"b0110fc2423cd6c78e32482a9e47c512945f094afe53d5aed1c89ef7a9b7ba03\" pid:3925 exit_status:1 exited_at:{seconds:1752623741 nanos:453672609}" Jul 15 23:55:42.275603 systemd-networkd[1451]: calie487bf02349: Gained IPv6LL Jul 15 23:55:42.336343 containerd[1534]: time="2025-07-15T23:55:42.334510868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:42.337069 containerd[1534]: time="2025-07-15T23:55:42.337027749Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 15 23:55:42.341071 containerd[1534]: time="2025-07-15T23:55:42.340748501Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:42.349578 containerd[1534]: time="2025-07-15T23:55:42.349524648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:42.353582 containerd[1534]: time="2025-07-15T23:55:42.353463947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.197746217s" Jul 15 23:55:42.353582 containerd[1534]: time="2025-07-15T23:55:42.353535896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 15 23:55:42.360436 containerd[1534]: time="2025-07-15T23:55:42.359690440Z" level=info msg="CreateContainer within sandbox \"236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 15 23:55:42.402934 containerd[1534]: time="2025-07-15T23:55:42.402870220Z" level=info msg="Container 4eba0d1de1bda08ca5d8aa8666dae3f8c51d849ea4deff399e13c949ecd5216e: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:42.427570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693756557.mount: Deactivated successfully. 
Jul 15 23:55:42.434569 containerd[1534]: time="2025-07-15T23:55:42.434481428Z" level=info msg="CreateContainer within sandbox \"236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4eba0d1de1bda08ca5d8aa8666dae3f8c51d849ea4deff399e13c949ecd5216e\"" Jul 15 23:55:42.436605 containerd[1534]: time="2025-07-15T23:55:42.436560307Z" level=info msg="StartContainer for \"4eba0d1de1bda08ca5d8aa8666dae3f8c51d849ea4deff399e13c949ecd5216e\"" Jul 15 23:55:42.438605 containerd[1534]: time="2025-07-15T23:55:42.438294616Z" level=info msg="connecting to shim 4eba0d1de1bda08ca5d8aa8666dae3f8c51d849ea4deff399e13c949ecd5216e" address="unix:///run/containerd/s/9cacac9eb3e003f763ab7d39e8507b48e089ced04f430b9c9bfa77cb831a5c23" protocol=ttrpc version=3 Jul 15 23:55:42.490661 systemd[1]: Started cri-containerd-4eba0d1de1bda08ca5d8aa8666dae3f8c51d849ea4deff399e13c949ecd5216e.scope - libcontainer container 4eba0d1de1bda08ca5d8aa8666dae3f8c51d849ea4deff399e13c949ecd5216e. 
Jul 15 23:55:42.659566 containerd[1534]: time="2025-07-15T23:55:42.659485335Z" level=info msg="StartContainer for \"4eba0d1de1bda08ca5d8aa8666dae3f8c51d849ea4deff399e13c949ecd5216e\" returns successfully" Jul 15 23:55:42.663177 containerd[1534]: time="2025-07-15T23:55:42.663058463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 15 23:55:43.020439 systemd-networkd[1451]: vxlan.calico: Link UP Jul 15 23:55:43.020453 systemd-networkd[1451]: vxlan.calico: Gained carrier Jul 15 23:55:43.094300 containerd[1534]: time="2025-07-15T23:55:43.094146697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699f8bbf89-4pvf7,Uid:bce2aed8-84c2-485d-9f19-a7c86b1ca9ca,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:43.320377 systemd-networkd[1451]: cali9ce69a8fde1: Link UP Jul 15 23:55:43.320727 systemd-networkd[1451]: cali9ce69a8fde1: Gained carrier Jul 15 23:55:43.352067 containerd[1534]: 2025-07-15 23:55:43.180 [INFO][4132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0 calico-kube-controllers-699f8bbf89- calico-system bce2aed8-84c2-485d-9f19-a7c86b1ca9ca 809 0 2025-07-15 23:55:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:699f8bbf89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e calico-kube-controllers-699f8bbf89-4pvf7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9ce69a8fde1 [] [] }} ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Namespace="calico-system" Pod="calico-kube-controllers-699f8bbf89-4pvf7" 
WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-" Jul 15 23:55:43.352067 containerd[1534]: 2025-07-15 23:55:43.181 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Namespace="calico-system" Pod="calico-kube-controllers-699f8bbf89-4pvf7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" Jul 15 23:55:43.352067 containerd[1534]: 2025-07-15 23:55:43.237 [INFO][4145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" HandleID="k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" Jul 15 23:55:43.353072 containerd[1534]: 2025-07-15 23:55:43.238 [INFO][4145] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" HandleID="k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", "pod":"calico-kube-controllers-699f8bbf89-4pvf7", "timestamp":"2025-07-15 23:55:43.237804264 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 
23:55:43.353072 containerd[1534]: 2025-07-15 23:55:43.238 [INFO][4145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:55:43.353072 containerd[1534]: 2025-07-15 23:55:43.238 [INFO][4145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:55:43.353072 containerd[1534]: 2025-07-15 23:55:43.238 [INFO][4145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:55:43.353072 containerd[1534]: 2025-07-15 23:55:43.261 [INFO][4145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.353072 containerd[1534]: 2025-07-15 23:55:43.271 [INFO][4145] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.353072 containerd[1534]: 2025-07-15 23:55:43.279 [INFO][4145] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.353072 containerd[1534]: 2025-07-15 23:55:43.282 [INFO][4145] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.355069 containerd[1534]: 2025-07-15 23:55:43.286 [INFO][4145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.355069 containerd[1534]: 2025-07-15 23:55:43.286 [INFO][4145] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.355069 containerd[1534]: 2025-07-15 23:55:43.290 [INFO][4145] ipam/ipam.go 1764: Creating new 
handle: k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3 Jul 15 23:55:43.355069 containerd[1534]: 2025-07-15 23:55:43.299 [INFO][4145] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.355069 containerd[1534]: 2025-07-15 23:55:43.311 [INFO][4145] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.2/26] block=192.168.59.0/26 handle="k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.355069 containerd[1534]: 2025-07-15 23:55:43.311 [INFO][4145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.2/26] handle="k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:43.355069 containerd[1534]: 2025-07-15 23:55:43.311 [INFO][4145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:55:43.355069 containerd[1534]: 2025-07-15 23:55:43.311 [INFO][4145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.2/26] IPv6=[] ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" HandleID="k8s-pod-network.518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" Jul 15 23:55:43.356599 containerd[1534]: 2025-07-15 23:55:43.314 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Namespace="calico-system" Pod="calico-kube-controllers-699f8bbf89-4pvf7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0", GenerateName:"calico-kube-controllers-699f8bbf89-", Namespace:"calico-system", SelfLink:"", UID:"bce2aed8-84c2-485d-9f19-a7c86b1ca9ca", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699f8bbf89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"", Pod:"calico-kube-controllers-699f8bbf89-4pvf7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ce69a8fde1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:43.356745 containerd[1534]: 2025-07-15 23:55:43.314 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.2/32] ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Namespace="calico-system" Pod="calico-kube-controllers-699f8bbf89-4pvf7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" Jul 15 23:55:43.356745 containerd[1534]: 2025-07-15 23:55:43.314 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ce69a8fde1 ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Namespace="calico-system" Pod="calico-kube-controllers-699f8bbf89-4pvf7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" Jul 15 23:55:43.356745 containerd[1534]: 2025-07-15 23:55:43.317 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Namespace="calico-system" Pod="calico-kube-controllers-699f8bbf89-4pvf7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" Jul 15 23:55:43.357052 containerd[1534]: 2025-07-15 23:55:43.317 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Namespace="calico-system" Pod="calico-kube-controllers-699f8bbf89-4pvf7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0", GenerateName:"calico-kube-controllers-699f8bbf89-", Namespace:"calico-system", SelfLink:"", UID:"bce2aed8-84c2-485d-9f19-a7c86b1ca9ca", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699f8bbf89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3", Pod:"calico-kube-controllers-699f8bbf89-4pvf7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ce69a8fde1", MAC:"02:f2:2a:a7:3d:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 
23:55:43.357052 containerd[1534]: 2025-07-15 23:55:43.341 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" Namespace="calico-system" Pod="calico-kube-controllers-699f8bbf89-4pvf7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--kube--controllers--699f8bbf89--4pvf7-eth0" Jul 15 23:55:43.407359 containerd[1534]: time="2025-07-15T23:55:43.407252657Z" level=info msg="connecting to shim 518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3" address="unix:///run/containerd/s/2b1a467735fbc1c4981ad13b2d4e4901aee24e05b2ef491f3cad817b5c82a8fe" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:43.468612 systemd[1]: Started cri-containerd-518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3.scope - libcontainer container 518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3. Jul 15 23:55:43.585818 containerd[1534]: time="2025-07-15T23:55:43.585177729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699f8bbf89-4pvf7,Uid:bce2aed8-84c2-485d-9f19-a7c86b1ca9ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3\"" Jul 15 23:55:44.092239 containerd[1534]: time="2025-07-15T23:55:44.092162977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tv2j7,Uid:94013cd9-0fca-45b1-8726-fe5dcdfdb4c8,Namespace:kube-system,Attempt:0,}" Jul 15 23:55:44.110997 containerd[1534]: time="2025-07-15T23:55:44.110628578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bzlwj,Uid:f4502152-60dc-474c-8c1a-6ab9eb29ce8c,Namespace:kube-system,Attempt:0,}" Jul 15 23:55:44.443141 systemd-networkd[1451]: calic62b258820f: Link UP Jul 15 23:55:44.443521 systemd-networkd[1451]: calic62b258820f: Gained carrier Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.218 
[INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0 coredns-668d6bf9bc- kube-system 94013cd9-0fca-45b1-8726-fe5dcdfdb4c8 806 0 2025-07-15 23:55:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e coredns-668d6bf9bc-tv2j7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic62b258820f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv2j7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.219 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv2j7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.317 [INFO][4271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" HandleID="k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.319 [INFO][4271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" 
HandleID="k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000342940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", "pod":"coredns-668d6bf9bc-tv2j7", "timestamp":"2025-07-15 23:55:44.317055297 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.319 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.319 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.320 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.341 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.352 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.366 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.372 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.380 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.380 [INFO][4271] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.386 [INFO][4271] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00 Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.397 [INFO][4271] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 
handle="k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.418 [INFO][4271] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.3/26] block=192.168.59.0/26 handle="k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.418 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.3/26] handle="k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.420 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:55:44.495635 containerd[1534]: 2025-07-15 23:55:44.420 [INFO][4271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.3/26] IPv6=[] ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" HandleID="k8s-pod-network.7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" Jul 15 23:55:44.499775 containerd[1534]: 2025-07-15 23:55:44.432 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv2j7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0", GenerateName:"coredns-668d6bf9bc-", 
Namespace:"kube-system", SelfLink:"", UID:"94013cd9-0fca-45b1-8726-fe5dcdfdb4c8", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"", Pod:"coredns-668d6bf9bc-tv2j7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic62b258820f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:44.499775 containerd[1534]: 2025-07-15 23:55:44.433 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.3/32] ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv2j7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" Jul 15 23:55:44.499775 
containerd[1534]: 2025-07-15 23:55:44.433 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic62b258820f ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv2j7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" Jul 15 23:55:44.499775 containerd[1534]: 2025-07-15 23:55:44.448 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv2j7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" Jul 15 23:55:44.499775 containerd[1534]: 2025-07-15 23:55:44.453 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv2j7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94013cd9-0fca-45b1-8726-fe5dcdfdb4c8", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00", Pod:"coredns-668d6bf9bc-tv2j7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic62b258820f", MAC:"8e:f0:a9:bc:19:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:44.499775 containerd[1534]: 2025-07-15 23:55:44.484 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv2j7" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--tv2j7-eth0" Jul 15 23:55:44.595490 systemd-networkd[1451]: calicdaf459d91a: Link UP Jul 15 23:55:44.605380 systemd-networkd[1451]: calicdaf459d91a: Gained carrier Jul 15 23:55:44.621653 containerd[1534]: time="2025-07-15T23:55:44.621587510Z" level=info msg="connecting to shim 7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00" address="unix:///run/containerd/s/38e80b101fe537c882740c1439957b124130936305dc5e00a09b0e3dea3fa534" namespace=k8s.io 
protocol=ttrpc version=3 Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.294 [INFO][4256] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0 coredns-668d6bf9bc- kube-system f4502152-60dc-474c-8c1a-6ab9eb29ce8c 808 0 2025-07-15 23:55:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e coredns-668d6bf9bc-bzlwj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicdaf459d91a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Namespace="kube-system" Pod="coredns-668d6bf9bc-bzlwj" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.295 [INFO][4256] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Namespace="kube-system" Pod="coredns-668d6bf9bc-bzlwj" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.413 [INFO][4278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" HandleID="k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.413 [INFO][4278] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" HandleID="k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003352f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", "pod":"coredns-668d6bf9bc-bzlwj", "timestamp":"2025-07-15 23:55:44.413013513 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.413 [INFO][4278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.419 [INFO][4278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.419 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.445 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.459 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.490 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.501 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.512 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.513 [INFO][4278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.519 [INFO][4278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.530 [INFO][4278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 
handle="k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.550 [INFO][4278] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.4/26] block=192.168.59.0/26 handle="k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.550 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.4/26] handle="k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.551 [INFO][4278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:55:44.650437 containerd[1534]: 2025-07-15 23:55:44.551 [INFO][4278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.4/26] IPv6=[] ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" HandleID="k8s-pod-network.50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" Jul 15 23:55:44.653150 containerd[1534]: 2025-07-15 23:55:44.561 [INFO][4256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Namespace="kube-system" Pod="coredns-668d6bf9bc-bzlwj" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0", GenerateName:"coredns-668d6bf9bc-", 
Namespace:"kube-system", SelfLink:"", UID:"f4502152-60dc-474c-8c1a-6ab9eb29ce8c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"", Pod:"coredns-668d6bf9bc-bzlwj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdaf459d91a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:44.653150 containerd[1534]: 2025-07-15 23:55:44.564 [INFO][4256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.4/32] ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Namespace="kube-system" Pod="coredns-668d6bf9bc-bzlwj" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" Jul 15 23:55:44.653150 
containerd[1534]: 2025-07-15 23:55:44.564 [INFO][4256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdaf459d91a ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Namespace="kube-system" Pod="coredns-668d6bf9bc-bzlwj" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" Jul 15 23:55:44.653150 containerd[1534]: 2025-07-15 23:55:44.608 [INFO][4256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Namespace="kube-system" Pod="coredns-668d6bf9bc-bzlwj" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" Jul 15 23:55:44.653150 containerd[1534]: 2025-07-15 23:55:44.610 [INFO][4256] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Namespace="kube-system" Pod="coredns-668d6bf9bc-bzlwj" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f4502152-60dc-474c-8c1a-6ab9eb29ce8c", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd", Pod:"coredns-668d6bf9bc-bzlwj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdaf459d91a", MAC:"46:ee:71:6a:61:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:44.653150 containerd[1534]: 2025-07-15 23:55:44.631 [INFO][4256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" Namespace="kube-system" Pod="coredns-668d6bf9bc-bzlwj" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-coredns--668d6bf9bc--bzlwj-eth0" Jul 15 23:55:44.744260 containerd[1534]: time="2025-07-15T23:55:44.742406232Z" level=info msg="connecting to shim 50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd" address="unix:///run/containerd/s/4cbae0d98a57bd6b1eb951b46cd95ae5a31ed2b4313cda0de7a1d7aeb9e707a3" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:44.748892 systemd[1]: Started cri-containerd-7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00.scope - 
libcontainer container 7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00. Jul 15 23:55:44.836583 systemd[1]: Started cri-containerd-50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd.scope - libcontainer container 50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd. Jul 15 23:55:44.936918 containerd[1534]: time="2025-07-15T23:55:44.936848581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tv2j7,Uid:94013cd9-0fca-45b1-8726-fe5dcdfdb4c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00\"" Jul 15 23:55:44.948735 containerd[1534]: time="2025-07-15T23:55:44.948652098Z" level=info msg="CreateContainer within sandbox \"7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:55:44.966588 containerd[1534]: time="2025-07-15T23:55:44.966527516Z" level=info msg="Container c54bcef0d8e981d5f746cfc8e5e739b84b1320a75be644cf1d845812929ed898: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:44.980839 containerd[1534]: time="2025-07-15T23:55:44.980679496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bzlwj,Uid:f4502152-60dc-474c-8c1a-6ab9eb29ce8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd\"" Jul 15 23:55:44.982896 containerd[1534]: time="2025-07-15T23:55:44.982818428Z" level=info msg="CreateContainer within sandbox \"7d4e48f02b6d16fc7e98b2ce32c1ea4694dfa6722c9319f0d4cb337c04bd7d00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c54bcef0d8e981d5f746cfc8e5e739b84b1320a75be644cf1d845812929ed898\"" Jul 15 23:55:44.987651 containerd[1534]: time="2025-07-15T23:55:44.985552815Z" level=info msg="StartContainer for \"c54bcef0d8e981d5f746cfc8e5e739b84b1320a75be644cf1d845812929ed898\"" Jul 15 23:55:44.991306 containerd[1534]: 
time="2025-07-15T23:55:44.991252835Z" level=info msg="CreateContainer within sandbox \"50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 23:55:44.993273 containerd[1534]: time="2025-07-15T23:55:44.993225141Z" level=info msg="connecting to shim c54bcef0d8e981d5f746cfc8e5e739b84b1320a75be644cf1d845812929ed898" address="unix:///run/containerd/s/38e80b101fe537c882740c1439957b124130936305dc5e00a09b0e3dea3fa534" protocol=ttrpc version=3 Jul 15 23:55:45.022641 containerd[1534]: time="2025-07-15T23:55:45.022485095Z" level=info msg="Container c979bbc43bc9f3242ead98d712b3b02d8b7b924cb6674d398732490648c1b7e7: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:45.038094 containerd[1534]: time="2025-07-15T23:55:45.037807793Z" level=info msg="CreateContainer within sandbox \"50c24a73cc9fca1e8937f39c4527fa85bfa59008c4ccac4f75172a23758dfbbd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c979bbc43bc9f3242ead98d712b3b02d8b7b924cb6674d398732490648c1b7e7\"" Jul 15 23:55:45.039793 containerd[1534]: time="2025-07-15T23:55:45.039704280Z" level=info msg="StartContainer for \"c979bbc43bc9f3242ead98d712b3b02d8b7b924cb6674d398732490648c1b7e7\"" Jul 15 23:55:45.043832 containerd[1534]: time="2025-07-15T23:55:45.043718846Z" level=info msg="connecting to shim c979bbc43bc9f3242ead98d712b3b02d8b7b924cb6674d398732490648c1b7e7" address="unix:///run/containerd/s/4cbae0d98a57bd6b1eb951b46cd95ae5a31ed2b4313cda0de7a1d7aeb9e707a3" protocol=ttrpc version=3 Jul 15 23:55:45.056072 systemd[1]: Started cri-containerd-c54bcef0d8e981d5f746cfc8e5e739b84b1320a75be644cf1d845812929ed898.scope - libcontainer container c54bcef0d8e981d5f746cfc8e5e739b84b1320a75be644cf1d845812929ed898. 
Jul 15 23:55:45.093588 systemd-networkd[1451]: vxlan.calico: Gained IPv6LL Jul 15 23:55:45.094839 systemd[1]: Started cri-containerd-c979bbc43bc9f3242ead98d712b3b02d8b7b924cb6674d398732490648c1b7e7.scope - libcontainer container c979bbc43bc9f3242ead98d712b3b02d8b7b924cb6674d398732490648c1b7e7. Jul 15 23:55:45.155547 systemd-networkd[1451]: cali9ce69a8fde1: Gained IPv6LL Jul 15 23:55:45.194086 containerd[1534]: time="2025-07-15T23:55:45.193923709Z" level=info msg="StartContainer for \"c54bcef0d8e981d5f746cfc8e5e739b84b1320a75be644cf1d845812929ed898\" returns successfully" Jul 15 23:55:45.256721 containerd[1534]: time="2025-07-15T23:55:45.256663922Z" level=info msg="StartContainer for \"c979bbc43bc9f3242ead98d712b3b02d8b7b924cb6674d398732490648c1b7e7\" returns successfully" Jul 15 23:55:45.468609 kubelet[2733]: I0715 23:55:45.468436 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bzlwj" podStartSLOduration=42.468407176 podStartE2EDuration="42.468407176s" podCreationTimestamp="2025-07-15 23:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:55:45.43573348 +0000 UTC m=+48.556317735" watchObservedRunningTime="2025-07-15 23:55:45.468407176 +0000 UTC m=+48.588991430" Jul 15 23:55:45.810246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968066187.mount: Deactivated successfully. 
Jul 15 23:55:45.837261 containerd[1534]: time="2025-07-15T23:55:45.837160522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:45.838675 containerd[1534]: time="2025-07-15T23:55:45.838623802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 15 23:55:45.840417 containerd[1534]: time="2025-07-15T23:55:45.840329098Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:45.844409 containerd[1534]: time="2025-07-15T23:55:45.844352705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:45.845719 containerd[1534]: time="2025-07-15T23:55:45.845549805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.182410982s" Jul 15 23:55:45.845719 containerd[1534]: time="2025-07-15T23:55:45.845599101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 15 23:55:45.847466 containerd[1534]: time="2025-07-15T23:55:45.847422273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 15 23:55:45.849049 containerd[1534]: time="2025-07-15T23:55:45.848966041Z" level=info msg="CreateContainer within sandbox 
\"236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 15 23:55:45.862343 containerd[1534]: time="2025-07-15T23:55:45.862271636Z" level=info msg="Container a4dd1beab1fae0ac555fd300888aba96c618a8aeebceea5f8f88c718dd4a9d55: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:45.876556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227803235.mount: Deactivated successfully. Jul 15 23:55:45.885520 containerd[1534]: time="2025-07-15T23:55:45.885462164Z" level=info msg="CreateContainer within sandbox \"236061121bbd62c3345eab31bcd550029bf7c03edea4590d54dcfcd05552622c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a4dd1beab1fae0ac555fd300888aba96c618a8aeebceea5f8f88c718dd4a9d55\"" Jul 15 23:55:45.886822 containerd[1534]: time="2025-07-15T23:55:45.886772078Z" level=info msg="StartContainer for \"a4dd1beab1fae0ac555fd300888aba96c618a8aeebceea5f8f88c718dd4a9d55\"" Jul 15 23:55:45.888751 containerd[1534]: time="2025-07-15T23:55:45.888698631Z" level=info msg="connecting to shim a4dd1beab1fae0ac555fd300888aba96c618a8aeebceea5f8f88c718dd4a9d55" address="unix:///run/containerd/s/9cacac9eb3e003f763ab7d39e8507b48e089ced04f430b9c9bfa77cb831a5c23" protocol=ttrpc version=3 Jul 15 23:55:45.930636 systemd[1]: Started cri-containerd-a4dd1beab1fae0ac555fd300888aba96c618a8aeebceea5f8f88c718dd4a9d55.scope - libcontainer container a4dd1beab1fae0ac555fd300888aba96c618a8aeebceea5f8f88c718dd4a9d55. 
Jul 15 23:55:45.987594 systemd-networkd[1451]: calicdaf459d91a: Gained IPv6LL Jul 15 23:55:46.023564 containerd[1534]: time="2025-07-15T23:55:46.023499435Z" level=info msg="StartContainer for \"a4dd1beab1fae0ac555fd300888aba96c618a8aeebceea5f8f88c718dd4a9d55\" returns successfully" Jul 15 23:55:46.093049 containerd[1534]: time="2025-07-15T23:55:46.092884794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddbdf8fc-dj5r2,Uid:d172420e-cfb9-4d22-b2f9-122d65ad0fd7,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:55:46.243544 systemd-networkd[1451]: calic62b258820f: Gained IPv6LL Jul 15 23:55:46.337181 systemd-networkd[1451]: calia76c4bb6a91: Link UP Jul 15 23:55:46.342086 systemd-networkd[1451]: calia76c4bb6a91: Gained carrier Jul 15 23:55:46.371052 kubelet[2733]: I0715 23:55:46.370586 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tv2j7" podStartSLOduration=43.370556709 podStartE2EDuration="43.370556709s" podCreationTimestamp="2025-07-15 23:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:55:45.473949796 +0000 UTC m=+48.594534061" watchObservedRunningTime="2025-07-15 23:55:46.370556709 +0000 UTC m=+49.491140963" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.198 [INFO][4505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0 calico-apiserver-5ddbdf8fc- calico-apiserver d172420e-cfb9-4d22-b2f9-122d65ad0fd7 810 0 2025-07-15 23:55:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddbdf8fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e calico-apiserver-5ddbdf8fc-dj5r2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia76c4bb6a91 [] [] }} ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-dj5r2" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.198 [INFO][4505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-dj5r2" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.265 [INFO][4519] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" HandleID="k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.267 [INFO][4519] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" HandleID="k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", "pod":"calico-apiserver-5ddbdf8fc-dj5r2", "timestamp":"2025-07-15 
23:55:46.265192482 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.267 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.267 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.267 [INFO][4519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.278 [INFO][4519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.287 [INFO][4519] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.295 [INFO][4519] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.299 [INFO][4519] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.304 [INFO][4519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.304 [INFO][4519] ipam/ipam.go 1220: Attempting to assign 1 
addresses from block block=192.168.59.0/26 handle="k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.308 [INFO][4519] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.313 [INFO][4519] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.324 [INFO][4519] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.5/26] block=192.168.59.0/26 handle="k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.324 [INFO][4519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.5/26] handle="k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.325 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:55:46.377826 containerd[1534]: 2025-07-15 23:55:46.325 [INFO][4519] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.5/26] IPv6=[] ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" HandleID="k8s-pod-network.7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" Jul 15 23:55:46.381667 containerd[1534]: 2025-07-15 23:55:46.330 [INFO][4505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-dj5r2" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0", GenerateName:"calico-apiserver-5ddbdf8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d172420e-cfb9-4d22-b2f9-122d65ad0fd7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddbdf8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"", 
Pod:"calico-apiserver-5ddbdf8fc-dj5r2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia76c4bb6a91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:46.381667 containerd[1534]: 2025-07-15 23:55:46.330 [INFO][4505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.5/32] ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-dj5r2" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" Jul 15 23:55:46.381667 containerd[1534]: 2025-07-15 23:55:46.330 [INFO][4505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia76c4bb6a91 ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-dj5r2" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" Jul 15 23:55:46.381667 containerd[1534]: 2025-07-15 23:55:46.341 [INFO][4505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-dj5r2" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" Jul 15 23:55:46.381667 containerd[1534]: 2025-07-15 23:55:46.342 [INFO][4505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Namespace="calico-apiserver" 
Pod="calico-apiserver-5ddbdf8fc-dj5r2" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0", GenerateName:"calico-apiserver-5ddbdf8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d172420e-cfb9-4d22-b2f9-122d65ad0fd7", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddbdf8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d", Pod:"calico-apiserver-5ddbdf8fc-dj5r2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia76c4bb6a91", MAC:"5e:42:94:b9:51:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:46.381667 containerd[1534]: 2025-07-15 23:55:46.372 [INFO][4505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-dj5r2" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--dj5r2-eth0" Jul 15 23:55:46.447354 containerd[1534]: time="2025-07-15T23:55:46.446531700Z" level=info msg="connecting to shim 7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d" address="unix:///run/containerd/s/8708536b393d4fdcbc142b489cd544d4356313c5e6079cc25e02d4d140c1f9b2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:46.488845 kubelet[2733]: I0715 23:55:46.487401 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6f9bd8b799-b7ntn" podStartSLOduration=1.7942462 podStartE2EDuration="6.486207226s" podCreationTimestamp="2025-07-15 23:55:40 +0000 UTC" firstStartedPulling="2025-07-15 23:55:41.154894201 +0000 UTC m=+44.275478447" lastFinishedPulling="2025-07-15 23:55:45.846855226 +0000 UTC m=+48.967439473" observedRunningTime="2025-07-15 23:55:46.478631519 +0000 UTC m=+49.599215772" watchObservedRunningTime="2025-07-15 23:55:46.486207226 +0000 UTC m=+49.606791479" Jul 15 23:55:46.529598 systemd[1]: Started cri-containerd-7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d.scope - libcontainer container 7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d. 
Jul 15 23:55:46.786756 containerd[1534]: time="2025-07-15T23:55:46.786189731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddbdf8fc-dj5r2,Uid:d172420e-cfb9-4d22-b2f9-122d65ad0fd7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d\"" Jul 15 23:55:47.096052 containerd[1534]: time="2025-07-15T23:55:47.095759434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddbdf8fc-5k246,Uid:faca8f11-9007-4d01-9d45-1928fcf59378,Namespace:calico-apiserver,Attempt:0,}" Jul 15 23:55:47.097824 containerd[1534]: time="2025-07-15T23:55:47.097650330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4nkh,Uid:780f5aa2-cfc2-4e2b-9f15-499cb6a49a94,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:47.717343 systemd-networkd[1451]: cali70d47dcd10d: Link UP Jul 15 23:55:47.731960 systemd-networkd[1451]: cali70d47dcd10d: Gained carrier Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.483 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0 calico-apiserver-5ddbdf8fc- calico-apiserver faca8f11-9007-4d01-9d45-1928fcf59378 807 0 2025-07-15 23:55:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddbdf8fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e calico-apiserver-5ddbdf8fc-5k246 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali70d47dcd10d [] [] }} ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Namespace="calico-apiserver" 
Pod="calico-apiserver-5ddbdf8fc-5k246" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.483 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-5k246" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.608 [INFO][4610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" HandleID="k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.609 [INFO][4610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" HandleID="k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033f940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", "pod":"calico-apiserver-5ddbdf8fc-5k246", "timestamp":"2025-07-15 23:55:47.608795356 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 
23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.609 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.609 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.609 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.622 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.634 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.646 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.652 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.659 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.659 [INFO][4610] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.663 [INFO][4610] ipam/ipam.go 1764: Creating new 
handle: k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.673 [INFO][4610] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.695 [INFO][4610] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.6/26] block=192.168.59.0/26 handle="k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.695 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.6/26] handle="k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.695 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:55:47.784184 containerd[1534]: 2025-07-15 23:55:47.695 [INFO][4610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.6/26] IPv6=[] ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" HandleID="k8s-pod-network.92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" Jul 15 23:55:47.786665 containerd[1534]: 2025-07-15 23:55:47.700 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-5k246" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0", GenerateName:"calico-apiserver-5ddbdf8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"faca8f11-9007-4d01-9d45-1928fcf59378", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddbdf8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"", 
Pod:"calico-apiserver-5ddbdf8fc-5k246", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70d47dcd10d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:47.786665 containerd[1534]: 2025-07-15 23:55:47.700 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.6/32] ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-5k246" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" Jul 15 23:55:47.786665 containerd[1534]: 2025-07-15 23:55:47.700 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70d47dcd10d ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-5k246" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" Jul 15 23:55:47.786665 containerd[1534]: 2025-07-15 23:55:47.735 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-5k246" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" Jul 15 23:55:47.786665 containerd[1534]: 2025-07-15 23:55:47.743 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Namespace="calico-apiserver" 
Pod="calico-apiserver-5ddbdf8fc-5k246" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0", GenerateName:"calico-apiserver-5ddbdf8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"faca8f11-9007-4d01-9d45-1928fcf59378", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddbdf8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf", Pod:"calico-apiserver-5ddbdf8fc-5k246", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70d47dcd10d", MAC:"a6:d5:65:bd:d7:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:47.786665 containerd[1534]: 2025-07-15 23:55:47.776 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" Namespace="calico-apiserver" Pod="calico-apiserver-5ddbdf8fc-5k246" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-calico--apiserver--5ddbdf8fc--5k246-eth0" Jul 15 23:55:47.890999 systemd-networkd[1451]: calie0ea19c575e: Link UP Jul 15 23:55:47.899472 systemd-networkd[1451]: calie0ea19c575e: Gained carrier Jul 15 23:55:47.922765 containerd[1534]: time="2025-07-15T23:55:47.922491189Z" level=info msg="connecting to shim 92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf" address="unix:///run/containerd/s/31b6267df115d576ba2850cf2ed7ae8aaf86d807a5793304c3ddcabf949eb23b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.476 [INFO][4587] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0 csi-node-driver- calico-system 780f5aa2-cfc2-4e2b-9f15-499cb6a49a94 755 0 2025-07-15 23:55:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e csi-node-driver-p4nkh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie0ea19c575e [] [] }} ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Namespace="calico-system" Pod="csi-node-driver-p4nkh" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.479 [INFO][4587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Namespace="calico-system" Pod="csi-node-driver-p4nkh" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.635 [INFO][4608] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" HandleID="k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.636 [INFO][4608] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" HandleID="k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00022d460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", "pod":"csi-node-driver-p4nkh", "timestamp":"2025-07-15 23:55:47.635629724 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.636 [INFO][4608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.695 [INFO][4608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.696 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.738 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.762 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.782 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.791 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.800 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.801 [INFO][4608] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.805 [INFO][4608] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81 Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.819 [INFO][4608] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 
handle="k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.850 [INFO][4608] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.7/26] block=192.168.59.0/26 handle="k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.850 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.7/26] handle="k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.850 [INFO][4608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 15 23:55:47.944505 containerd[1534]: 2025-07-15 23:55:47.850 [INFO][4608] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.7/26] IPv6=[] ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" HandleID="k8s-pod-network.cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" Jul 15 23:55:47.947186 containerd[1534]: 2025-07-15 23:55:47.856 [INFO][4587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Namespace="calico-system" Pod="csi-node-driver-p4nkh" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"780f5aa2-cfc2-4e2b-9f15-499cb6a49a94", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"", Pod:"csi-node-driver-p4nkh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0ea19c575e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:47.947186 containerd[1534]: 2025-07-15 23:55:47.857 [INFO][4587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.7/32] ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Namespace="calico-system" Pod="csi-node-driver-p4nkh" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" Jul 15 23:55:47.947186 containerd[1534]: 2025-07-15 23:55:47.858 [INFO][4587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0ea19c575e ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Namespace="calico-system" Pod="csi-node-driver-p4nkh" 
WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" Jul 15 23:55:47.947186 containerd[1534]: 2025-07-15 23:55:47.901 [INFO][4587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Namespace="calico-system" Pod="csi-node-driver-p4nkh" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" Jul 15 23:55:47.947186 containerd[1534]: 2025-07-15 23:55:47.903 [INFO][4587] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Namespace="calico-system" Pod="csi-node-driver-p4nkh" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"780f5aa2-cfc2-4e2b-9f15-499cb6a49a94", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81", Pod:"csi-node-driver-p4nkh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0ea19c575e", MAC:"fe:4c:06:f1:3d:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:47.947186 containerd[1534]: 2025-07-15 23:55:47.932 [INFO][4587] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" Namespace="calico-system" Pod="csi-node-driver-p4nkh" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-csi--node--driver--p4nkh-eth0" Jul 15 23:55:48.017984 systemd[1]: Started cri-containerd-92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf.scope - libcontainer container 92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf. Jul 15 23:55:48.101029 containerd[1534]: time="2025-07-15T23:55:48.100545876Z" level=info msg="connecting to shim cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81" address="unix:///run/containerd/s/d8bf8e3ac7524e25195b8926d297dbbe58a83be83f818e8837f1ae419553e464" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:48.179117 systemd[1]: Started cri-containerd-cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81.scope - libcontainer container cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81. 
Jul 15 23:55:48.271157 containerd[1534]: time="2025-07-15T23:55:48.270876527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddbdf8fc-5k246,Uid:faca8f11-9007-4d01-9d45-1928fcf59378,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf\"" Jul 15 23:55:48.335073 containerd[1534]: time="2025-07-15T23:55:48.334912723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p4nkh,Uid:780f5aa2-cfc2-4e2b-9f15-499cb6a49a94,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81\"" Jul 15 23:55:48.355654 systemd-networkd[1451]: calia76c4bb6a91: Gained IPv6LL Jul 15 23:55:48.868630 systemd-networkd[1451]: cali70d47dcd10d: Gained IPv6LL Jul 15 23:55:49.097051 containerd[1534]: time="2025-07-15T23:55:49.096718868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vbx59,Uid:7c4d384c-6618-4b09-b899-3709c4eb3554,Namespace:calico-system,Attempt:0,}" Jul 15 23:55:49.315731 systemd-networkd[1451]: calie0ea19c575e: Gained IPv6LL Jul 15 23:55:49.412047 systemd-networkd[1451]: cali9bc73ec488f: Link UP Jul 15 23:55:49.413358 systemd-networkd[1451]: cali9bc73ec488f: Gained carrier Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.212 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0 goldmane-768f4c5c69- calico-system 7c4d384c-6618-4b09-b899-3709c4eb3554 812 0 2025-07-15 23:55:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e goldmane-768f4c5c69-vbx59 eth0 
goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9bc73ec488f [] [] }} ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Namespace="calico-system" Pod="goldmane-768f4c5c69-vbx59" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.213 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Namespace="calico-system" Pod="goldmane-768f4c5c69-vbx59" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.299 [INFO][4755] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" HandleID="k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.299 [INFO][4755] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" HandleID="k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", "pod":"goldmane-768f4c5c69-vbx59", "timestamp":"2025-07-15 23:55:49.299031746 +0000 UTC"}, Hostname:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.300 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.300 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.300 [INFO][4755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e' Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.318 [INFO][4755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.330 [INFO][4755] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.341 [INFO][4755] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.346 [INFO][4755] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.353 [INFO][4755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.353 [INFO][4755] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" 
host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.358 [INFO][4755] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331 Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.370 [INFO][4755] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.388 [INFO][4755] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.8/26] block=192.168.59.0/26 handle="k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.388 [INFO][4755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.8/26] handle="k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" host="ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e" Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.389 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 15 23:55:49.460852 containerd[1534]: 2025-07-15 23:55:49.389 [INFO][4755] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.8/26] IPv6=[] ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" HandleID="k8s-pod-network.b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Workload="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" Jul 15 23:55:49.466066 containerd[1534]: 2025-07-15 23:55:49.397 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Namespace="calico-system" Pod="goldmane-768f4c5c69-vbx59" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7c4d384c-6618-4b09-b899-3709c4eb3554", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"", Pod:"goldmane-768f4c5c69-vbx59", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bc73ec488f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:49.466066 containerd[1534]: 2025-07-15 23:55:49.399 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.8/32] ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Namespace="calico-system" Pod="goldmane-768f4c5c69-vbx59" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" Jul 15 23:55:49.466066 containerd[1534]: 2025-07-15 23:55:49.399 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bc73ec488f ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Namespace="calico-system" Pod="goldmane-768f4c5c69-vbx59" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" Jul 15 23:55:49.466066 containerd[1534]: 2025-07-15 23:55:49.412 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Namespace="calico-system" Pod="goldmane-768f4c5c69-vbx59" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" Jul 15 23:55:49.466066 containerd[1534]: 2025-07-15 23:55:49.416 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Namespace="calico-system" Pod="goldmane-768f4c5c69-vbx59" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7c4d384c-6618-4b09-b899-3709c4eb3554", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 15, 23, 55, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372-0-1-nightly-20250715-2100-dcb159b2828e22e37b2e", ContainerID:"b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331", Pod:"goldmane-768f4c5c69-vbx59", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bc73ec488f", MAC:"16:d8:83:1d:6f:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 15 23:55:49.466066 containerd[1534]: 2025-07-15 23:55:49.453 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" Namespace="calico-system" Pod="goldmane-768f4c5c69-vbx59" WorkloadEndpoint="ci--4372--0--1--nightly--20250715--2100--dcb159b2828e22e37b2e-k8s-goldmane--768f4c5c69--vbx59-eth0" Jul 15 23:55:49.520119 
containerd[1534]: time="2025-07-15T23:55:49.520062514Z" level=info msg="connecting to shim b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331" address="unix:///run/containerd/s/bea11774eb1e8ea6d20b7073686ae619cc11ebf7fd64c5651dd277977a3d0f27" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:49.598631 systemd[1]: Started cri-containerd-b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331.scope - libcontainer container b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331. Jul 15 23:55:49.737564 containerd[1534]: time="2025-07-15T23:55:49.737343958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-vbx59,Uid:7c4d384c-6618-4b09-b899-3709c4eb3554,Namespace:calico-system,Attempt:0,} returns sandbox id \"b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331\"" Jul 15 23:55:50.042199 containerd[1534]: time="2025-07-15T23:55:50.042044035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:50.044742 containerd[1534]: time="2025-07-15T23:55:50.044676974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 15 23:55:50.046354 containerd[1534]: time="2025-07-15T23:55:50.046295107Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:50.051994 containerd[1534]: time="2025-07-15T23:55:50.051940596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:50.054543 containerd[1534]: time="2025-07-15T23:55:50.054487914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with 
image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.20700992s" Jul 15 23:55:50.054543 containerd[1534]: time="2025-07-15T23:55:50.054545234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 15 23:55:50.057934 containerd[1534]: time="2025-07-15T23:55:50.057877421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:55:50.080801 containerd[1534]: time="2025-07-15T23:55:50.080747800Z" level=info msg="CreateContainer within sandbox \"518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 15 23:55:50.126334 containerd[1534]: time="2025-07-15T23:55:50.126253542Z" level=info msg="Container 9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:50.143953 containerd[1534]: time="2025-07-15T23:55:50.143793182Z" level=info msg="CreateContainer within sandbox \"518e1054b7077a029d85d429d37013303c02abbfc2933ed6a75b05d8ff760aa3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80\"" Jul 15 23:55:50.145333 containerd[1534]: time="2025-07-15T23:55:50.145216073Z" level=info msg="StartContainer for \"9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80\"" Jul 15 23:55:50.147728 containerd[1534]: time="2025-07-15T23:55:50.147676950Z" level=info msg="connecting to shim 9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80" 
address="unix:///run/containerd/s/2b1a467735fbc1c4981ad13b2d4e4901aee24e05b2ef491f3cad817b5c82a8fe" protocol=ttrpc version=3 Jul 15 23:55:50.186000 systemd[1]: Started cri-containerd-9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80.scope - libcontainer container 9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80. Jul 15 23:55:50.291858 containerd[1534]: time="2025-07-15T23:55:50.291660071Z" level=info msg="StartContainer for \"9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80\" returns successfully" Jul 15 23:55:50.511448 kubelet[2733]: I0715 23:55:50.511215 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-699f8bbf89-4pvf7" podStartSLOduration=24.042672869 podStartE2EDuration="30.510765266s" podCreationTimestamp="2025-07-15 23:55:20 +0000 UTC" firstStartedPulling="2025-07-15 23:55:43.589092026 +0000 UTC m=+46.709676259" lastFinishedPulling="2025-07-15 23:55:50.057184414 +0000 UTC m=+53.177768656" observedRunningTime="2025-07-15 23:55:50.507595949 +0000 UTC m=+53.628180201" watchObservedRunningTime="2025-07-15 23:55:50.510765266 +0000 UTC m=+53.631349520" Jul 15 23:55:51.299734 systemd-networkd[1451]: cali9bc73ec488f: Gained IPv6LL Jul 15 23:55:51.732103 containerd[1534]: time="2025-07-15T23:55:51.732052407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80\" id:\"34fd745ae2760652652230feb2aa8b908d2377e3927b47f237e7426e16d06072\" pid:4877 exited_at:{seconds:1752623751 nanos:730404266}" Jul 15 23:55:53.086137 containerd[1534]: time="2025-07-15T23:55:53.086055809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:53.087620 containerd[1534]: time="2025-07-15T23:55:53.087552394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active 
requests=0, bytes read=47317977" Jul 15 23:55:53.089642 containerd[1534]: time="2025-07-15T23:55:53.089561792Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:53.094302 containerd[1534]: time="2025-07-15T23:55:53.094108987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:53.095153 containerd[1534]: time="2025-07-15T23:55:53.095085104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.037160581s" Jul 15 23:55:53.095153 containerd[1534]: time="2025-07-15T23:55:53.095128070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:55:53.097542 containerd[1534]: time="2025-07-15T23:55:53.097483753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 15 23:55:53.099664 containerd[1534]: time="2025-07-15T23:55:53.099608313Z" level=info msg="CreateContainer within sandbox \"7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:55:53.114767 containerd[1534]: time="2025-07-15T23:55:53.114703439Z" level=info msg="Container d55373c728c7340fe6d92e2a465a09dbd3c9bb5d8e8a48627553468bb5616b24: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:53.137882 containerd[1534]: time="2025-07-15T23:55:53.137814117Z" level=info 
msg="CreateContainer within sandbox \"7b9e56da9f307b5e3b2de81ecd44051bdbee7ca5c4325b48a78455eac017ea3d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d55373c728c7340fe6d92e2a465a09dbd3c9bb5d8e8a48627553468bb5616b24\"" Jul 15 23:55:53.138810 containerd[1534]: time="2025-07-15T23:55:53.138671906Z" level=info msg="StartContainer for \"d55373c728c7340fe6d92e2a465a09dbd3c9bb5d8e8a48627553468bb5616b24\"" Jul 15 23:55:53.141059 containerd[1534]: time="2025-07-15T23:55:53.141008905Z" level=info msg="connecting to shim d55373c728c7340fe6d92e2a465a09dbd3c9bb5d8e8a48627553468bb5616b24" address="unix:///run/containerd/s/8708536b393d4fdcbc142b489cd544d4356313c5e6079cc25e02d4d140c1f9b2" protocol=ttrpc version=3 Jul 15 23:55:53.179679 systemd[1]: Started cri-containerd-d55373c728c7340fe6d92e2a465a09dbd3c9bb5d8e8a48627553468bb5616b24.scope - libcontainer container d55373c728c7340fe6d92e2a465a09dbd3c9bb5d8e8a48627553468bb5616b24. Jul 15 23:55:53.263570 containerd[1534]: time="2025-07-15T23:55:53.263499670Z" level=info msg="StartContainer for \"d55373c728c7340fe6d92e2a465a09dbd3c9bb5d8e8a48627553468bb5616b24\" returns successfully" Jul 15 23:55:53.313278 containerd[1534]: time="2025-07-15T23:55:53.313215829Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:53.316337 containerd[1534]: time="2025-07-15T23:55:53.315333094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 15 23:55:53.318942 containerd[1534]: time="2025-07-15T23:55:53.318889992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 221.353149ms" Jul 15 23:55:53.318942 containerd[1534]: time="2025-07-15T23:55:53.318944472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 15 23:55:53.322208 containerd[1534]: time="2025-07-15T23:55:53.321822055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 15 23:55:53.326240 containerd[1534]: time="2025-07-15T23:55:53.326114279Z" level=info msg="CreateContainer within sandbox \"92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 15 23:55:53.343435 containerd[1534]: time="2025-07-15T23:55:53.341607241Z" level=info msg="Container b65998b9cf13c1a07d2cacd1330fbdd4cbedee8424da514d9ee7dd6983d9b4a9: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:53.360663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553082455.mount: Deactivated successfully. 
Jul 15 23:55:53.370547 containerd[1534]: time="2025-07-15T23:55:53.370490420Z" level=info msg="CreateContainer within sandbox \"92aeb9f95e2464d28b570c1f5b9f4ac52af8a8fd14a61b8051f6a443d0a242bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b65998b9cf13c1a07d2cacd1330fbdd4cbedee8424da514d9ee7dd6983d9b4a9\"" Jul 15 23:55:53.373112 containerd[1534]: time="2025-07-15T23:55:53.373065943Z" level=info msg="StartContainer for \"b65998b9cf13c1a07d2cacd1330fbdd4cbedee8424da514d9ee7dd6983d9b4a9\"" Jul 15 23:55:53.375477 containerd[1534]: time="2025-07-15T23:55:53.375430500Z" level=info msg="connecting to shim b65998b9cf13c1a07d2cacd1330fbdd4cbedee8424da514d9ee7dd6983d9b4a9" address="unix:///run/containerd/s/31b6267df115d576ba2850cf2ed7ae8aaf86d807a5793304c3ddcabf949eb23b" protocol=ttrpc version=3 Jul 15 23:55:53.412619 systemd[1]: Started cri-containerd-b65998b9cf13c1a07d2cacd1330fbdd4cbedee8424da514d9ee7dd6983d9b4a9.scope - libcontainer container b65998b9cf13c1a07d2cacd1330fbdd4cbedee8424da514d9ee7dd6983d9b4a9. 
Jul 15 23:55:53.519777 containerd[1534]: time="2025-07-15T23:55:53.518510802Z" level=info msg="StartContainer for \"b65998b9cf13c1a07d2cacd1330fbdd4cbedee8424da514d9ee7dd6983d9b4a9\" returns successfully" Jul 15 23:55:53.888244 ntpd[1507]: Listen normally on 6 vxlan.calico 192.168.59.0:123 Jul 15 23:55:53.889038 ntpd[1507]: Listen normally on 7 calie487bf02349 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 15 23:55:53.889136 ntpd[1507]: Listen normally on 8 vxlan.calico [fe80::64be:7eff:fe65:91e2%5]:123 Jul 15 23:55:53.889196 ntpd[1507]: Listen normally on 9 cali9ce69a8fde1 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 15 23:55:53.889255 ntpd[1507]: Listen normally on 10 calic62b258820f [fe80::ecee:eeff:feee:eeee%9]:123 Jul 15 23:55:53.889329 ntpd[1507]: Listen normally on 11 calicdaf459d91a [fe80::ecee:eeff:feee:eeee%10]:123 Jul 15 23:55:53.889398 ntpd[1507]: Listen normally on 12 calia76c4bb6a91 [fe80::ecee:eeff:feee:eeee%11]:123 Jul 15 23:55:53.889458 ntpd[1507]: Listen normally on 13 cali70d47dcd10d [fe80::ecee:eeff:feee:eeee%12]:123 Jul 15 23:55:53.889511 ntpd[1507]: Listen normally on 14 calie0ea19c575e [fe80::ecee:eeff:feee:eeee%13]:123 Jul 15 23:55:53.889561 ntpd[1507]: Listen normally on 15 cali9bc73ec488f [fe80::ecee:eeff:feee:eeee%14]:123 
Jul 15 23:55:54.569675 kubelet[2733]: I0715 23:55:54.569379 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddbdf8fc-dj5r2" podStartSLOduration=33.260547304 podStartE2EDuration="39.569346797s" podCreationTimestamp="2025-07-15 23:55:15 +0000 UTC" firstStartedPulling="2025-07-15 23:55:46.787823539 +0000 UTC m=+49.908407773" lastFinishedPulling="2025-07-15 23:55:53.096623015 +0000 UTC m=+56.217207266" observedRunningTime="2025-07-15 23:55:53.543714503 +0000 UTC m=+56.664298756" watchObservedRunningTime="2025-07-15 23:55:54.569346797 +0000 UTC m=+57.689931044" Jul 15 23:55:54.613361 containerd[1534]: time="2025-07-15T23:55:54.611880595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:54.614942 containerd[1534]: time="2025-07-15T23:55:54.614853254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 15 23:55:54.616567 containerd[1534]: time="2025-07-15T23:55:54.616486074Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:54.619932 containerd[1534]: time="2025-07-15T23:55:54.619858502Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:54.623256 containerd[1534]: time="2025-07-15T23:55:54.622329534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.300445701s" Jul 15 23:55:54.623256 containerd[1534]: time="2025-07-15T23:55:54.622387812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 15 23:55:54.626410 containerd[1534]: time="2025-07-15T23:55:54.626044800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 15 23:55:54.631353 containerd[1534]: time="2025-07-15T23:55:54.630824474Z" level=info msg="CreateContainer within sandbox \"cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 15 23:55:54.651756 containerd[1534]: time="2025-07-15T23:55:54.651694963Z" level=info msg="Container e6ef6c227793cc0540e68fa31e76cacbddba6be3c26661609bafe086c5ca7ed6: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:54.675757 containerd[1534]: time="2025-07-15T23:55:54.675696834Z" level=info msg="CreateContainer within sandbox \"cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e6ef6c227793cc0540e68fa31e76cacbddba6be3c26661609bafe086c5ca7ed6\"" Jul 15 23:55:54.677350 containerd[1534]: time="2025-07-15T23:55:54.677117714Z" level=info msg="StartContainer for \"e6ef6c227793cc0540e68fa31e76cacbddba6be3c26661609bafe086c5ca7ed6\"" Jul 
15 23:55:54.680638 containerd[1534]: time="2025-07-15T23:55:54.680540343Z" level=info msg="connecting to shim e6ef6c227793cc0540e68fa31e76cacbddba6be3c26661609bafe086c5ca7ed6" address="unix:///run/containerd/s/d8bf8e3ac7524e25195b8926d297dbbe58a83be83f818e8837f1ae419553e464" protocol=ttrpc version=3 Jul 15 23:55:54.754595 systemd[1]: Started cri-containerd-e6ef6c227793cc0540e68fa31e76cacbddba6be3c26661609bafe086c5ca7ed6.scope - libcontainer container e6ef6c227793cc0540e68fa31e76cacbddba6be3c26661609bafe086c5ca7ed6. Jul 15 23:55:54.915491 containerd[1534]: time="2025-07-15T23:55:54.915427637Z" level=info msg="StartContainer for \"e6ef6c227793cc0540e68fa31e76cacbddba6be3c26661609bafe086c5ca7ed6\" returns successfully" Jul 15 23:55:55.637371 kubelet[2733]: I0715 23:55:55.635467 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:55:55.637371 kubelet[2733]: I0715 23:55:55.636539 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:55:57.099844 kubelet[2733]: I0715 23:55:57.097364 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddbdf8fc-5k246" podStartSLOduration=37.054984904 podStartE2EDuration="42.096176277s" podCreationTimestamp="2025-07-15 23:55:15 +0000 UTC" firstStartedPulling="2025-07-15 23:55:48.279500059 +0000 UTC m=+51.400084296" lastFinishedPulling="2025-07-15 23:55:53.320691419 +0000 UTC m=+56.441275669" observedRunningTime="2025-07-15 23:55:54.571749303 +0000 UTC m=+57.692333553" watchObservedRunningTime="2025-07-15 23:55:57.096176277 +0000 UTC m=+60.216760526" Jul 15 23:55:57.677851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308334128.mount: Deactivated successfully. 
Jul 15 23:55:59.103378 containerd[1534]: time="2025-07-15T23:55:59.103306547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:59.106034 containerd[1534]: time="2025-07-15T23:55:59.105947476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 15 23:55:59.108121 containerd[1534]: time="2025-07-15T23:55:59.108072277Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:59.114660 containerd[1534]: time="2025-07-15T23:55:59.114572535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:55:59.116103 containerd[1534]: time="2025-07-15T23:55:59.115595376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.487412735s" Jul 15 23:55:59.116103 containerd[1534]: time="2025-07-15T23:55:59.115649910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 15 23:55:59.119459 containerd[1534]: time="2025-07-15T23:55:59.119014426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 15 23:55:59.123677 containerd[1534]: time="2025-07-15T23:55:59.123404336Z" level=info msg="CreateContainer within sandbox 
\"b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 15 23:55:59.139212 containerd[1534]: time="2025-07-15T23:55:59.139137529Z" level=info msg="Container b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:55:59.160035 containerd[1534]: time="2025-07-15T23:55:59.159851274Z" level=info msg="CreateContainer within sandbox \"b65606f1aba3754d102db01ca20b8aabed5461de84d3fb2035274349ebc18331\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\"" Jul 15 23:55:59.161230 containerd[1534]: time="2025-07-15T23:55:59.161147505Z" level=info msg="StartContainer for \"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\"" Jul 15 23:55:59.164288 containerd[1534]: time="2025-07-15T23:55:59.164229299Z" level=info msg="connecting to shim b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71" address="unix:///run/containerd/s/bea11774eb1e8ea6d20b7073686ae619cc11ebf7fd64c5651dd277977a3d0f27" protocol=ttrpc version=3 Jul 15 23:55:59.216722 systemd[1]: Started cri-containerd-b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71.scope - libcontainer container b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71. 
Jul 15 23:55:59.348972 containerd[1534]: time="2025-07-15T23:55:59.348908318Z" level=info msg="StartContainer for \"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\" returns successfully" Jul 15 23:55:59.820703 containerd[1534]: time="2025-07-15T23:55:59.820647248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\" id:\"bd91c97a43cbbfad94cfb3c979063db3dde1f66598947947d6cad8df97cc3416\" pid:5068 exit_status:1 exited_at:{seconds:1752623759 nanos:820178636}" Jul 15 23:56:00.492244 containerd[1534]: time="2025-07-15T23:56:00.492156496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:56:00.493668 containerd[1534]: time="2025-07-15T23:56:00.493616055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 15 23:56:00.496305 containerd[1534]: time="2025-07-15T23:56:00.496253461Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:56:00.499476 containerd[1534]: time="2025-07-15T23:56:00.499369738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:56:00.500497 containerd[1534]: time="2025-07-15T23:56:00.500399248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.381338023s" Jul 15 23:56:00.500497 containerd[1534]: time="2025-07-15T23:56:00.500469713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 15 23:56:00.506267 containerd[1534]: time="2025-07-15T23:56:00.506195383Z" level=info msg="CreateContainer within sandbox \"cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 15 23:56:00.520357 containerd[1534]: time="2025-07-15T23:56:00.519617628Z" level=info msg="Container 228842ac96139ba9b3b078d2e3c80de4c771b00baa2ed92370530d928a103a93: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:56:00.533641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646318780.mount: Deactivated successfully. 
Jul 15 23:56:00.540500 containerd[1534]: time="2025-07-15T23:56:00.540437415Z" level=info msg="CreateContainer within sandbox \"cc0af99eb9e94a9d37c5543d9a2821bfa9047dd7bc3b45b0d78ce052e228ad81\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"228842ac96139ba9b3b078d2e3c80de4c771b00baa2ed92370530d928a103a93\"" Jul 15 23:56:00.541674 containerd[1534]: time="2025-07-15T23:56:00.541534591Z" level=info msg="StartContainer for \"228842ac96139ba9b3b078d2e3c80de4c771b00baa2ed92370530d928a103a93\"" Jul 15 23:56:00.545411 containerd[1534]: time="2025-07-15T23:56:00.545360017Z" level=info msg="connecting to shim 228842ac96139ba9b3b078d2e3c80de4c771b00baa2ed92370530d928a103a93" address="unix:///run/containerd/s/d8bf8e3ac7524e25195b8926d297dbbe58a83be83f818e8837f1ae419553e464" protocol=ttrpc version=3 Jul 15 23:56:00.586805 systemd[1]: Started cri-containerd-228842ac96139ba9b3b078d2e3c80de4c771b00baa2ed92370530d928a103a93.scope - libcontainer container 228842ac96139ba9b3b078d2e3c80de4c771b00baa2ed92370530d928a103a93. 
Jul 15 23:56:00.659132 containerd[1534]: time="2025-07-15T23:56:00.659050406Z" level=info msg="StartContainer for \"228842ac96139ba9b3b078d2e3c80de4c771b00baa2ed92370530d928a103a93\" returns successfully" Jul 15 23:56:00.704672 kubelet[2733]: I0715 23:56:00.704451 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-vbx59" podStartSLOduration=31.325663321 podStartE2EDuration="40.704279032s" podCreationTimestamp="2025-07-15 23:55:20 +0000 UTC" firstStartedPulling="2025-07-15 23:55:49.739892483 +0000 UTC m=+52.860476723" lastFinishedPulling="2025-07-15 23:55:59.118508184 +0000 UTC m=+62.239092434" observedRunningTime="2025-07-15 23:55:59.690654666 +0000 UTC m=+62.811238924" watchObservedRunningTime="2025-07-15 23:56:00.704279032 +0000 UTC m=+63.824863285" Jul 15 23:56:00.705428 kubelet[2733]: I0715 23:56:00.705196 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-p4nkh" podStartSLOduration=28.54094406 podStartE2EDuration="40.705177422s" podCreationTimestamp="2025-07-15 23:55:20 +0000 UTC" firstStartedPulling="2025-07-15 23:55:48.338059503 +0000 UTC m=+51.458643739" lastFinishedPulling="2025-07-15 23:56:00.502292858 +0000 UTC m=+63.622877101" observedRunningTime="2025-07-15 23:56:00.699748234 +0000 UTC m=+63.820332489" watchObservedRunningTime="2025-07-15 23:56:00.705177422 +0000 UTC m=+63.825761675" Jul 15 23:56:00.808385 containerd[1534]: time="2025-07-15T23:56:00.807252869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\" id:\"8987bcdd1469801387ff3aea557ff6ea6a18ee1c9975f48836820071c50ad3bf\" pid:5126 exited_at:{seconds:1752623760 nanos:806536884}" Jul 15 23:56:01.239208 kubelet[2733]: I0715 23:56:01.238670 2733 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 15 23:56:01.239208 kubelet[2733]: I0715 23:56:01.238717 2733 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 15 23:56:02.074285 containerd[1534]: time="2025-07-15T23:56:02.074222902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\" id:\"089c74028009a261be8ab78368d62f648dd901f071d12e7f30e07a8a4cd36c3c\" pid:5152 exited_at:{seconds:1752623762 nanos:73625360}" Jul 15 23:56:05.967081 systemd[1]: Started sshd@7-10.128.0.4:22-139.178.89.65:57060.service - OpenSSH per-connection server daemon (139.178.89.65:57060). Jul 15 23:56:06.281716 sshd[5176]: Accepted publickey for core from 139.178.89.65 port 57060 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:56:06.284040 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:56:06.292500 systemd-logind[1512]: New session 8 of user core. Jul 15 23:56:06.296761 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 23:56:06.607930 sshd[5178]: Connection closed by 139.178.89.65 port 57060 Jul 15 23:56:06.608878 sshd-session[5176]: pam_unix(sshd:session): session closed for user core Jul 15 23:56:06.620922 systemd-logind[1512]: Session 8 logged out. Waiting for processes to exit. Jul 15 23:56:06.622273 systemd[1]: sshd@7-10.128.0.4:22-139.178.89.65:57060.service: Deactivated successfully. Jul 15 23:56:06.627709 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 23:56:06.632217 systemd-logind[1512]: Removed session 8. 
Jul 15 23:56:07.048999 containerd[1534]: time="2025-07-15T23:56:07.048787121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80\" id:\"d38d83c733e09d420699f145bb54542283ff3d2c2c421cfa27110f4073f5d499\" pid:5202 exited_at:{seconds:1752623767 nanos:47851269}" Jul 15 23:56:08.442163 kubelet[2733]: I0715 23:56:08.441990 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 23:56:11.524819 containerd[1534]: time="2025-07-15T23:56:11.524764815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36\" id:\"5569bb221c371ea2382ac0de704f70204bac4dcfef0fa316bb675cdf21a88ed9\" pid:5228 exited_at:{seconds:1752623771 nanos:524435377}" Jul 15 23:56:11.669630 systemd[1]: Started sshd@8-10.128.0.4:22-139.178.89.65:52634.service - OpenSSH per-connection server daemon (139.178.89.65:52634). Jul 15 23:56:11.990946 sshd[5240]: Accepted publickey for core from 139.178.89.65 port 52634 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU Jul 15 23:56:11.993013 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:56:12.001041 systemd-logind[1512]: New session 9 of user core. Jul 15 23:56:12.009642 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 23:56:12.411717 sshd[5242]: Connection closed by 139.178.89.65 port 52634 Jul 15 23:56:12.414838 sshd-session[5240]: pam_unix(sshd:session): session closed for user core Jul 15 23:56:12.424740 systemd[1]: sshd@8-10.128.0.4:22-139.178.89.65:52634.service: Deactivated successfully. Jul 15 23:56:12.427663 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 23:56:12.429080 systemd-logind[1512]: Session 9 logged out. Waiting for processes to exit. Jul 15 23:56:12.432302 systemd-logind[1512]: Removed session 9. 
Jul 15 23:56:17.468123 systemd[1]: Started sshd@9-10.128.0.4:22-139.178.89.65:52644.service - OpenSSH per-connection server daemon (139.178.89.65:52644).
Jul 15 23:56:17.782122 sshd[5256]: Accepted publickey for core from 139.178.89.65 port 52644 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:17.784614 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:17.796389 systemd-logind[1512]: New session 10 of user core.
Jul 15 23:56:17.801754 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 23:56:18.136416 sshd[5258]: Connection closed by 139.178.89.65 port 52644
Jul 15 23:56:18.137534 sshd-session[5256]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:18.144417 systemd[1]: sshd@9-10.128.0.4:22-139.178.89.65:52644.service: Deactivated successfully.
Jul 15 23:56:18.148095 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 23:56:18.150386 systemd-logind[1512]: Session 10 logged out. Waiting for processes to exit.
Jul 15 23:56:18.153260 systemd-logind[1512]: Removed session 10.
Jul 15 23:56:18.194696 systemd[1]: Started sshd@10-10.128.0.4:22-139.178.89.65:52658.service - OpenSSH per-connection server daemon (139.178.89.65:52658).
Jul 15 23:56:18.505087 sshd[5271]: Accepted publickey for core from 139.178.89.65 port 52658 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:18.507932 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:18.516650 systemd-logind[1512]: New session 11 of user core.
Jul 15 23:56:18.523775 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 23:56:18.894047 sshd[5273]: Connection closed by 139.178.89.65 port 52658
Jul 15 23:56:18.892812 sshd-session[5271]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:18.903092 systemd[1]: sshd@10-10.128.0.4:22-139.178.89.65:52658.service: Deactivated successfully.
Jul 15 23:56:18.907672 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 23:56:18.913101 systemd-logind[1512]: Session 11 logged out. Waiting for processes to exit.
Jul 15 23:56:18.917217 systemd-logind[1512]: Removed session 11.
Jul 15 23:56:18.945890 systemd[1]: Started sshd@11-10.128.0.4:22-139.178.89.65:47822.service - OpenSSH per-connection server daemon (139.178.89.65:47822).
Jul 15 23:56:19.257070 sshd[5283]: Accepted publickey for core from 139.178.89.65 port 47822 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:19.259216 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:19.266821 systemd-logind[1512]: New session 12 of user core.
Jul 15 23:56:19.277676 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 23:56:19.633213 sshd[5285]: Connection closed by 139.178.89.65 port 47822
Jul 15 23:56:19.634185 sshd-session[5283]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:19.640655 systemd[1]: sshd@11-10.128.0.4:22-139.178.89.65:47822.service: Deactivated successfully.
Jul 15 23:56:19.644541 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 23:56:19.647945 systemd-logind[1512]: Session 12 logged out. Waiting for processes to exit.
Jul 15 23:56:19.652531 systemd-logind[1512]: Removed session 12.
Jul 15 23:56:21.583336 containerd[1534]: time="2025-07-15T23:56:21.583249725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80\" id:\"2d4ebf4a512e49b4e26f9df4f54a0f121994573b67962b17e7e60f397a792ab0\" pid:5307 exited_at:{seconds:1752623781 nanos:582468586}"
Jul 15 23:56:24.694402 systemd[1]: Started sshd@12-10.128.0.4:22-139.178.89.65:47828.service - OpenSSH per-connection server daemon (139.178.89.65:47828).
Jul 15 23:56:25.009206 sshd[5327]: Accepted publickey for core from 139.178.89.65 port 47828 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:25.011230 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:25.019270 systemd-logind[1512]: New session 13 of user core.
Jul 15 23:56:25.028722 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 23:56:25.325125 sshd[5329]: Connection closed by 139.178.89.65 port 47828
Jul 15 23:56:25.326173 sshd-session[5327]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:25.332980 systemd[1]: sshd@12-10.128.0.4:22-139.178.89.65:47828.service: Deactivated successfully.
Jul 15 23:56:25.337069 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 23:56:25.339396 systemd-logind[1512]: Session 13 logged out. Waiting for processes to exit.
Jul 15 23:56:25.341850 systemd-logind[1512]: Removed session 13.
Jul 15 23:56:30.386545 systemd[1]: Started sshd@13-10.128.0.4:22-139.178.89.65:35506.service - OpenSSH per-connection server daemon (139.178.89.65:35506).
Jul 15 23:56:30.697594 sshd[5343]: Accepted publickey for core from 139.178.89.65 port 35506 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:30.701176 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:30.711832 systemd-logind[1512]: New session 14 of user core.
Jul 15 23:56:30.717364 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 23:56:30.830949 containerd[1534]: time="2025-07-15T23:56:30.830849199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\" id:\"237e49b63344f960ed0fcb2e65b4e3f9cce0af16ff70b76af0b7c68157c2d139\" pid:5356 exited_at:{seconds:1752623790 nanos:830455454}"
Jul 15 23:56:31.030349 sshd[5363]: Connection closed by 139.178.89.65 port 35506
Jul 15 23:56:31.031798 sshd-session[5343]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:31.038128 systemd[1]: sshd@13-10.128.0.4:22-139.178.89.65:35506.service: Deactivated successfully.
Jul 15 23:56:31.042168 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 23:56:31.043837 systemd-logind[1512]: Session 14 logged out. Waiting for processes to exit.
Jul 15 23:56:31.046195 systemd-logind[1512]: Removed session 14.
Jul 15 23:56:36.086404 systemd[1]: Started sshd@14-10.128.0.4:22-139.178.89.65:35522.service - OpenSSH per-connection server daemon (139.178.89.65:35522).
Jul 15 23:56:36.393295 sshd[5381]: Accepted publickey for core from 139.178.89.65 port 35522 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:36.395452 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:36.403142 systemd-logind[1512]: New session 15 of user core.
Jul 15 23:56:36.410510 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 23:56:36.725279 sshd[5383]: Connection closed by 139.178.89.65 port 35522
Jul 15 23:56:36.725812 sshd-session[5381]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:36.732365 systemd[1]: sshd@14-10.128.0.4:22-139.178.89.65:35522.service: Deactivated successfully.
Jul 15 23:56:36.735711 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 23:56:36.738384 systemd-logind[1512]: Session 15 logged out. Waiting for processes to exit.
Jul 15 23:56:36.740379 systemd-logind[1512]: Removed session 15.
Jul 15 23:56:41.457064 containerd[1534]: time="2025-07-15T23:56:41.456928674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f37d96de717a7a2461fec7016b23e2dccba6e2028fb2746081e2e44c1f7aa36\" id:\"72c93cd93065a9e97efdc5a4c5bde90a91b3dde1b201c6f54fad0b0ffe1ac163\" pid:5408 exited_at:{seconds:1752623801 nanos:456549192}"
Jul 15 23:56:41.778031 systemd[1]: Started sshd@15-10.128.0.4:22-139.178.89.65:37588.service - OpenSSH per-connection server daemon (139.178.89.65:37588).
Jul 15 23:56:42.087104 sshd[5420]: Accepted publickey for core from 139.178.89.65 port 37588 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:42.089655 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:42.098298 systemd-logind[1512]: New session 16 of user core.
Jul 15 23:56:42.105599 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 23:56:42.384060 sshd[5422]: Connection closed by 139.178.89.65 port 37588
Jul 15 23:56:42.385723 sshd-session[5420]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:42.391814 systemd[1]: sshd@15-10.128.0.4:22-139.178.89.65:37588.service: Deactivated successfully.
Jul 15 23:56:42.395917 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 23:56:42.398188 systemd-logind[1512]: Session 16 logged out. Waiting for processes to exit.
Jul 15 23:56:42.401563 systemd-logind[1512]: Removed session 16.
Jul 15 23:56:42.438190 systemd[1]: Started sshd@16-10.128.0.4:22-139.178.89.65:37594.service - OpenSSH per-connection server daemon (139.178.89.65:37594).
Jul 15 23:56:42.745494 sshd[5434]: Accepted publickey for core from 139.178.89.65 port 37594 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:42.747184 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:42.755352 systemd-logind[1512]: New session 17 of user core.
Jul 15 23:56:42.760571 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 23:56:43.118551 sshd[5436]: Connection closed by 139.178.89.65 port 37594
Jul 15 23:56:43.119545 sshd-session[5434]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:43.126078 systemd[1]: sshd@16-10.128.0.4:22-139.178.89.65:37594.service: Deactivated successfully.
Jul 15 23:56:43.130633 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 23:56:43.132417 systemd-logind[1512]: Session 17 logged out. Waiting for processes to exit.
Jul 15 23:56:43.135165 systemd-logind[1512]: Removed session 17.
Jul 15 23:56:43.184543 systemd[1]: Started sshd@17-10.128.0.4:22-139.178.89.65:37596.service - OpenSSH per-connection server daemon (139.178.89.65:37596).
Jul 15 23:56:43.494514 sshd[5446]: Accepted publickey for core from 139.178.89.65 port 37596 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:43.496940 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:43.505404 systemd-logind[1512]: New session 18 of user core.
Jul 15 23:56:43.509642 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 23:56:44.419554 sshd[5448]: Connection closed by 139.178.89.65 port 37596
Jul 15 23:56:44.420792 sshd-session[5446]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:44.429833 systemd[1]: sshd@17-10.128.0.4:22-139.178.89.65:37596.service: Deactivated successfully.
Jul 15 23:56:44.438758 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 23:56:44.444107 systemd-logind[1512]: Session 18 logged out. Waiting for processes to exit.
Jul 15 23:56:44.446847 systemd-logind[1512]: Removed session 18.
Jul 15 23:56:44.476255 systemd[1]: Started sshd@18-10.128.0.4:22-139.178.89.65:37602.service - OpenSSH per-connection server daemon (139.178.89.65:37602).
Jul 15 23:56:44.782405 sshd[5467]: Accepted publickey for core from 139.178.89.65 port 37602 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:44.784752 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:44.793792 systemd-logind[1512]: New session 19 of user core.
Jul 15 23:56:44.803772 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 23:56:45.234765 sshd[5469]: Connection closed by 139.178.89.65 port 37602
Jul 15 23:56:45.236254 sshd-session[5467]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:45.246585 systemd[1]: sshd@18-10.128.0.4:22-139.178.89.65:37602.service: Deactivated successfully.
Jul 15 23:56:45.251292 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 23:56:45.253962 systemd-logind[1512]: Session 19 logged out. Waiting for processes to exit.
Jul 15 23:56:45.256596 systemd-logind[1512]: Removed session 19.
Jul 15 23:56:45.294104 systemd[1]: Started sshd@19-10.128.0.4:22-139.178.89.65:37614.service - OpenSSH per-connection server daemon (139.178.89.65:37614).
Jul 15 23:56:45.608253 sshd[5478]: Accepted publickey for core from 139.178.89.65 port 37614 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:45.610680 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:45.618966 systemd-logind[1512]: New session 20 of user core.
Jul 15 23:56:45.625644 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:56:45.912768 sshd[5480]: Connection closed by 139.178.89.65 port 37614
Jul 15 23:56:45.914095 sshd-session[5478]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:45.920191 systemd-logind[1512]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:56:45.920937 systemd[1]: sshd@19-10.128.0.4:22-139.178.89.65:37614.service: Deactivated successfully.
Jul 15 23:56:45.925727 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:56:45.930173 systemd-logind[1512]: Removed session 20.
Jul 15 23:56:50.970545 systemd[1]: Started sshd@20-10.128.0.4:22-139.178.89.65:35792.service - OpenSSH per-connection server daemon (139.178.89.65:35792).
Jul 15 23:56:51.301374 sshd[5492]: Accepted publickey for core from 139.178.89.65 port 35792 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:51.303468 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:51.311568 systemd-logind[1512]: New session 21 of user core.
Jul 15 23:56:51.318592 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:56:51.586719 containerd[1534]: time="2025-07-15T23:56:51.586567455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9149e8538e849b28aea044dc77cb465941966ef92d596db56d73ce717054df80\" id:\"052d511e272417f303c79a35dcca5f5e9169c3a535a432fd515765f1b19a09ef\" pid:5517 exited_at:{seconds:1752623811 nanos:585963170}"
Jul 15 23:56:51.633611 sshd[5496]: Connection closed by 139.178.89.65 port 35792
Jul 15 23:56:51.634645 sshd-session[5492]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:51.641397 systemd[1]: sshd@20-10.128.0.4:22-139.178.89.65:35792.service: Deactivated successfully.
Jul 15 23:56:51.645276 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:56:51.646734 systemd-logind[1512]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:56:51.649832 systemd-logind[1512]: Removed session 21.
Jul 15 23:56:56.689375 systemd[1]: Started sshd@21-10.128.0.4:22-139.178.89.65:35806.service - OpenSSH per-connection server daemon (139.178.89.65:35806).
Jul 15 23:56:57.000662 sshd[5531]: Accepted publickey for core from 139.178.89.65 port 35806 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:56:57.002260 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:56:57.011786 systemd-logind[1512]: New session 22 of user core.
Jul 15 23:56:57.017589 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:56:57.347015 sshd[5533]: Connection closed by 139.178.89.65 port 35806
Jul 15 23:56:57.347723 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Jul 15 23:56:57.353215 systemd[1]: sshd@21-10.128.0.4:22-139.178.89.65:35806.service: Deactivated successfully.
Jul 15 23:56:57.356638 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:56:57.359904 systemd-logind[1512]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:56:57.361964 systemd-logind[1512]: Removed session 22.
Jul 15 23:57:00.860187 containerd[1534]: time="2025-07-15T23:57:00.860103256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\" id:\"e7d46de779303b16e2bad1b5314339853beac282e2e540fd2e57250e2922ab99\" pid:5559 exited_at:{seconds:1752623820 nanos:858841113}"
Jul 15 23:57:02.096229 containerd[1534]: time="2025-07-15T23:57:02.096159894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5dad8e37e5e4274d49a8809525f282663d09e48a156589b75325b8a6c23fd71\" id:\"42fe71f159c591ee53c86af2db94d486d8960fbd848da3c22575fe7c2fee84d6\" pid:5580 exited_at:{seconds:1752623822 nanos:95049336}"
Jul 15 23:57:02.408520 systemd[1]: Started sshd@22-10.128.0.4:22-139.178.89.65:38988.service - OpenSSH per-connection server daemon (139.178.89.65:38988).
Jul 15 23:57:02.747832 sshd[5591]: Accepted publickey for core from 139.178.89.65 port 38988 ssh2: RSA SHA256:zCIIJYjxbL8whX73/aYi08rl7llnLzYVvV4lTzbvFXU
Jul 15 23:57:02.750264 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:57:02.761596 systemd-logind[1512]: New session 23 of user core.
Jul 15 23:57:02.766685 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 23:57:03.099346 sshd[5593]: Connection closed by 139.178.89.65 port 38988
Jul 15 23:57:03.100405 sshd-session[5591]: pam_unix(sshd:session): session closed for user core
Jul 15 23:57:03.109251 systemd[1]: sshd@22-10.128.0.4:22-139.178.89.65:38988.service: Deactivated successfully.
Jul 15 23:57:03.114486 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 23:57:03.117352 systemd-logind[1512]: Session 23 logged out. Waiting for processes to exit.
Jul 15 23:57:03.121492 systemd-logind[1512]: Removed session 23.