Jul 7 00:16:16.186283 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025 Jul 7 00:16:16.186334 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:16:16.186352 kernel: BIOS-provided physical RAM map: Jul 7 00:16:16.186367 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jul 7 00:16:16.186384 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jul 7 00:16:16.186398 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jul 7 00:16:16.186419 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jul 7 00:16:16.186434 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jul 7 00:16:16.186449 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable Jul 7 00:16:16.186467 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data Jul 7 00:16:16.186482 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable Jul 7 00:16:16.186497 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jul 7 00:16:16.186511 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jul 7 00:16:16.186535 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jul 7 00:16:16.186557 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jul 7 00:16:16.186574 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jul 7 00:16:16.186590 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jul 7 00:16:16.186604 kernel: NX (Execute Disable) protection: active Jul 7 00:16:16.186621 kernel: APIC: Static calls initialized Jul 7 00:16:16.187690 kernel: efi: EFI v2.7 by EDK II Jul 7 00:16:16.187708 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018 Jul 7 00:16:16.187723 kernel: random: crng init done Jul 7 00:16:16.187745 kernel: secureboot: Secure boot disabled Jul 7 00:16:16.187760 kernel: SMBIOS 2.4 present. 
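Annotation (not part of the log): the BIOS-e820 lines above are the firmware memory map handed to the kernel, one inclusive physical-address range per entry. Below is a minimal Python sketch of how such lines can be tallied; the regex and the function name are my own illustration, not anything Flatcar or the kernel ships.

import re

# Matches entries such as:
#   BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
E820_RE = re.compile(
    r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (usable|reserved|ACPI data|ACPI NVS)"
)

def usable_bytes(log_lines):
    """Sum the sizes of every range the firmware marked 'usable' (ranges are inclusive)."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

# For example, the top range 0x100000000-0x21fffffff alone is
# 0x120000000 bytes = 4.5 GiB of usable RAM above the 4 GiB boundary.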
Jul 7 00:16:16.187776 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Jul 7 00:16:16.187790 kernel: DMI: Memory slots populated: 1/1 Jul 7 00:16:16.187814 kernel: Hypervisor detected: KVM Jul 7 00:16:16.187830 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 00:16:16.187852 kernel: kvm-clock: using sched offset of 14947353541 cycles Jul 7 00:16:16.187871 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 00:16:16.187886 kernel: tsc: Detected 2299.998 MHz processor Jul 7 00:16:16.187902 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 00:16:16.187923 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 00:16:16.187937 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jul 7 00:16:16.187951 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jul 7 00:16:16.187967 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 00:16:16.187982 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jul 7 00:16:16.187997 kernel: Using GB pages for direct mapping Jul 7 00:16:16.188013 kernel: ACPI: Early table checksum verification disabled Jul 7 00:16:16.188027 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jul 7 00:16:16.188053 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jul 7 00:16:16.188070 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jul 7 00:16:16.188087 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jul 7 00:16:16.188104 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jul 7 00:16:16.188122 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Jul 7 00:16:16.188139 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jul 7 00:16:16.188160 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jul 7 00:16:16.188177 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jul 7 00:16:16.188195 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jul 7 00:16:16.188212 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jul 7 00:16:16.188230 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jul 7 00:16:16.188246 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jul 7 00:16:16.188264 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jul 7 00:16:16.188281 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jul 7 00:16:16.188298 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jul 7 00:16:16.188319 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jul 7 00:16:16.188337 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jul 7 00:16:16.188355 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jul 7 00:16:16.188372 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jul 7 00:16:16.188388 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 7 00:16:16.188406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jul 7 00:16:16.188424 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jul 7 00:16:16.188441 kernel: NUMA: Node 0 [mem 
0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Jul 7 00:16:16.188459 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Jul 7 00:16:16.188481 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Jul 7 00:16:16.188497 kernel: Zone ranges: Jul 7 00:16:16.188514 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 00:16:16.188529 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 7 00:16:16.188546 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jul 7 00:16:16.188562 kernel: Device empty Jul 7 00:16:16.188579 kernel: Movable zone start for each node Jul 7 00:16:16.188595 kernel: Early memory node ranges Jul 7 00:16:16.188613 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jul 7 00:16:16.189192 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jul 7 00:16:16.189220 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff] Jul 7 00:16:16.189237 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff] Jul 7 00:16:16.189254 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jul 7 00:16:16.189272 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jul 7 00:16:16.189289 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jul 7 00:16:16.189311 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 00:16:16.189329 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jul 7 00:16:16.189346 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jul 7 00:16:16.189361 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Jul 7 00:16:16.189387 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 7 00:16:16.189403 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jul 7 00:16:16.189419 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 7 00:16:16.189435 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 00:16:16.189455 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 00:16:16.189474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 00:16:16.189497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 00:16:16.189514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 00:16:16.189529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 00:16:16.189549 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 00:16:16.189565 kernel: CPU topo: Max. logical packages: 1 Jul 7 00:16:16.189581 kernel: CPU topo: Max. logical dies: 1 Jul 7 00:16:16.189597 kernel: CPU topo: Max. dies per package: 1 Jul 7 00:16:16.189613 kernel: CPU topo: Max. threads per core: 2 Jul 7 00:16:16.189659 kernel: CPU topo: Num. cores per package: 1 Jul 7 00:16:16.189676 kernel: CPU topo: Num. 
threads per package: 2 Jul 7 00:16:16.189693 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 7 00:16:16.189711 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 7 00:16:16.189733 kernel: Booting paravirtualized kernel on KVM Jul 7 00:16:16.189751 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 00:16:16.189769 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 7 00:16:16.189786 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 7 00:16:16.189803 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 7 00:16:16.189819 kernel: pcpu-alloc: [0] 0 1 Jul 7 00:16:16.189834 kernel: kvm-guest: PV spinlocks enabled Jul 7 00:16:16.189857 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 00:16:16.189877 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:16:16.189899 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 00:16:16.189916 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 7 00:16:16.189934 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 00:16:16.189952 kernel: Fallback order for Node 0: 0 Jul 7 00:16:16.189969 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Jul 7 00:16:16.189984 kernel: Policy zone: Normal Jul 7 00:16:16.189998 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:16:16.190015 kernel: software IO TLB: area num 2. Jul 7 00:16:16.190049 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 00:16:16.190064 kernel: Kernel/User page tables isolation: enabled Jul 7 00:16:16.190083 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 00:16:16.190104 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 00:16:16.190123 kernel: Dynamic Preempt: voluntary Jul 7 00:16:16.190142 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 00:16:16.190166 kernel: rcu: RCU event tracing is enabled. Jul 7 00:16:16.190186 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 00:16:16.190208 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:16:16.190227 kernel: Rude variant of Tasks RCU enabled. Jul 7 00:16:16.190245 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:16:16.190261 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 00:16:16.190278 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 00:16:16.190298 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:16:16.190315 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:16:16.190334 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 00:16:16.190352 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 7 00:16:16.190374 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 7 00:16:16.190410 kernel: Console: colour dummy device 80x25 Jul 7 00:16:16.190430 kernel: printk: legacy console [ttyS0] enabled Jul 7 00:16:16.190448 kernel: ACPI: Core revision 20240827 Jul 7 00:16:16.190467 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 00:16:16.190485 kernel: x2apic enabled Jul 7 00:16:16.190503 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 00:16:16.190522 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jul 7 00:16:16.190540 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 7 00:16:16.190563 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jul 7 00:16:16.190583 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jul 7 00:16:16.190601 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jul 7 00:16:16.190618 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 00:16:16.190688 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 7 00:16:16.190707 kernel: Spectre V2 : Mitigation: IBRS Jul 7 00:16:16.190725 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 00:16:16.190743 kernel: RETBleed: Mitigation: IBRS Jul 7 00:16:16.190762 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 00:16:16.190791 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jul 7 00:16:16.190810 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 00:16:16.190828 kernel: MDS: Mitigation: Clear CPU buffers Jul 7 00:16:16.190858 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 00:16:16.190877 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 00:16:16.190898 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 00:16:16.190917 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 00:16:16.190936 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 00:16:16.190963 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 00:16:16.190982 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 7 00:16:16.191005 kernel: Freeing SMP alternatives memory: 32K Jul 7 00:16:16.191023 kernel: pid_max: default: 32768 minimum: 301 Jul 7 00:16:16.191047 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 00:16:16.191065 kernel: landlock: Up and running. Jul 7 00:16:16.191083 kernel: SELinux: Initializing. Jul 7 00:16:16.191101 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 00:16:16.191119 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 00:16:16.191142 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jul 7 00:16:16.191161 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jul 7 00:16:16.191179 kernel: signal: max sigframe size: 1776 Jul 7 00:16:16.191197 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:16:16.191216 kernel: rcu: Max phase no-delay instances is 400. 
Jul 7 00:16:16.191235 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 7 00:16:16.191253 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 00:16:16.191271 kernel: smp: Bringing up secondary CPUs ... Jul 7 00:16:16.191290 kernel: smpboot: x86: Booting SMP configuration: Jul 7 00:16:16.191313 kernel: .... node #0, CPUs: #1 Jul 7 00:16:16.191332 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 7 00:16:16.191352 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 7 00:16:16.191369 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 00:16:16.191387 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 7 00:16:16.191406 kernel: Memory: 7564012K/7860552K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 290708K reserved, 0K cma-reserved) Jul 7 00:16:16.191424 kernel: devtmpfs: initialized Jul 7 00:16:16.191442 kernel: x86/mm: Memory block size: 128MB Jul 7 00:16:16.191461 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jul 7 00:16:16.191484 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:16:16.191502 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 00:16:16.191520 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:16:16.191538 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:16:16.191556 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:16:16.191574 kernel: audit: type=2000 audit(1751847371.712:1): state=initialized audit_enabled=0 res=1 Jul 7 00:16:16.191592 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:16:16.191610 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 00:16:16.191657 kernel: cpuidle: using governor menu Jul 7 00:16:16.191676 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:16:16.191693 kernel: dca service started, version 1.12.1 Jul 7 00:16:16.191709 kernel: PCI: Using configuration type 1 for base access Jul 7 00:16:16.191726 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
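Annotation: the audit record above carries a raw Unix timestamp, audit(1751847371.712:1), while the journal prefixes show wall-clock time after the clock has been set. A quick self-contained check that it lines up with the rtc_cmos message further down ("setting system clock to 2025-07-07T00:16:15 UTC (1751847375)"); both epoch values are copied from the log, the conversion is the only thing added:

from datetime import datetime, timezone

print(datetime.fromtimestamp(1751847371.712, tz=timezone.utc).isoformat())
# -> 2025-07-07T00:16:11.712000+00:00   (the audit(...) record above)
print(datetime.fromtimestamp(1751847375, tz=timezone.utc).isoformat())
# -> 2025-07-07T00:16:15+00:00          (the rtc_cmos line later in the log)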
Jul 7 00:16:16.191743 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:16:16.191761 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:16:16.191777 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:16:16.191794 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:16:16.191817 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:16:16.191835 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:16:16.191859 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:16:16.191877 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 7 00:16:16.191894 kernel: ACPI: Interpreter enabled Jul 7 00:16:16.191911 kernel: ACPI: PM: (supports S0 S3 S5) Jul 7 00:16:16.191929 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 00:16:16.191946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 00:16:16.191963 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 7 00:16:16.191985 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 7 00:16:16.192003 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 00:16:16.192306 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:16:16.192525 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 7 00:16:16.193730 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 7 00:16:16.193763 kernel: PCI host bridge to bus 0000:00 Jul 7 00:16:16.193980 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 00:16:16.194160 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 00:16:16.194320 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 00:16:16.194485 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jul 7 00:16:16.194662 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 00:16:16.194876 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:16:16.195073 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jul 7 00:16:16.195265 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 7 00:16:16.195461 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 7 00:16:16.197657 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Jul 7 00:16:16.197894 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jul 7 00:16:16.198101 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Jul 7 00:16:16.198298 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 7 00:16:16.198490 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Jul 7 00:16:16.200764 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Jul 7 00:16:16.201002 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 7 00:16:16.201198 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Jul 7 00:16:16.201391 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Jul 7 00:16:16.201414 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 00:16:16.201431 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 00:16:16.201453 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 
Jul 7 00:16:16.201477 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 00:16:16.201497 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 7 00:16:16.201518 kernel: iommu: Default domain type: Translated Jul 7 00:16:16.201537 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:16:16.201556 kernel: efivars: Registered efivars operations Jul 7 00:16:16.201578 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:16:16.201596 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 00:16:16.201614 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jul 7 00:16:16.201665 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jul 7 00:16:16.201688 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff] Jul 7 00:16:16.201705 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jul 7 00:16:16.201722 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jul 7 00:16:16.201739 kernel: vgaarb: loaded Jul 7 00:16:16.201757 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 00:16:16.201775 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:16:16.201794 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:16:16.201812 kernel: pnp: PnP ACPI init Jul 7 00:16:16.201831 kernel: pnp: PnP ACPI: found 7 devices Jul 7 00:16:16.201857 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:16:16.201879 kernel: NET: Registered PF_INET protocol family Jul 7 00:16:16.201898 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 00:16:16.201917 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 7 00:16:16.201935 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:16:16.201954 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 00:16:16.201972 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 00:16:16.201991 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 7 00:16:16.202009 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 7 00:16:16.202033 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 7 00:16:16.202055 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 00:16:16.202074 kernel: NET: Registered PF_XDP protocol family Jul 7 00:16:16.202262 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 00:16:16.202428 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 00:16:16.202593 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 00:16:16.203880 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jul 7 00:16:16.204089 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 7 00:16:16.204120 kernel: PCI: CLS 0 bytes, default 64 Jul 7 00:16:16.204139 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 00:16:16.204156 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jul 7 00:16:16.204174 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 00:16:16.204192 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 7 00:16:16.204209 kernel: clocksource: Switched to clocksource tsc Jul 7 00:16:16.204226 kernel: Initialise system trusted keyrings Jul 7 00:16:16.204243 kernel: workingset: 
timestamp_bits=39 max_order=21 bucket_order=0 Jul 7 00:16:16.204262 kernel: Key type asymmetric registered Jul 7 00:16:16.204293 kernel: Asymmetric key parser 'x509' registered Jul 7 00:16:16.204312 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 00:16:16.204330 kernel: io scheduler mq-deadline registered Jul 7 00:16:16.204348 kernel: io scheduler kyber registered Jul 7 00:16:16.204367 kernel: io scheduler bfq registered Jul 7 00:16:16.204384 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 00:16:16.204402 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 7 00:16:16.204602 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jul 7 00:16:16.204624 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 7 00:16:16.205906 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jul 7 00:16:16.205933 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 7 00:16:16.206134 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jul 7 00:16:16.206158 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 00:16:16.206178 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:16:16.206197 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 7 00:16:16.206214 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jul 7 00:16:16.206231 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jul 7 00:16:16.206436 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jul 7 00:16:16.206468 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 00:16:16.206487 kernel: i8042: Warning: Keylock active Jul 7 00:16:16.206506 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 00:16:16.206526 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 00:16:16.206748 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 7 00:16:16.206933 kernel: rtc_cmos 00:00: registered as rtc0 Jul 7 00:16:16.207113 kernel: rtc_cmos 00:00: setting system clock to 2025-07-07T00:16:15 UTC (1751847375) Jul 7 00:16:16.207296 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 7 00:16:16.207320 kernel: intel_pstate: CPU model not supported Jul 7 00:16:16.207339 kernel: pstore: Using crash dump compression: deflate Jul 7 00:16:16.207358 kernel: pstore: Registered efi_pstore as persistent store backend Jul 7 00:16:16.207377 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:16:16.207395 kernel: Segment Routing with IPv6 Jul 7 00:16:16.207414 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 00:16:16.207433 kernel: NET: Registered PF_PACKET protocol family Jul 7 00:16:16.207451 kernel: Key type dns_resolver registered Jul 7 00:16:16.207473 kernel: IPI shorthand broadcast: enabled Jul 7 00:16:16.207492 kernel: sched_clock: Marking stable (3986005441, 139480680)->(4161914006, -36427885) Jul 7 00:16:16.207511 kernel: registered taskstats version 1 Jul 7 00:16:16.207529 kernel: Loading compiled-in X.509 certificates Jul 7 00:16:16.207548 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06' Jul 7 00:16:16.207566 kernel: Demotion targets for Node 0: null Jul 7 00:16:16.207585 kernel: Key type .fscrypt registered Jul 7 00:16:16.207601 kernel: Key type fscrypt-provisioning registered Jul 7 00:16:16.207620 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:16:16.209680 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Jul 7 00:16:16.209701 kernel: ima: No architecture policies found Jul 7 00:16:16.209720 kernel: clk: Disabling unused clocks Jul 7 00:16:16.209739 kernel: Warning: unable to open an initial console. Jul 7 00:16:16.209757 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 00:16:16.209776 kernel: Write protecting the kernel read-only data: 24576k Jul 7 00:16:16.209795 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 00:16:16.209814 kernel: Run /init as init process Jul 7 00:16:16.209846 kernel: with arguments: Jul 7 00:16:16.209865 kernel: /init Jul 7 00:16:16.209883 kernel: with environment: Jul 7 00:16:16.209901 kernel: HOME=/ Jul 7 00:16:16.209918 kernel: TERM=linux Jul 7 00:16:16.209937 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:16:16.209958 systemd[1]: Successfully made /usr/ read-only. Jul 7 00:16:16.209982 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:16:16.210007 systemd[1]: Detected virtualization google. Jul 7 00:16:16.210026 systemd[1]: Detected architecture x86-64. Jul 7 00:16:16.210045 systemd[1]: Running in initrd. Jul 7 00:16:16.210062 systemd[1]: No hostname configured, using default hostname. Jul 7 00:16:16.210083 systemd[1]: Hostname set to . Jul 7 00:16:16.210101 systemd[1]: Initializing machine ID from random generator. Jul 7 00:16:16.210121 systemd[1]: Queued start job for default target initrd.target. Jul 7 00:16:16.210141 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:16:16.210180 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:16:16.210206 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 00:16:16.210226 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:16:16.210246 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 00:16:16.210272 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 00:16:16.210298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 00:16:16.210319 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 00:16:16.210343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:16:16.210363 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:16:16.210388 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:16:16.210428 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:16:16.210452 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:16:16.210473 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:16:16.210497 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:16:16.210518 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jul 7 00:16:16.210537 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 00:16:16.210558 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 00:16:16.210579 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:16:16.210604 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:16:16.210974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:16:16.211000 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:16:16.211026 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:16:16.211047 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:16:16.211067 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 00:16:16.211088 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 00:16:16.211108 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:16:16.211128 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:16:16.211149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:16:16.211169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:16:16.211189 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:16:16.211215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:16:16.211274 systemd-journald[207]: Collecting audit messages is disabled. Jul 7 00:16:16.211334 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 00:16:16.211355 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:16:16.211378 systemd-journald[207]: Journal started Jul 7 00:16:16.211428 systemd-journald[207]: Runtime Journal (/run/log/journal/c307d02357674e5daacaecfb9f5b8487) is 8M, max 148.9M, 140.9M free. Jul 7 00:16:16.187863 systemd-modules-load[208]: Inserted module 'overlay' Jul 7 00:16:16.213968 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:16:16.218686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:16:16.240460 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:16:16.245828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:16:16.257042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:16:16.257685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:16:16.264112 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:16:16.270792 kernel: Bridge firewalling registered Jul 7 00:16:16.267587 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 00:16:16.271971 systemd-modules-load[208]: Inserted module 'br_netfilter' Jul 7 00:16:16.274381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:16:16.286285 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 7 00:16:16.290314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:16:16.297444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:16:16.319316 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:16:16.323167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:16:16.330731 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 00:16:16.346732 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:16:16.368525 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:16:16.423721 systemd-resolved[247]: Positive Trust Anchors: Jul 7 00:16:16.424133 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:16:16.424208 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:16:16.429127 systemd-resolved[247]: Defaulting to hostname 'linux'. Jul 7 00:16:16.430867 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:16:16.442906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:16:16.494690 kernel: SCSI subsystem initialized Jul 7 00:16:16.507684 kernel: Loading iSCSI transport class v2.0-870. Jul 7 00:16:16.519678 kernel: iscsi: registered transport (tcp) Jul 7 00:16:16.545688 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:16:16.545788 kernel: QLogic iSCSI HBA Driver Jul 7 00:16:16.570514 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:16:16.592445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:16:16.599737 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:16:16.663094 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:16:16.665267 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 00:16:16.726668 kernel: raid6: avx2x4 gen() 17865 MB/s Jul 7 00:16:16.743672 kernel: raid6: avx2x2 gen() 17904 MB/s Jul 7 00:16:16.761059 kernel: raid6: avx2x1 gen() 13839 MB/s Jul 7 00:16:16.761133 kernel: raid6: using algorithm avx2x2 gen() 17904 MB/s Jul 7 00:16:16.779119 kernel: raid6: .... 
xor() 18479 MB/s, rmw enabled Jul 7 00:16:16.779178 kernel: raid6: using avx2x2 recovery algorithm Jul 7 00:16:16.801665 kernel: xor: automatically using best checksumming function avx Jul 7 00:16:16.984672 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:16:16.993801 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:16:16.997155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:16:17.028567 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jul 7 00:16:17.037543 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:16:17.043201 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 00:16:17.074946 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jul 7 00:16:17.108519 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:16:17.114619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:16:17.207545 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:16:17.213021 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 00:16:17.327663 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 00:16:17.369975 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Jul 7 00:16:17.375435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:16:17.375657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:16:17.382039 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:16:17.436659 kernel: AES CTR mode by8 optimization enabled Jul 7 00:16:17.437902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:16:17.474606 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 7 00:16:17.474728 kernel: scsi host0: Virtio SCSI HBA Jul 7 00:16:17.466140 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:16:17.478350 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 7 00:16:17.531048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:16:17.539813 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 7 00:16:17.540052 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 7 00:16:17.540205 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 7 00:16:17.540352 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 7 00:16:17.540515 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 00:16:17.552733 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 00:16:17.552822 kernel: GPT:17805311 != 25165823 Jul 7 00:16:17.552847 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 00:16:17.554112 kernel: GPT:17805311 != 25165823 Jul 7 00:16:17.554141 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 00:16:17.555663 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:16:17.557772 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 7 00:16:17.641532 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jul 7 00:16:17.642184 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
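Annotation: the two GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 17805311 != 25165823) are the usual symptom of a disk image built for a smaller disk and then attached to a larger one: the backup GPT header still sits at the image's old last LBA (17805311) instead of the disk's real last LBA (25165823). The arithmetic below uses only numbers from the log and confirms the attached disk size reported for sda; the disk-uuid.service run that follows rewrites the headers, which is why the "Secondary Header is updated" messages appear afterwards.

# Illustrative arithmetic only; every value is taken from the messages above.
SECTOR = 512
old_last_lba = 17805311   # where the primary header expects the backup header
new_last_lba = 25165823   # actual last LBA of the attached PersistentDisk

disk_bytes = (new_last_lba + 1) * SECTOR
print(disk_bytes)              # 12884901888
print(disk_bytes / 10**9)      # 12.884901888 -> "12.9 GB"
print(disk_bytes / 2**30)      # 12.0         -> "12.0 GiB"
print((old_last_lba + 1) * SECTOR / 2**30)  # ~8.49 GiB, the size the image was built for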
Jul 7 00:16:17.671071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 7 00:16:17.692899 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jul 7 00:16:17.704233 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jul 7 00:16:17.704493 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jul 7 00:16:17.710291 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:16:17.714967 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:16:17.719997 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:16:17.727142 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 00:16:17.741522 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 00:16:17.754652 disk-uuid[612]: Primary Header is updated. Jul 7 00:16:17.754652 disk-uuid[612]: Secondary Entries is updated. Jul 7 00:16:17.754652 disk-uuid[612]: Secondary Header is updated. Jul 7 00:16:17.771574 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:16:17.778659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:16:17.803671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:16:18.816704 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 7 00:16:18.818714 disk-uuid[613]: The operation has completed successfully. Jul 7 00:16:18.904104 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 00:16:18.904256 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 00:16:18.949837 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 00:16:18.971098 sh[634]: Success Jul 7 00:16:18.993250 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 00:16:18.993351 kernel: device-mapper: uevent: version 1.0.3 Jul 7 00:16:18.993380 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 00:16:19.006650 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 7 00:16:19.099166 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 00:16:19.103914 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 00:16:19.122818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 00:16:19.142691 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 00:16:19.142772 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (646) Jul 7 00:16:19.148150 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239 Jul 7 00:16:19.148221 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:16:19.148247 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 00:16:19.174782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 00:16:19.175651 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:16:19.179175 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jul 7 00:16:19.181591 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 00:16:19.190174 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 00:16:19.229668 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (679) Jul 7 00:16:19.234015 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:16:19.234086 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:16:19.234113 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:16:19.247669 kernel: BTRFS info (device sda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:16:19.248293 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 00:16:19.254360 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 00:16:19.363305 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:16:19.379929 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:16:19.478309 systemd-networkd[815]: lo: Link UP Jul 7 00:16:19.478754 systemd-networkd[815]: lo: Gained carrier Jul 7 00:16:19.481698 systemd-networkd[815]: Enumeration completed Jul 7 00:16:19.481842 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:16:19.482374 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:16:19.482381 systemd-networkd[815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:16:19.485769 systemd-networkd[815]: eth0: Link UP Jul 7 00:16:19.485777 systemd-networkd[815]: eth0: Gained carrier Jul 7 00:16:19.485792 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:16:19.488882 systemd[1]: Reached target network.target - Network. Jul 7 00:16:19.500770 systemd-networkd[815]: eth0: DHCPv4 address 10.128.0.74/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 7 00:16:19.525149 ignition[740]: Ignition 2.21.0 Jul 7 00:16:19.525472 ignition[740]: Stage: fetch-offline Jul 7 00:16:19.525538 ignition[740]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:16:19.528535 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:16:19.525554 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:16:19.530944 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 00:16:19.525727 ignition[740]: parsed url from cmdline: "" Jul 7 00:16:19.525734 ignition[740]: no config URL provided Jul 7 00:16:19.525744 ignition[740]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:16:19.525767 ignition[740]: no config at "/usr/lib/ignition/user.ign" Jul 7 00:16:19.525779 ignition[740]: failed to fetch config: resource requires networking Jul 7 00:16:19.526163 ignition[740]: Ignition finished successfully Jul 7 00:16:19.565504 ignition[825]: Ignition 2.21.0 Jul 7 00:16:19.565523 ignition[825]: Stage: fetch Jul 7 00:16:19.565796 ignition[825]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:16:19.565814 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:16:19.565967 ignition[825]: parsed url from cmdline: "" Jul 7 00:16:19.565974 ignition[825]: no config URL provided Jul 7 00:16:19.565983 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:16:19.580182 unknown[825]: fetched base config from "system" Jul 7 00:16:19.565998 ignition[825]: no config at "/usr/lib/ignition/user.ign" Jul 7 00:16:19.580193 unknown[825]: fetched base config from "system" Jul 7 00:16:19.566045 ignition[825]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 7 00:16:19.580202 unknown[825]: fetched user config from "gcp" Jul 7 00:16:19.573133 ignition[825]: GET result: OK Jul 7 00:16:19.584030 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 00:16:19.573359 ignition[825]: parsing config with SHA512: 01fc860ea652adc0de6f2d601e76b8ef5b6470777e4d1e9df1ef1df743bc7c3011a27686847c97752cbcc259532651ec700a995b2b97d6dd0ad29bc2b86dde2d Jul 7 00:16:19.590419 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 00:16:19.580781 ignition[825]: fetch: fetch complete Jul 7 00:16:19.580788 ignition[825]: fetch: fetch passed Jul 7 00:16:19.580842 ignition[825]: Ignition finished successfully Jul 7 00:16:19.632018 ignition[832]: Ignition 2.21.0 Jul 7 00:16:19.632036 ignition[832]: Stage: kargs Jul 7 00:16:19.632308 ignition[832]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:16:19.636519 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 00:16:19.632325 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:16:19.642295 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 00:16:19.634196 ignition[832]: kargs: kargs passed Jul 7 00:16:19.634363 ignition[832]: Ignition finished successfully Jul 7 00:16:19.680397 ignition[839]: Ignition 2.21.0 Jul 7 00:16:19.680414 ignition[839]: Stage: disks Jul 7 00:16:19.680657 ignition[839]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:16:19.684404 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 00:16:19.680675 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:16:19.686696 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 00:16:19.683077 ignition[839]: disks: disks passed Jul 7 00:16:19.690090 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 00:16:19.683170 ignition[839]: Ignition finished successfully Jul 7 00:16:19.695055 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:16:19.700033 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:16:19.704037 systemd[1]: Reached target basic.target - Basic System. 
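Annotation: the fetch stage above pulled the instance's user-data from the GCE metadata server and logged only the SHA512 of the parsed config. A minimal Python sketch of the same request follows; it is not Ignition's code, just the stdlib equivalent. The URL is the one logged, and the "Metadata-Flavor: Google" header is the one the metadata server requires.

import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

def fetch_user_data() -> bytes:
    """Read the same metadata key Ignition's fetch stage requested above."""
    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

if __name__ == "__main__":
    print(len(fetch_user_data()), "bytes of user-data")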
Jul 7 00:16:19.709515 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 00:16:19.749776 systemd-fsck[848]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 7 00:16:19.764196 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 00:16:19.767332 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 00:16:19.955650 kernel: EXT4-fs (sda9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none. Jul 7 00:16:19.957281 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 00:16:19.960528 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 00:16:19.966151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:16:19.981451 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 00:16:19.983292 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 00:16:19.983379 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 00:16:19.983420 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:16:20.002451 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (856) Jul 7 00:16:20.002494 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:16:20.004842 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:16:20.004909 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:16:20.006234 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 00:16:20.008778 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 00:16:20.016090 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:16:20.136613 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:16:20.145246 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:16:20.154773 initrd-setup-root[894]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:16:20.161400 initrd-setup-root[901]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:16:20.327017 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:16:20.329402 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 00:16:20.337401 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:16:20.355212 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:16:20.357915 kernel: BTRFS info (device sda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:16:20.389350 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:16:20.397710 ignition[970]: INFO : Ignition 2.21.0 Jul 7 00:16:20.397710 ignition[970]: INFO : Stage: mount Jul 7 00:16:20.401003 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:16:20.401003 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:16:20.401003 ignition[970]: INFO : mount: mount passed Jul 7 00:16:20.401003 ignition[970]: INFO : Ignition finished successfully Jul 7 00:16:20.400777 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:16:20.406955 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jul 7 00:16:20.435745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:16:20.465685 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (983) Jul 7 00:16:20.468425 kernel: BTRFS info (device sda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:16:20.468490 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:16:20.468522 kernel: BTRFS info (device sda6): using free-space-tree Jul 7 00:16:20.476748 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:16:20.514959 ignition[1000]: INFO : Ignition 2.21.0 Jul 7 00:16:20.514959 ignition[1000]: INFO : Stage: files Jul 7 00:16:20.521793 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:16:20.521793 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:16:20.521793 ignition[1000]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:16:20.521793 ignition[1000]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:16:20.521793 ignition[1000]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:16:20.538751 ignition[1000]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:16:20.538751 ignition[1000]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:16:20.538751 ignition[1000]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:16:20.538751 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 00:16:20.538751 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 7 00:16:20.527715 unknown[1000]: wrote ssh authorized keys file for user: core Jul 7 00:16:20.646846 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:16:21.008726 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 00:16:21.008726 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:16:21.017798 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 7 00:16:21.442826 systemd-networkd[815]: eth0: Gained IPv6LL Jul 7 00:16:21.541415 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 00:16:21.975242 ignition[1000]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 00:16:21.975242 ignition[1000]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 00:16:21.982790 ignition[1000]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:16:21.982790 ignition[1000]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:16:21.982790 ignition[1000]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 00:16:21.982790 ignition[1000]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:16:21.982790 ignition[1000]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:16:21.982790 ignition[1000]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:16:21.982790 ignition[1000]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:16:21.982790 ignition[1000]: INFO : files: files passed Jul 7 00:16:21.982790 ignition[1000]: INFO : Ignition finished successfully Jul 7 00:16:21.984607 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:16:21.989728 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:16:21.998476 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:16:22.017379 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:16:22.017516 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
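Annotation (not part of the boot log): the Ignition "files" stage above pulls the Helm tarball, drops several manifests under /home/core, installs the Kubernetes sysext image plus its /etc/extensions symlink, and enables prepare-helm.service. As an illustration only, a config shaped roughly like the sketch below would drive those operations. Field names follow the Ignition v3 spec; the SSH key, update.conf contents, and unit body are placeholders, not what this node actually booted with.

```python
import json

# Rough sketch of an Ignition v3-style config matching the logged operations.
# Placeholder values are marked; verify field names against the spec version
# supported by Ignition 2.21.0 (3.4.x).
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            # ssh key is a placeholder
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"},
            },
            # update.conf contents are a placeholder
            {"path": "/etc/flatcar/update.conf", "contents": {"source": "data:,GROUP=stable%0A"}},
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                "hard": False,
            }
        ],
    },
    "systemd": {
        # unit body is a placeholder
        "units": [{"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}]
    },
}

print(json.dumps(config, indent=2))
```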
Jul 7 00:16:22.046362 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:16:22.046362 initrd-setup-root-after-ignition[1030]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:16:22.055829 initrd-setup-root-after-ignition[1034]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:16:22.049984 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:16:22.051772 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:16:22.058356 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:16:22.129506 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:16:22.129690 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:16:22.134598 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:16:22.138035 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:16:22.142179 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:16:22.144369 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:16:22.190327 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:16:22.196894 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:16:22.231904 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:16:22.232473 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:16:22.237447 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:16:22.242302 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:16:22.242935 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:16:22.253899 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:16:22.257124 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:16:22.260229 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:16:22.264162 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:16:22.269187 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:16:22.274202 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:16:22.279304 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:16:22.283239 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:16:22.287236 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:16:22.292208 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:16:22.296313 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:16:22.300206 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:16:22.300975 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:16:22.308272 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:16:22.311385 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:16:22.316244 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 7 00:16:22.316681 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:16:22.321411 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:16:22.321957 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:16:22.330108 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:16:22.330740 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:16:22.333283 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:16:22.333612 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:16:22.342544 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:16:22.351409 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:16:22.361843 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:16:22.362132 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:16:22.367199 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:16:22.367437 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:16:22.393857 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:16:22.396589 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:16:22.396807 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:16:22.402506 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:16:22.407242 ignition[1054]: INFO : Ignition 2.21.0 Jul 7 00:16:22.407242 ignition[1054]: INFO : Stage: umount Jul 7 00:16:22.407242 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:16:22.407242 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 7 00:16:22.407242 ignition[1054]: INFO : umount: umount passed Jul 7 00:16:22.407242 ignition[1054]: INFO : Ignition finished successfully Jul 7 00:16:22.402713 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:16:22.412472 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:16:22.412668 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:16:22.420508 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:16:22.420593 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:16:22.421105 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:16:22.421161 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:16:22.426030 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 00:16:22.426212 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 00:16:22.430159 systemd[1]: Stopped target network.target - Network. Jul 7 00:16:22.433945 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:16:22.434148 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:16:22.440896 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:16:22.444790 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:16:22.444880 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:16:22.448761 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:16:22.452784 systemd[1]: Stopped target sockets.target - Socket Units. 
Jul 7 00:16:22.456854 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:16:22.456934 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:16:22.462840 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:16:22.462925 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:16:22.466005 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:16:22.466223 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:16:22.470057 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:16:22.470258 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:16:22.474215 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:16:22.474422 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:16:22.476564 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:16:22.481605 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:16:22.489179 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:16:22.489361 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:16:22.496026 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:16:22.496355 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:16:22.496486 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:16:22.499202 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:16:22.500032 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:16:22.504832 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:16:22.504905 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:16:22.513257 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:16:22.523775 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:16:22.524055 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:16:22.530881 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:16:22.530966 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:16:22.537005 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:16:22.537093 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:16:22.540348 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:16:22.540443 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:16:22.542191 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:16:22.550995 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:16:22.551336 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:16:22.558194 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:16:22.558456 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:16:22.564138 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:16:22.564232 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jul 7 00:16:22.564533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:16:22.564723 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:16:22.572140 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:16:22.572228 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:16:22.583955 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:16:22.584057 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:16:22.590069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:16:22.590172 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:16:22.598553 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:16:22.608757 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:16:22.714660 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Jul 7 00:16:22.608872 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:16:22.614169 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:16:22.614275 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:16:22.621014 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 00:16:22.621109 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:16:22.625268 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:16:22.625490 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:16:22.633884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:16:22.633982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:16:22.641780 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:16:22.641849 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 7 00:16:22.641893 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:16:22.641939 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:16:22.642432 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:16:22.642548 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:16:22.644464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:16:22.644603 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:16:22.649050 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:16:22.657415 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:16:22.685463 systemd[1]: Switching root. 
Jul 7 00:16:22.778778 systemd-journald[207]: Journal stopped Jul 7 00:16:24.830812 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:16:24.830893 kernel: SELinux: policy capability open_perms=1 Jul 7 00:16:24.830919 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:16:24.830939 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:16:24.830959 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:16:24.830979 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:16:24.831014 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:16:24.831037 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:16:24.831057 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:16:24.831077 kernel: audit: type=1403 audit(1751847383.293:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:16:24.831103 systemd[1]: Successfully loaded SELinux policy in 48.169ms. Jul 7 00:16:24.831128 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.991ms. Jul 7 00:16:24.831154 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:16:24.831183 systemd[1]: Detected virtualization google. Jul 7 00:16:24.831207 systemd[1]: Detected architecture x86-64. Jul 7 00:16:24.831231 systemd[1]: Detected first boot. Jul 7 00:16:24.831255 systemd[1]: Initializing machine ID from random generator. Jul 7 00:16:24.831277 zram_generator::config[1098]: No configuration found. Jul 7 00:16:24.831307 kernel: Guest personality initialized and is inactive Jul 7 00:16:24.831331 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 00:16:24.831352 kernel: Initialized host personality Jul 7 00:16:24.831375 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:16:24.831399 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:16:24.831423 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:16:24.831447 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:16:24.831474 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:16:24.831497 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:16:24.831520 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:16:24.831543 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:16:24.831569 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:16:24.831593 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:16:24.831617 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:16:24.833186 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:16:24.833222 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:16:24.833246 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:16:24.833268 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 7 00:16:24.833318 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:16:24.833365 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:16:24.833388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:16:24.833418 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:16:24.833452 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:16:24.833481 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:16:24.833507 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:16:24.833531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:16:24.833556 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:16:24.833581 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:16:24.833605 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:16:24.835409 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:16:24.835477 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:16:24.835506 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:16:24.835532 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:16:24.835558 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:16:24.835583 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:16:24.835607 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:16:24.835669 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:16:24.835703 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:16:24.835727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:16:24.835752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:16:24.835777 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:16:24.835802 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:16:24.835827 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:16:24.835856 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:16:24.835882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:16:24.835907 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:16:24.835931 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:16:24.835957 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:16:24.835983 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:16:24.836017 systemd[1]: Reached target machines.target - Containers. Jul 7 00:16:24.836043 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 7 00:16:24.836074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:16:24.836100 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:16:24.836125 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:16:24.836150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:16:24.836174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:16:24.836200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:16:24.836225 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:16:24.836250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:16:24.836275 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:16:24.836305 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:16:24.836332 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:16:24.836357 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:16:24.836383 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:16:24.836408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:16:24.836433 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:16:24.836459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:16:24.836484 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:16:24.836514 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:16:24.836540 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:16:24.836565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:16:24.836591 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:16:24.836616 systemd[1]: Stopped verity-setup.service. Jul 7 00:16:24.840282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:16:24.840321 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:16:24.840346 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:16:24.840378 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:16:24.840401 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:16:24.840663 kernel: fuse: init (API version 7.41) Jul 7 00:16:24.840698 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:16:24.840724 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:16:24.840750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:16:24.840780 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:16:24.840805 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
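Annotation (not part of the boot log): the modprobe@*.service template units above load dm_mod, efi_pstore, fuse, loop, and drm early in the real root. The sketch below is an illustration only: it lists which of those ended up as loadable modules on the running system; anything compiled into the kernel will not appear in /proc/modules.

```python
# Check loaded modules against the ones the modprobe@ units above request.
# /proc/modules lists one loaded module per line, module name first.
wanted = {"dm_mod", "efi_pstore", "fuse", "loop", "configfs"}
with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}
for name in sorted(wanted):
    print(f"{name}: {'loaded' if name in loaded else 'missing or built-in'}")
```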
Jul 7 00:16:24.840831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:16:24.840863 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:16:24.840889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:16:24.840917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:16:24.840942 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:16:24.840967 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:16:24.840992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:16:24.841024 kernel: ACPI: bus type drm_connector registered Jul 7 00:16:24.841048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:16:24.841078 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:16:24.841104 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:16:24.841130 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:16:24.841210 systemd-journald[1172]: Collecting audit messages is disabled. Jul 7 00:16:24.841266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:16:24.841291 kernel: loop: module loaded Jul 7 00:16:24.841315 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:16:24.841351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:16:24.841382 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:16:24.841409 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:16:24.841435 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:16:24.841465 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:16:24.841490 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:16:24.841516 systemd-journald[1172]: Journal started Jul 7 00:16:24.841563 systemd-journald[1172]: Runtime Journal (/run/log/journal/752c8d8c9fa7468cb680959a1de9cc00) is 8M, max 148.9M, 140.9M free. Jul 7 00:16:24.845239 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:16:24.204235 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:16:24.220809 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 7 00:16:24.221435 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:16:24.849802 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:16:24.887393 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:16:24.891849 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:16:24.891911 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:16:24.895847 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:16:24.902984 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:16:24.908005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:16:24.911655 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. 
Jul 7 00:16:24.911691 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jul 7 00:16:24.912412 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:16:24.918914 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:16:24.921825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:16:24.925948 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:16:24.929821 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:16:24.933941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:16:24.939938 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:16:24.943782 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:16:24.954890 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:16:25.020998 systemd-journald[1172]: Time spent on flushing to /var/log/journal/752c8d8c9fa7468cb680959a1de9cc00 is 94.027ms for 959 entries. Jul 7 00:16:25.020998 systemd-journald[1172]: System Journal (/var/log/journal/752c8d8c9fa7468cb680959a1de9cc00) is 8M, max 584.8M, 576.8M free. Jul 7 00:16:25.164777 systemd-journald[1172]: Received client request to flush runtime journal. Jul 7 00:16:25.164862 kernel: loop0: detected capacity change from 0 to 113872 Jul 7 00:16:25.164901 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:16:25.172207 kernel: loop1: detected capacity change from 0 to 146240 Jul 7 00:16:25.030793 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:16:25.035325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:16:25.041207 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:16:25.054188 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:16:25.057709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:16:25.149762 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:16:25.177376 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:16:25.187721 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:16:25.193423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:16:25.224981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:16:25.254018 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jul 7 00:16:25.254057 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jul 7 00:16:25.263869 kernel: loop2: detected capacity change from 0 to 52072 Jul 7 00:16:25.266336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 00:16:25.333763 kernel: loop3: detected capacity change from 0 to 224512 Jul 7 00:16:25.438691 kernel: loop4: detected capacity change from 0 to 113872 Jul 7 00:16:25.476814 kernel: loop5: detected capacity change from 0 to 146240 Jul 7 00:16:25.542664 kernel: loop6: detected capacity change from 0 to 52072 Jul 7 00:16:25.577788 kernel: loop7: detected capacity change from 0 to 224512 Jul 7 00:16:25.623009 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jul 7 00:16:25.624095 (sd-merge)[1246]: Merged extensions into '/usr'. Jul 7 00:16:25.637996 systemd[1]: Reload requested from client PID 1224 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:16:25.638167 systemd[1]: Reloading... Jul 7 00:16:25.761664 zram_generator::config[1268]: No configuration found. Jul 7 00:16:26.074488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:16:26.143658 ldconfig[1219]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:16:26.285546 systemd[1]: Reloading finished in 646 ms. Jul 7 00:16:26.302958 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:16:26.306451 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:16:26.318493 systemd[1]: Starting ensure-sysext.service... Jul 7 00:16:26.325895 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:16:26.356579 systemd[1]: Reload requested from client PID 1312 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:16:26.356610 systemd[1]: Reloading... Jul 7 00:16:26.393212 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:16:26.393269 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:16:26.394321 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:16:26.394923 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:16:26.396820 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:16:26.397386 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Jul 7 00:16:26.397498 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Jul 7 00:16:26.405504 systemd-tmpfiles[1313]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:16:26.405742 systemd-tmpfiles[1313]: Skipping /boot Jul 7 00:16:26.450060 systemd-tmpfiles[1313]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:16:26.451859 systemd-tmpfiles[1313]: Skipping /boot Jul 7 00:16:26.539670 zram_generator::config[1346]: No configuration found. Jul 7 00:16:26.660261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:16:26.777290 systemd[1]: Reloading finished in 419 ms. Jul 7 00:16:26.806838 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
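Annotation (not part of the boot log): sd-merge reports the sysext images it found ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce') and overlays them onto /usr, which is what the loop0-loop7 capacity-change lines correspond to. The sketch below (illustration only) inspects the same state on a booted host; the directories are the standard sysext search locations, and `systemd-sysext status` prints the images currently merged per hierarchy.

```python
import subprocess
from pathlib import Path

# List extension images in the standard sysext search paths, then ask
# systemd-sysext which of them are currently merged.
for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    p = Path(d)
    if p.is_dir():
        for img in sorted(p.iterdir()):
            print(f"{d}: {img.name} -> {img.resolve()}")

subprocess.run(["systemd-sysext", "status"], check=False)
```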
Jul 7 00:16:26.829145 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:16:26.843095 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:16:26.850353 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:16:26.857060 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:16:26.867382 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:16:26.873803 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:16:26.883623 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:16:26.895982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:16:26.896570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:16:26.901110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:16:26.905982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:16:26.918884 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:16:26.921920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:16:26.922162 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:16:26.926399 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:16:26.929730 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:16:26.934283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:16:26.936929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:16:26.942025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:16:26.942733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:16:26.947168 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:16:26.948338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:16:26.971336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:16:26.972169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:16:26.976674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:16:26.984011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:16:26.991561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:16:26.995001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 7 00:16:26.996030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:16:26.996329 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:16:27.015439 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:16:27.023427 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:16:27.025093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:16:27.033729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:16:27.040955 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 00:16:27.043964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:16:27.045823 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:16:27.046119 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:16:27.048910 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:16:27.062911 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:16:27.068736 systemd[1]: Finished ensure-sysext.service. Jul 7 00:16:27.081316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:16:27.081742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:16:27.089183 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:16:27.089671 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:16:27.101558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:16:27.106346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:16:27.107750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:16:27.134289 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:16:27.135808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:16:27.142070 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:16:27.154330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:16:27.159012 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:16:27.175408 systemd-udevd[1387]: Using default interface naming scheme 'v255'. Jul 7 00:16:27.188139 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 00:16:27.200028 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jul 7 00:16:27.209365 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jul 7 00:16:27.219252 augenrules[1436]: No rules Jul 7 00:16:27.220283 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:16:27.220785 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:16:27.232536 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:16:27.238291 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:16:27.257463 systemd-resolved[1386]: Positive Trust Anchors: Jul 7 00:16:27.257688 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:16:27.257760 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:16:27.267420 systemd-resolved[1386]: Defaulting to hostname 'linux'. Jul 7 00:16:27.270447 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:16:27.280351 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:16:27.292367 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:16:27.303837 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:16:27.313990 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:16:27.323901 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:16:27.334701 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 00:16:27.345129 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:16:27.355104 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:16:27.366828 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:16:27.376824 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:16:27.376893 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:16:27.384805 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:16:27.395697 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:16:27.409228 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:16:27.426593 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:16:27.439060 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:16:27.449826 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:16:27.459397 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:16:27.476910 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 7 00:16:27.487753 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jul 7 00:16:27.497113 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:16:27.530911 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Jul 7 00:16:27.537114 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 7 00:16:27.560848 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:16:27.569846 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:16:27.578829 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:16:27.586919 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:16:27.586979 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:16:27.593480 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:16:27.608325 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:16:27.622749 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:16:27.653140 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:16:27.665782 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:16:27.674834 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:16:27.688491 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:16:27.714663 jq[1491]: false Jul 7 00:16:27.713703 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:16:27.729959 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 00:16:27.747387 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:16:27.768667 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:16:27.767046 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:16:27.780985 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:16:27.794132 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Refreshing passwd entry cache Jul 7 00:16:27.793883 oslogin_cache_refresh[1495]: Refreshing passwd entry cache Jul 7 00:16:27.799758 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Failure getting users, quitting Jul 7 00:16:27.799758 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:16:27.799758 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Refreshing group entry cache Jul 7 00:16:27.799758 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Failure getting groups, quitting Jul 7 00:16:27.799758 google_oslogin_nss_cache[1495]: oslogin_cache_refresh[1495]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:16:27.799000 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:16:27.797034 oslogin_cache_refresh[1495]: Failure getting users, quitting Jul 7 00:16:27.797061 oslogin_cache_refresh[1495]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 7 00:16:27.797117 oslogin_cache_refresh[1495]: Refreshing group entry cache Jul 7 00:16:27.797911 oslogin_cache_refresh[1495]: Failure getting groups, quitting Jul 7 00:16:27.797928 oslogin_cache_refresh[1495]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:16:27.801605 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jul 7 00:16:27.804166 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:16:27.821064 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 7 00:16:27.819062 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:16:27.840533 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:16:27.834695 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:16:27.861926 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:16:27.872512 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:16:27.880785 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jul 7 00:16:27.880888 extend-filesystems[1494]: Found /dev/sda6 Jul 7 00:16:27.905157 coreos-metadata[1488]: Jul 07 00:16:27.886 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jul 7 00:16:27.905157 coreos-metadata[1488]: Jul 07 00:16:27.886 INFO Failed to fetch: error sending request for url (http://169.254.169.254/computeMetadata/v1/instance/hostname) Jul 7 00:16:27.884201 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:16:27.884856 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 00:16:27.885211 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:16:27.897318 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:16:27.918337 extend-filesystems[1494]: Found /dev/sda9 Jul 7 00:16:27.898527 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:16:27.922397 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:16:27.925803 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:16:27.964290 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
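Annotation (not part of the boot log): coreos-metadata's first fetch of the GCE metadata server fails above simply because eth0 has not been configured yet; the agent retries once networking is up. The request is an ordinary HTTP GET against the link-local metadata address with the mandatory Metadata-Flavor: Google header, roughly as in this sketch (illustration only):

```python
import urllib.request

# Same metadata request coreos-metadata retries above. GCE's metadata server
# rejects requests that lack the "Metadata-Flavor: Google" header.
URL = "http://169.254.169.254/computeMetadata/v1/instance/hostname"
req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
try:
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.read().decode())
except OSError as exc:
    print(f"fetch failed (expected until networking is up): {exc}")
```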
Jul 7 00:16:27.967096 extend-filesystems[1494]: Checking size of /dev/sda9 Jul 7 00:16:28.006748 jq[1513]: true Jul 7 00:16:28.029154 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 7 00:16:28.057605 update_engine[1512]: I20250707 00:16:28.057020 1512 main.cc:92] Flatcar Update Engine starting Jul 7 00:16:28.070817 extend-filesystems[1494]: Resized partition /dev/sda9 Jul 7 00:16:28.097657 extend-filesystems[1550]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:16:28.109856 kernel: ACPI: button: Sleep Button [SLPF] Jul 7 00:16:28.117191 jq[1541]: true Jul 7 00:16:28.121186 systemd-networkd[1483]: lo: Link UP Jul 7 00:16:28.125658 kernel: EDAC MC: Ver: 3.0.0 Jul 7 00:16:28.124294 systemd-networkd[1483]: lo: Gained carrier Jul 7 00:16:28.146657 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jul 7 00:16:28.152928 systemd-networkd[1483]: Enumeration completed Jul 7 00:16:28.153126 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:16:28.153962 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:16:28.153970 systemd-networkd[1483]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:16:28.160481 systemd-networkd[1483]: eth0: Link UP Jul 7 00:16:28.160870 systemd-networkd[1483]: eth0: Gained carrier Jul 7 00:16:28.160909 systemd-networkd[1483]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:16:28.170869 systemd[1]: Reached target network.target - Network. Jul 7 00:16:28.186819 systemd-networkd[1483]: eth0: DHCPv4 address 10.128.0.74/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 7 00:16:28.190104 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:16:28.200678 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jul 7 00:16:28.219741 extend-filesystems[1550]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 7 00:16:28.219741 extend-filesystems[1550]: old_desc_blocks = 1, new_desc_blocks = 2 Jul 7 00:16:28.219741 extend-filesystems[1550]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jul 7 00:16:28.248402 extend-filesystems[1494]: Resized filesystem in /dev/sda9 Jul 7 00:16:28.257010 tar[1519]: linux-amd64/LICENSE Jul 7 00:16:28.257010 tar[1519]: linux-amd64/helm Jul 7 00:16:28.251585 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:16:28.278588 ntpd[1497]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:42 UTC 2025 (1): Starting Jul 7 00:16:28.291605 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:42 UTC 2025 (1): Starting Jul 7 00:16:28.291605 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:16:28.291605 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: ---------------------------------------------------- Jul 7 00:16:28.291605 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:16:28.291605 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:16:28.291605 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: corporation. 
Support and training for ntp-4 are Jul 7 00:16:28.291605 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: available at https://www.nwtime.org/support Jul 7 00:16:28.291605 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: ---------------------------------------------------- Jul 7 00:16:28.285552 ntpd[1497]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:16:28.293180 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:16:28.302996 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: proto: precision = 0.081 usec (-23) Jul 7 00:16:28.302996 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: basedate set to 2025-06-24 Jul 7 00:16:28.302996 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: gps base set to 2025-06-29 (week 2373) Jul 7 00:16:28.285605 ntpd[1497]: ---------------------------------------------------- Jul 7 00:16:28.285624 ntpd[1497]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:16:28.285978 ntpd[1497]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:16:28.285997 ntpd[1497]: corporation. Support and training for ntp-4 are Jul 7 00:16:28.286013 ntpd[1497]: available at https://www.nwtime.org/support Jul 7 00:16:28.286030 ntpd[1497]: ---------------------------------------------------- Jul 7 00:16:28.305449 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:16:28.293852 ntpd[1497]: proto: precision = 0.081 usec (-23) Jul 7 00:16:28.306922 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:16:28.299438 ntpd[1497]: basedate set to 2025-06-24 Jul 7 00:16:28.299467 ntpd[1497]: gps base set to 2025-06-29 (week 2373) Jul 7 00:16:28.316392 ntpd[1497]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:16:28.322086 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:16:28.322086 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:16:28.316474 ntpd[1497]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:16:28.322843 ntpd[1497]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:16:28.325823 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:16:28.325823 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: Listen normally on 3 eth0 10.128.0.74:123 Jul 7 00:16:28.325823 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: Listen normally on 4 lo [::1]:123 Jul 7 00:16:28.325823 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: bind(21) AF_INET6 fe80::4001:aff:fe80:4a%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:16:28.325823 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:4a%2#123 Jul 7 00:16:28.325823 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: failed to init interface for address fe80::4001:aff:fe80:4a%2 Jul 7 00:16:28.325823 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: Listening on routing socket on fd #21 for interface updates Jul 7 00:16:28.323568 ntpd[1497]: Listen normally on 3 eth0 10.128.0.74:123 Jul 7 00:16:28.324936 ntpd[1497]: Listen normally on 4 lo [::1]:123 Jul 7 00:16:28.325044 ntpd[1497]: bind(21) AF_INET6 fe80::4001:aff:fe80:4a%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:16:28.325079 ntpd[1497]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:4a%2#123 Jul 7 00:16:28.325105 ntpd[1497]: failed to init interface for address fe80::4001:aff:fe80:4a%2 Jul 7 00:16:28.325161 ntpd[1497]: Listening on routing socket on fd #21 for interface updates Jul 7 00:16:28.361217 ntpd[1497]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 
00:16:28.364732 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:16:28.364732 ntpd[1497]: 7 Jul 00:16:28 ntpd[1497]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:16:28.361274 ntpd[1497]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:16:28.399829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:16:28.405711 bash[1576]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:16:28.409775 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:16:28.410775 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:16:28.440972 systemd[1]: Starting sshkeys.service... Jul 7 00:16:28.479198 dbus-daemon[1489]: [system] SELinux support is enabled Jul 7 00:16:28.486021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jul 7 00:16:28.496935 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:16:28.513937 dbus-daemon[1489]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1483 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 00:16:28.517659 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:16:28.517715 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:16:28.531024 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:16:28.534980 dbus-daemon[1489]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:16:28.539891 update_engine[1512]: I20250707 00:16:28.537381 1512 update_check_scheduler.cc:74] Next update check in 2m57s Jul 7 00:16:28.541813 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:16:28.542025 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:16:28.561621 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:16:28.582950 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:16:28.612114 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 00:16:28.630830 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 00:16:28.643160 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:16:28.706710 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:16:28.761075 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
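The extend-filesystems unit above grows the root partition and then resizes the ext4 filesystem on-line (from 1617920 to 2538491 4k blocks). A minimal sketch of the equivalent manual steps, assuming the root filesystem lives on /dev/sda9 as in this log; growpart is one way to grow the partition, not necessarily what the unit itself calls:

    # grow partition 9 of /dev/sda into the remaining disk space (cloud-utils growpart)
    growpart /dev/sda 9
    # ext4 supports on-line growing, so the mounted root can be resized in place
    resize2fs /dev/sda9
    # confirm the new size in filesystem blocks
    dumpe2fs -h /dev/sda9 | grep 'Block count'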
Jul 7 00:16:28.805665 coreos-metadata[1589]: Jul 07 00:16:28.804 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jul 7 00:16:28.828188 coreos-metadata[1589]: Jul 07 00:16:28.827 INFO Fetch failed with 404: resource not found Jul 7 00:16:28.828188 coreos-metadata[1589]: Jul 07 00:16:28.827 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jul 7 00:16:28.829570 coreos-metadata[1589]: Jul 07 00:16:28.828 INFO Fetch successful Jul 7 00:16:28.829570 coreos-metadata[1589]: Jul 07 00:16:28.828 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jul 7 00:16:28.829570 coreos-metadata[1589]: Jul 07 00:16:28.829 INFO Fetch failed with 404: resource not found Jul 7 00:16:28.829570 coreos-metadata[1589]: Jul 07 00:16:28.829 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jul 7 00:16:28.844299 coreos-metadata[1589]: Jul 07 00:16:28.844 INFO Fetch failed with 404: resource not found Jul 7 00:16:28.844299 coreos-metadata[1589]: Jul 07 00:16:28.844 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jul 7 00:16:28.851679 coreos-metadata[1589]: Jul 07 00:16:28.848 INFO Fetch successful Jul 7 00:16:28.859789 unknown[1589]: wrote ssh authorized keys file for user: core Jul 7 00:16:28.887592 coreos-metadata[1488]: Jul 07 00:16:28.887 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #2 Jul 7 00:16:28.895172 coreos-metadata[1488]: Jul 07 00:16:28.894 INFO Fetch successful Jul 7 00:16:28.895172 coreos-metadata[1488]: Jul 07 00:16:28.894 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jul 7 00:16:28.897991 coreos-metadata[1488]: Jul 07 00:16:28.897 INFO Fetch successful Jul 7 00:16:28.897991 coreos-metadata[1488]: Jul 07 00:16:28.897 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jul 7 00:16:28.904676 coreos-metadata[1488]: Jul 07 00:16:28.901 INFO Fetch successful Jul 7 00:16:28.904676 coreos-metadata[1488]: Jul 07 00:16:28.901 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jul 7 00:16:28.904676 coreos-metadata[1488]: Jul 07 00:16:28.903 INFO Fetch successful Jul 7 00:16:28.994748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:16:28.997065 update-ssh-keys[1599]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:16:29.008047 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:16:29.029500 systemd[1]: Finished sshkeys.service. Jul 7 00:16:29.066749 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:16:29.076203 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
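The coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254; the 404s for sshKeys/block-project-ssh-keys simply mean those attributes are not set for this instance. The same endpoints can be queried by hand; the one requirement is the Metadata-Flavor header:

    # the GCE metadata server rejects requests without this header
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/hostname
    # the project-level ssh-keys endpoint that returned the keys above
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys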
Jul 7 00:16:29.142199 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:16:29.288222 ntpd[1497]: bind(24) AF_INET6 fe80::4001:aff:fe80:4a%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:16:29.288878 ntpd[1497]: 7 Jul 00:16:29 ntpd[1497]: bind(24) AF_INET6 fe80::4001:aff:fe80:4a%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:16:29.288878 ntpd[1497]: 7 Jul 00:16:29 ntpd[1497]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:4a%2#123 Jul 7 00:16:29.288878 ntpd[1497]: 7 Jul 00:16:29 ntpd[1497]: failed to init interface for address fe80::4001:aff:fe80:4a%2 Jul 7 00:16:29.288274 ntpd[1497]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:4a%2#123 Jul 7 00:16:29.288295 ntpd[1497]: failed to init interface for address fe80::4001:aff:fe80:4a%2 Jul 7 00:16:29.416811 systemd-logind[1511]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:16:29.416860 systemd-logind[1511]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 7 00:16:29.416892 systemd-logind[1511]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:16:29.417397 systemd-logind[1511]: New seat seat0. Jul 7 00:16:29.420773 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:16:29.533668 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:16:29.546886 containerd[1580]: time="2025-07-07T00:16:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:16:29.550898 containerd[1580]: time="2025-07-07T00:16:29.550292844Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:16:29.583833 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 00:16:29.585347 dbus-daemon[1489]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 00:16:29.588915 dbus-daemon[1489]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1590 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 00:16:29.604392 systemd[1]: Starting polkit.service - Authorization Manager... 
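The repeated bind failures for fe80::4001:aff:fe80:4a%2 are expected while the interface's IPv6 link-local address is not yet usable; ntpd keeps retrying and, later in this log, listens on that address successfully. A quick way to check ntpd's state by hand (a sketch, not taken from the log itself):

    # list configured peers and their reachability/offset
    ntpq -pn
    # show which UDP sockets are bound to the NTP port
    ss -ulpn | grep ':123'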
Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.613657619Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.377µs" Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.613711104Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.613742536Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.613966629Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.613990984Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.614031885Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.614127838Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.614146239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.614507994Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.614535558Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.614555124Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:16:29.614863 containerd[1580]: time="2025-07-07T00:16:29.614578619Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:16:29.615438 containerd[1580]: time="2025-07-07T00:16:29.615081184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:16:29.615438 containerd[1580]: time="2025-07-07T00:16:29.615410657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:16:29.615528 containerd[1580]: time="2025-07-07T00:16:29.615460582Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:16:29.615528 containerd[1580]: time="2025-07-07T00:16:29.615482124Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:16:29.615618 containerd[1580]: time="2025-07-07T00:16:29.615552610Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:16:29.616029 containerd[1580]: 
time="2025-07-07T00:16:29.615997578Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:16:29.616167 containerd[1580]: time="2025-07-07T00:16:29.616105144Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624664045Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624744167Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624770448Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624790689Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624846309Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624866703Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624888685Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624907750Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624925880Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624966230Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.624982588Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:16:29.625084 containerd[1580]: time="2025-07-07T00:16:29.625002217Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625167693Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625196440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625222585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625263426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625287055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625305685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625336370Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625353597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625373181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625391776Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625417903Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625509088Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625530480Z" level=info msg="Start snapshots syncer" Jul 7 00:16:29.625656 containerd[1580]: time="2025-07-07T00:16:29.625564489Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:16:29.626267 containerd[1580]: time="2025-07-07T00:16:29.625986673Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:16:29.626267 containerd[1580]: time="2025-07-07T00:16:29.626076184Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627419919Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:16:29.632047 containerd[1580]: 
time="2025-07-07T00:16:29.627596115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627665061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627689976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627711799Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627735579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627755505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627776740Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627833922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627856269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627883838Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627926784Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627950884Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:16:29.632047 containerd[1580]: time="2025-07-07T00:16:29.627966404Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.627982377Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.627995835Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.628012161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.628030763Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.628055825Z" level=info msg="runtime interface created" Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.628065135Z" level=info msg="created NRI interface" Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.628082130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:16:29.635825 containerd[1580]: 
time="2025-07-07T00:16:29.628102893Z" level=info msg="Connect containerd service" Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.628142621Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:16:29.635825 containerd[1580]: time="2025-07-07T00:16:29.633427875Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:16:29.649102 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:16:29.661442 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:16:29.706503 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:16:29.707155 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:16:29.718564 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:16:29.780839 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:16:29.794818 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:16:29.808542 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:16:29.819138 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:16:29.912968 polkitd[1626]: Started polkitd version 126 Jul 7 00:16:29.923270 polkitd[1626]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 00:16:29.924057 polkitd[1626]: Loading rules from directory /run/polkit-1/rules.d Jul 7 00:16:29.924139 polkitd[1626]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 00:16:29.926345 polkitd[1626]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 7 00:16:29.926409 polkitd[1626]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 00:16:29.926469 polkitd[1626]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 00:16:29.927788 polkitd[1626]: Finished loading, compiling and executing 2 rules Jul 7 00:16:29.928202 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 00:16:29.929323 dbus-daemon[1489]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 00:16:29.930639 polkitd[1626]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 00:16:29.954863 systemd-networkd[1483]: eth0: Gained IPv6LL Jul 7 00:16:29.960441 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:16:29.971613 systemd[1]: Reached target network-online.target - Network is Online. 
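Two of the containerd messages above are actionable: the daemon migrated a version-2 configuration at start-up (and suggests `containerd config migrate` to make that permanent), and the CRI plugin found no CNI configuration in /etc/cni/net.d, which is normal before a pod network add-on is installed. A hedged sketch of both; the bridge config values are purely illustrative, a real cluster normally gets its CNI config from its network add-on:

    # persist the configuration migration the daemon performed in memory
    containerd config migrate > /etc/containerd/config.toml
    systemctl restart containerd

    # drop a minimal single-plugin bridge config so the CRI plugin can set up pod networking
    cat > /etc/cni/net.d/10-bridge.conf <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
    EOF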
Jul 7 00:16:29.979573 containerd[1580]: time="2025-07-07T00:16:29.979522918Z" level=info msg="Start subscribing containerd event" Jul 7 00:16:29.979876 containerd[1580]: time="2025-07-07T00:16:29.979823583Z" level=info msg="Start recovering state" Jul 7 00:16:29.980076 containerd[1580]: time="2025-07-07T00:16:29.980059063Z" level=info msg="Start event monitor" Jul 7 00:16:29.980181 containerd[1580]: time="2025-07-07T00:16:29.980163018Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:16:29.980270 containerd[1580]: time="2025-07-07T00:16:29.980255349Z" level=info msg="Start streaming server" Jul 7 00:16:29.980368 containerd[1580]: time="2025-07-07T00:16:29.980341526Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:16:29.981702 containerd[1580]: time="2025-07-07T00:16:29.980434764Z" level=info msg="runtime interface starting up..." Jul 7 00:16:29.981702 containerd[1580]: time="2025-07-07T00:16:29.980456409Z" level=info msg="starting plugins..." Jul 7 00:16:29.981702 containerd[1580]: time="2025-07-07T00:16:29.980480705Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:16:29.984179 containerd[1580]: time="2025-07-07T00:16:29.984118873Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:16:29.985012 containerd[1580]: time="2025-07-07T00:16:29.984980481Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:16:29.985454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:16:29.986695 containerd[1580]: time="2025-07-07T00:16:29.986658841Z" level=info msg="containerd successfully booted in 0.443656s" Jul 7 00:16:29.998763 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:16:30.016029 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jul 7 00:16:30.024541 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:16:30.049452 systemd-hostnamed[1590]: Hostname set to (transient) Jul 7 00:16:30.052114 systemd-resolved[1386]: System hostname changed to 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal'. Jul 7 00:16:30.056415 init.sh[1660]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jul 7 00:16:30.056415 init.sh[1660]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jul 7 00:16:30.059417 init.sh[1660]: + /usr/bin/google_instance_setup Jul 7 00:16:30.079858 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:16:30.093426 systemd[1]: Started sshd@0-10.128.0.74:22-139.178.68.195:40934.service - OpenSSH per-connection server daemon (139.178.68.195:40934). Jul 7 00:16:30.127251 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:16:30.346619 tar[1519]: linux-amd64/README.md Jul 7 00:16:30.375323 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:16:30.528110 sshd[1669]: Accepted publickey for core from 139.178.68.195 port 40934 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:30.533494 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:30.549566 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:16:30.562809 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:16:30.593726 systemd-logind[1511]: New session 1 of user core. Jul 7 00:16:30.613029 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jul 7 00:16:30.631813 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:16:30.674937 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:16:30.682369 systemd-logind[1511]: New session c1 of user core. Jul 7 00:16:30.791730 instance-setup[1664]: INFO Running google_set_multiqueue. Jul 7 00:16:30.826837 instance-setup[1664]: INFO Set channels for eth0 to 2. Jul 7 00:16:30.833452 instance-setup[1664]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Jul 7 00:16:30.835644 instance-setup[1664]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Jul 7 00:16:30.836162 instance-setup[1664]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Jul 7 00:16:30.838174 instance-setup[1664]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Jul 7 00:16:30.838817 instance-setup[1664]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Jul 7 00:16:30.841715 instance-setup[1664]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Jul 7 00:16:30.841781 instance-setup[1664]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Jul 7 00:16:30.845373 instance-setup[1664]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Jul 7 00:16:30.862978 instance-setup[1664]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jul 7 00:16:30.869165 instance-setup[1664]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jul 7 00:16:30.871703 instance-setup[1664]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jul 7 00:16:30.871752 instance-setup[1664]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jul 7 00:16:30.908721 init.sh[1660]: + /usr/bin/google_metadata_script_runner --script-type startup Jul 7 00:16:31.069509 systemd[1682]: Queued start job for default target default.target. Jul 7 00:16:31.075984 systemd[1682]: Created slice app.slice - User Application Slice. Jul 7 00:16:31.076045 systemd[1682]: Reached target paths.target - Paths. Jul 7 00:16:31.076124 systemd[1682]: Reached target timers.target - Timers. Jul 7 00:16:31.080761 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:16:31.104921 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:16:31.106871 systemd[1682]: Reached target sockets.target - Sockets. Jul 7 00:16:31.106962 systemd[1682]: Reached target basic.target - Basic System. Jul 7 00:16:31.107034 systemd[1682]: Reached target default.target - Main User Target. Jul 7 00:16:31.107088 systemd[1682]: Startup finished in 405ms. Jul 7 00:16:31.107390 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:16:31.110610 startup-script[1716]: INFO Starting startup scripts. Jul 7 00:16:31.116331 startup-script[1716]: INFO No startup scripts found in metadata. Jul 7 00:16:31.116409 startup-script[1716]: INFO Finished running startup scripts. Jul 7 00:16:31.125071 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:16:31.147382 init.sh[1660]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jul 7 00:16:31.147382 init.sh[1660]: + daemon_pids=() Jul 7 00:16:31.147382 init.sh[1660]: + for d in accounts clock_skew network Jul 7 00:16:31.147382 init.sh[1660]: + daemon_pids+=($!) 
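google_set_multiqueue above spreads the virtio-net queue interrupts across CPUs and programs XPS for each tx queue; the two "write error" lines are warnings from its echo into sysfs, not fatal. The resulting state can be inspected directly, for example:

    # number of combined channels configured on the NIC
    ethtool -l eth0
    # which CPU services each virtio-net interrupt
    grep . /proc/irq/2[7-9]/smp_affinity_list /proc/irq/30/smp_affinity_list
    # which CPUs may transmit on each queue (XPS bitmaps)
    grep . /sys/class/net/eth0/queues/tx-*/xps_cpus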
Jul 7 00:16:31.147382 init.sh[1660]: + for d in accounts clock_skew network Jul 7 00:16:31.148123 init.sh[1722]: + /usr/bin/google_accounts_daemon Jul 7 00:16:31.148452 init.sh[1723]: + /usr/bin/google_clock_skew_daemon Jul 7 00:16:31.148792 init.sh[1660]: + daemon_pids+=($!) Jul 7 00:16:31.148792 init.sh[1660]: + for d in accounts clock_skew network Jul 7 00:16:31.148792 init.sh[1660]: + daemon_pids+=($!) Jul 7 00:16:31.148792 init.sh[1660]: + NOTIFY_SOCKET=/run/systemd/notify Jul 7 00:16:31.148792 init.sh[1660]: + /usr/bin/systemd-notify --ready Jul 7 00:16:31.148999 init.sh[1724]: + /usr/bin/google_network_daemon Jul 7 00:16:31.163649 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jul 7 00:16:31.176983 init.sh[1660]: + wait -n 1722 1723 1724 Jul 7 00:16:31.386050 systemd[1]: Started sshd@1-10.128.0.74:22-139.178.68.195:40938.service - OpenSSH per-connection server daemon (139.178.68.195:40938). Jul 7 00:16:31.641132 google-clock-skew[1723]: INFO Starting Google Clock Skew daemon. Jul 7 00:16:31.660595 google-clock-skew[1723]: INFO Clock drift token has changed: 0. Jul 7 00:16:31.668618 google-networking[1724]: INFO Starting Google Networking daemon. Jul 7 00:16:31.720913 groupadd[1738]: group added to /etc/group: name=google-sudoers, GID=1000 Jul 7 00:16:31.725882 groupadd[1738]: group added to /etc/gshadow: name=google-sudoers Jul 7 00:16:31.752096 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 40938 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:31.754483 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:31.765200 systemd-logind[1511]: New session 2 of user core. Jul 7 00:16:31.770830 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:16:31.779948 groupadd[1738]: new group: name=google-sudoers, GID=1000 Jul 7 00:16:31.813987 google-accounts[1722]: INFO Starting Google Accounts daemon. Jul 7 00:16:31.826519 google-accounts[1722]: WARNING OS Login not installed. Jul 7 00:16:31.828481 google-accounts[1722]: INFO Creating a new user account for 0. Jul 7 00:16:31.834502 init.sh[1747]: useradd: invalid user name '0': use --badname to ignore Jul 7 00:16:31.835009 google-accounts[1722]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jul 7 00:16:31.967617 sshd[1744]: Connection closed by 139.178.68.195 port 40938 Jul 7 00:16:31.969469 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:31.975865 systemd[1]: sshd@1-10.128.0.74:22-139.178.68.195:40938.service: Deactivated successfully. Jul 7 00:16:31.978751 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:16:31.983005 systemd-logind[1511]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:16:31.985024 systemd-logind[1511]: Removed session 2. Jul 7 00:16:32.000322 systemd-resolved[1386]: Clock change detected. Flushing caches. Jul 7 00:16:32.001758 google-clock-skew[1723]: INFO Synced system time with hardware clock. Jul 7 00:16:32.032550 systemd[1]: Started sshd@2-10.128.0.74:22-139.178.68.195:40948.service - OpenSSH per-connection server daemon (139.178.68.195:40948). Jul 7 00:16:32.158740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:16:32.170522 systemd[1]: Reached target multi-user.target - Multi-User System. 
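The google-accounts failure above is self-describing: the daemon tried to create a local account literally named '0', and useradd rejects purely numeric names unless told otherwise. The command it ran (exit status 3) and the variant its own error message suggests:

    # what google_accounts_daemon attempted
    useradd -m -s /bin/bash -p '*' 0
    # the workaround useradd itself proposes; accepts otherwise-invalid names
    useradd --badname -m -s /bin/bash -p '*' 0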
Jul 7 00:16:32.175874 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:16:32.179664 systemd[1]: Startup finished in 4.202s (kernel) + 7.471s (initrd) + 8.925s (userspace) = 20.599s. Jul 7 00:16:32.292151 ntpd[1497]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:4a%2]:123 Jul 7 00:16:32.293124 ntpd[1497]: 7 Jul 00:16:32 ntpd[1497]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:4a%2]:123 Jul 7 00:16:32.352445 sshd[1754]: Accepted publickey for core from 139.178.68.195 port 40948 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:32.354579 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:32.362608 systemd-logind[1511]: New session 3 of user core. Jul 7 00:16:32.369515 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:16:32.568552 sshd[1769]: Connection closed by 139.178.68.195 port 40948 Jul 7 00:16:32.569807 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:32.577118 systemd-logind[1511]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:16:32.578077 systemd[1]: sshd@2-10.128.0.74:22-139.178.68.195:40948.service: Deactivated successfully. Jul 7 00:16:32.580856 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:16:32.584437 systemd-logind[1511]: Removed session 3. Jul 7 00:16:33.098329 kubelet[1761]: E0707 00:16:33.098236 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:16:33.101276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:16:33.101578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:16:33.102198 systemd[1]: kubelet.service: Consumed 1.310s CPU time, 266.5M memory peak. Jul 7 00:16:42.629789 systemd[1]: Started sshd@3-10.128.0.74:22-139.178.68.195:58982.service - OpenSSH per-connection server daemon (139.178.68.195:58982). Jul 7 00:16:42.933146 sshd[1778]: Accepted publickey for core from 139.178.68.195 port 58982 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:42.935089 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:42.942324 systemd-logind[1511]: New session 4 of user core. Jul 7 00:16:42.949506 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:16:43.107200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:16:43.109889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:16:43.149094 sshd[1780]: Connection closed by 139.178.68.195 port 58982 Jul 7 00:16:43.149942 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:43.158600 systemd[1]: sshd@3-10.128.0.74:22-139.178.68.195:58982.service: Deactivated successfully. Jul 7 00:16:43.161971 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:16:43.164388 systemd-logind[1511]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:16:43.166093 systemd-logind[1511]: Removed session 4. 
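The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by `kubeadm init` or `kubeadm join`, so repeated failures before the node joins a cluster are expected. Purely for illustration, a minimal hand-written file of the expected kind, not this node's eventual configuration:

    # normally generated by kubeadm; shown only to make the missing file concrete
    mkdir -p /var/lib/kubelet
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
    systemctl restart kubelet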
Jul 7 00:16:43.207424 systemd[1]: Started sshd@4-10.128.0.74:22-139.178.68.195:58986.service - OpenSSH per-connection server daemon (139.178.68.195:58986). Jul 7 00:16:43.478432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:16:43.493957 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:16:43.525224 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 58986 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:43.528047 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:43.540260 systemd-logind[1511]: New session 5 of user core. Jul 7 00:16:43.543505 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:16:43.561628 kubelet[1796]: E0707 00:16:43.561558 1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:16:43.566311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:16:43.566565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:16:43.567414 systemd[1]: kubelet.service: Consumed 220ms CPU time, 108.9M memory peak. Jul 7 00:16:43.736902 sshd[1802]: Connection closed by 139.178.68.195 port 58986 Jul 7 00:16:43.738336 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:43.743880 systemd[1]: sshd@4-10.128.0.74:22-139.178.68.195:58986.service: Deactivated successfully. Jul 7 00:16:43.746447 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:16:43.747565 systemd-logind[1511]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:16:43.749569 systemd-logind[1511]: Removed session 5. Jul 7 00:16:43.794764 systemd[1]: Started sshd@5-10.128.0.74:22-139.178.68.195:59002.service - OpenSSH per-connection server daemon (139.178.68.195:59002). Jul 7 00:16:44.105189 sshd[1809]: Accepted publickey for core from 139.178.68.195 port 59002 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:44.107057 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:44.114806 systemd-logind[1511]: New session 6 of user core. Jul 7 00:16:44.122547 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:16:44.320757 sshd[1811]: Connection closed by 139.178.68.195 port 59002 Jul 7 00:16:44.322636 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:44.328199 systemd[1]: sshd@5-10.128.0.74:22-139.178.68.195:59002.service: Deactivated successfully. Jul 7 00:16:44.330516 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:16:44.331925 systemd-logind[1511]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:16:44.334083 systemd-logind[1511]: Removed session 6. Jul 7 00:16:44.375855 systemd[1]: Started sshd@6-10.128.0.74:22-139.178.68.195:59006.service - OpenSSH per-connection server daemon (139.178.68.195:59006). 
Jul 7 00:16:44.693988 sshd[1817]: Accepted publickey for core from 139.178.68.195 port 59006 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:44.695547 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:44.703080 systemd-logind[1511]: New session 7 of user core. Jul 7 00:16:44.712556 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:16:44.887633 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:16:44.888129 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:16:44.906175 sudo[1820]: pam_unix(sudo:session): session closed for user root Jul 7 00:16:44.949182 sshd[1819]: Connection closed by 139.178.68.195 port 59006 Jul 7 00:16:44.950713 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:44.956061 systemd[1]: sshd@6-10.128.0.74:22-139.178.68.195:59006.service: Deactivated successfully. Jul 7 00:16:44.958572 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:16:44.961610 systemd-logind[1511]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:16:44.963640 systemd-logind[1511]: Removed session 7. Jul 7 00:16:45.005049 systemd[1]: Started sshd@7-10.128.0.74:22-139.178.68.195:59020.service - OpenSSH per-connection server daemon (139.178.68.195:59020). Jul 7 00:16:45.318847 sshd[1826]: Accepted publickey for core from 139.178.68.195 port 59020 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:45.320728 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:45.328219 systemd-logind[1511]: New session 8 of user core. Jul 7 00:16:45.334528 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:16:45.500536 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:16:45.501029 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:16:45.507689 sudo[1830]: pam_unix(sudo:session): session closed for user root Jul 7 00:16:45.521677 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:16:45.522161 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:16:45.535511 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:16:45.592722 augenrules[1852]: No rules Jul 7 00:16:45.593145 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:16:45.593506 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:16:45.595736 sudo[1829]: pam_unix(sudo:session): session closed for user root Jul 7 00:16:45.639624 sshd[1828]: Connection closed by 139.178.68.195 port 59020 Jul 7 00:16:45.640530 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Jul 7 00:16:45.646398 systemd[1]: sshd@7-10.128.0.74:22-139.178.68.195:59020.service: Deactivated successfully. Jul 7 00:16:45.648826 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:16:45.650142 systemd-logind[1511]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:16:45.652412 systemd-logind[1511]: Removed session 8. Jul 7 00:16:45.696963 systemd[1]: Started sshd@8-10.128.0.74:22-139.178.68.195:59036.service - OpenSSH per-connection server daemon (139.178.68.195:59036). 
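After the two sudo invocations above remove the shipped SELinux/default rule files, audit-rules reloads with an empty set ("No rules"). Rules could be restored by dropping a file back into /etc/audit/rules.d and reloading; a small sketch with a hypothetical rule, not one from this system:

    # watch writes/attribute changes to /etc/passwd, tagged for searching with ausearch -k
    cat > /etc/audit/rules.d/10-passwd.rules <<'EOF'
    -w /etc/passwd -p wa -k passwd_changes
    EOF
    # regenerate /etc/audit/audit.rules from rules.d and load it into the kernel
    augenrules --load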
Jul 7 00:16:46.002435 sshd[1861]: Accepted publickey for core from 139.178.68.195 port 59036 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:16:46.004340 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:16:46.012307 systemd-logind[1511]: New session 9 of user core. Jul 7 00:16:46.019485 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:16:46.181081 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:16:46.181576 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:16:46.698524 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:16:46.714982 (dockerd)[1882]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:16:47.050440 dockerd[1882]: time="2025-07-07T00:16:47.049317855Z" level=info msg="Starting up" Jul 7 00:16:47.053169 dockerd[1882]: time="2025-07-07T00:16:47.053118538Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:16:47.128595 dockerd[1882]: time="2025-07-07T00:16:47.128504278Z" level=info msg="Loading containers: start." Jul 7 00:16:47.148284 kernel: Initializing XFRM netlink socket Jul 7 00:16:47.491616 systemd-networkd[1483]: docker0: Link UP Jul 7 00:16:47.498999 dockerd[1882]: time="2025-07-07T00:16:47.497467420Z" level=info msg="Loading containers: done." Jul 7 00:16:47.525040 dockerd[1882]: time="2025-07-07T00:16:47.524838124Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:16:47.525040 dockerd[1882]: time="2025-07-07T00:16:47.524974674Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:16:47.526282 dockerd[1882]: time="2025-07-07T00:16:47.525597901Z" level=info msg="Initializing buildkit" Jul 7 00:16:47.527160 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3633199465-merged.mount: Deactivated successfully. Jul 7 00:16:47.559022 dockerd[1882]: time="2025-07-07T00:16:47.558947122Z" level=info msg="Completed buildkit initialization" Jul 7 00:16:47.568593 dockerd[1882]: time="2025-07-07T00:16:47.568498205Z" level=info msg="Daemon has completed initialization" Jul 7 00:16:47.569317 dockerd[1882]: time="2025-07-07T00:16:47.568799143Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:16:47.568841 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:16:48.489118 containerd[1580]: time="2025-07-07T00:16:48.489060544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 00:16:49.016747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987033923.mount: Deactivated successfully. 
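The overlay2 warning above ("Not using native diff ... CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is informational; the daemon still runs with the overlay2 storage driver. Quick checks, as a sketch:

    # confirm the active storage driver
    docker info --format '{{.Driver}}'
    # full storage section, including the native-diff note
    docker info | grep -A5 'Storage Driver'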
Jul 7 00:16:50.549537 containerd[1580]: time="2025-07-07T00:16:50.549453307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:50.551052 containerd[1580]: time="2025-07-07T00:16:50.550991769Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28805673" Jul 7 00:16:50.552298 containerd[1580]: time="2025-07-07T00:16:50.552156496Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:50.555419 containerd[1580]: time="2025-07-07T00:16:50.555343995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:50.557277 containerd[1580]: time="2025-07-07T00:16:50.556696273Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.067589099s" Jul 7 00:16:50.557277 containerd[1580]: time="2025-07-07T00:16:50.556748719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 00:16:50.557452 containerd[1580]: time="2025-07-07T00:16:50.557390635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 00:16:52.075304 containerd[1580]: time="2025-07-07T00:16:52.075200639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:52.076687 containerd[1580]: time="2025-07-07T00:16:52.076629276Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24785846" Jul 7 00:16:52.078007 containerd[1580]: time="2025-07-07T00:16:52.077938632Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:52.081181 containerd[1580]: time="2025-07-07T00:16:52.081093548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:52.082664 containerd[1580]: time="2025-07-07T00:16:52.082475831Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.525048434s" Jul 7 00:16:52.082664 containerd[1580]: time="2025-07-07T00:16:52.082525910Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 00:16:52.083562 containerd[1580]: 
time="2025-07-07T00:16:52.083521962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 00:16:53.233069 containerd[1580]: time="2025-07-07T00:16:53.232995725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:53.234490 containerd[1580]: time="2025-07-07T00:16:53.234431916Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19178832" Jul 7 00:16:53.235964 containerd[1580]: time="2025-07-07T00:16:53.235883111Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:53.239715 containerd[1580]: time="2025-07-07T00:16:53.239656098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:53.240672 containerd[1580]: time="2025-07-07T00:16:53.240504175Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.156942455s" Jul 7 00:16:53.240672 containerd[1580]: time="2025-07-07T00:16:53.240545881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 00:16:53.241551 containerd[1580]: time="2025-07-07T00:16:53.241514512Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 00:16:53.817085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:16:53.819741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:16:54.247792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:16:54.262261 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:16:54.356553 kubelet[2159]: E0707 00:16:54.356488 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:16:54.361141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:16:54.361421 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:16:54.362000 systemd[1]: kubelet.service: Consumed 259ms CPU time, 108.6M memory peak. Jul 7 00:16:54.514925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1026306893.mount: Deactivated successfully. 
Jul 7 00:16:55.161914 containerd[1580]: time="2025-07-07T00:16:55.161805200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:55.163223 containerd[1580]: time="2025-07-07T00:16:55.163162260Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30897258" Jul 7 00:16:55.164678 containerd[1580]: time="2025-07-07T00:16:55.164606297Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:55.167291 containerd[1580]: time="2025-07-07T00:16:55.167229494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:55.168287 containerd[1580]: time="2025-07-07T00:16:55.168011641Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.926450679s" Jul 7 00:16:55.168287 containerd[1580]: time="2025-07-07T00:16:55.168059111Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 00:16:55.168720 containerd[1580]: time="2025-07-07T00:16:55.168683896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:16:55.618408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503036171.mount: Deactivated successfully. 
Jul 7 00:16:56.752211 containerd[1580]: time="2025-07-07T00:16:56.752125390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:56.753628 containerd[1580]: time="2025-07-07T00:16:56.753564786Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Jul 7 00:16:56.755009 containerd[1580]: time="2025-07-07T00:16:56.754927250Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:56.759270 containerd[1580]: time="2025-07-07T00:16:56.758902292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:56.760751 containerd[1580]: time="2025-07-07T00:16:56.760707446Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.591976748s" Jul 7 00:16:56.760934 containerd[1580]: time="2025-07-07T00:16:56.760909508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:16:56.761846 containerd[1580]: time="2025-07-07T00:16:56.761700284Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:16:57.152543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689313265.mount: Deactivated successfully. 
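Each "Pulled image … in N s" line above corresponds to an image that is now stored in containerd's k8s.io namespace (the ImageCreate events show both the tag and the digest reference). Assuming crictl and ctr are available on the node, the result can be listed like this (illustrative):

    crictl images | grep -E 'registry.k8s.io/(kube-|coredns|etcd|pause)'
    ctr -n k8s.io images ls -q | grep '^registry.k8s.io'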
Jul 7 00:16:57.157621 containerd[1580]: time="2025-07-07T00:16:57.157560181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:16:57.158543 containerd[1580]: time="2025-07-07T00:16:57.158504629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Jul 7 00:16:57.160279 containerd[1580]: time="2025-07-07T00:16:57.160216490Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:16:57.163159 containerd[1580]: time="2025-07-07T00:16:57.163093349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:16:57.164348 containerd[1580]: time="2025-07-07T00:16:57.164023610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 401.98369ms" Jul 7 00:16:57.164348 containerd[1580]: time="2025-07-07T00:16:57.164065338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:16:57.165033 containerd[1580]: time="2025-07-07T00:16:57.164853127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 00:16:57.566466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980497852.mount: Deactivated successfully. 
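Unlike the other images, the pause image is recorded with an extra io.cri-containerd.pinned label, which protects the sandbox image from image garbage collection. One way to check the pin from the node, assuming crictl is installed (illustrative):

    crictl inspecti registry.k8s.io/pause:3.10 | grep -i pinned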
Jul 7 00:16:59.906901 containerd[1580]: time="2025-07-07T00:16:59.906827834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:59.908462 containerd[1580]: time="2025-07-07T00:16:59.908367619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557924" Jul 7 00:16:59.909741 containerd[1580]: time="2025-07-07T00:16:59.909661699Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:59.913300 containerd[1580]: time="2025-07-07T00:16:59.913204229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:16:59.915038 containerd[1580]: time="2025-07-07T00:16:59.914786776Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.74989261s" Jul 7 00:16:59.915038 containerd[1580]: time="2025-07-07T00:16:59.914852904Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 00:17:00.089757 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 00:17:02.504813 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:17:02.505137 systemd[1]: kubelet.service: Consumed 259ms CPU time, 108.6M memory peak. Jul 7 00:17:02.508570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:17:02.551844 systemd[1]: Reload requested from client PID 2310 ('systemctl') (unit session-9.scope)... Jul 7 00:17:02.551868 systemd[1]: Reloading... Jul 7 00:17:02.736295 zram_generator::config[2354]: No configuration found. Jul 7 00:17:02.884652 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:17:03.052145 systemd[1]: Reloading finished in 499 ms. Jul 7 00:17:03.130901 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:17:03.131015 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:17:03.131524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:17:03.131591 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.3M memory peak. Jul 7 00:17:03.134064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:17:03.434771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:17:03.444915 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:17:03.505902 kubelet[2405]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
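The deprecation warnings above are for flags still passed on the kubelet command line (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) rather than through the config file, and the empty KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS variables are normally filled in by a systemd drop-in. The following are the usual places to look; the drop-in paths are conventional kubeadm locations and an assumption here:

    systemctl cat kubelet.service
    cat /etc/systemd/system/kubelet.service.d/*.conf 2>/dev/null
    cat /var/lib/kubelet/kubeadm-flags.env 2>/dev/null    # typically defines KUBELET_KUBEADM_ARGS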
Jul 7 00:17:03.505902 kubelet[2405]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:17:03.506507 kubelet[2405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:17:03.506507 kubelet[2405]: I0707 00:17:03.506045 2405 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:17:04.103144 kubelet[2405]: I0707 00:17:04.103064 2405 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:17:04.103144 kubelet[2405]: I0707 00:17:04.103121 2405 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:17:04.103607 kubelet[2405]: I0707 00:17:04.103564 2405 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:17:04.153434 kubelet[2405]: E0707 00:17:04.153365 2405 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:17:04.154652 kubelet[2405]: I0707 00:17:04.154442 2405 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:17:04.171137 kubelet[2405]: I0707 00:17:04.171105 2405 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:17:04.175004 kubelet[2405]: I0707 00:17:04.174972 2405 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:17:04.175400 kubelet[2405]: I0707 00:17:04.175340 2405 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:17:04.175642 kubelet[2405]: I0707 00:17:04.175386 2405 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:17:04.176735 kubelet[2405]: I0707 00:17:04.176688 2405 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:17:04.176735 kubelet[2405]: I0707 00:17:04.176725 2405 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:17:04.176938 kubelet[2405]: I0707 00:17:04.176901 2405 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:17:04.182531 kubelet[2405]: I0707 00:17:04.182489 2405 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:17:04.182531 kubelet[2405]: I0707 00:17:04.182536 2405 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:17:04.182708 kubelet[2405]: I0707 00:17:04.182574 2405 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:17:04.182708 kubelet[2405]: I0707 00:17:04.182601 2405 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:17:04.190280 kubelet[2405]: W0707 00:17:04.189710 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Jul 7 00:17:04.190280 kubelet[2405]: E0707 00:17:04.189819 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:17:04.190854 
kubelet[2405]: W0707 00:17:04.190798 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Jul 7 00:17:04.190942 kubelet[2405]: E0707 00:17:04.190873 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:17:04.191371 kubelet[2405]: I0707 00:17:04.191344 2405 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:17:04.192004 kubelet[2405]: I0707 00:17:04.191979 2405 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:17:04.192115 kubelet[2405]: W0707 00:17:04.192064 2405 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:17:04.195622 kubelet[2405]: I0707 00:17:04.195561 2405 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:17:04.195622 kubelet[2405]: I0707 00:17:04.195610 2405 server.go:1287] "Started kubelet" Jul 7 00:17:04.203776 kubelet[2405]: I0707 00:17:04.203103 2405 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:17:04.204605 kubelet[2405]: I0707 00:17:04.204568 2405 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:17:04.206095 kubelet[2405]: I0707 00:17:04.205408 2405 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:17:04.206095 kubelet[2405]: I0707 00:17:04.205805 2405 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:17:04.208119 kubelet[2405]: I0707 00:17:04.207810 2405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:17:04.208339 kubelet[2405]: E0707 00:17:04.206038 2405 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal.184fcff35bdc5b90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,UID:ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,},FirstTimestamp:2025-07-07 00:17:04.195582864 +0000 UTC m=+0.745180058,LastTimestamp:2025-07-07 00:17:04.195582864 +0000 UTC m=+0.745180058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,}" Jul 7 00:17:04.208866 kubelet[2405]: I0707 00:17:04.208842 2405 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:17:04.216039 kubelet[2405]: E0707 00:17:04.215709 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" Jul 7 00:17:04.216180 kubelet[2405]: I0707 00:17:04.216083 2405 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:17:04.216407 kubelet[2405]: I0707 00:17:04.216383 2405 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:17:04.216504 kubelet[2405]: I0707 00:17:04.216459 2405 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:17:04.217300 kubelet[2405]: W0707 00:17:04.217201 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Jul 7 00:17:04.217508 kubelet[2405]: E0707 00:17:04.217445 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:17:04.219796 kubelet[2405]: I0707 00:17:04.218520 2405 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:17:04.219796 kubelet[2405]: I0707 00:17:04.218675 2405 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:17:04.220410 kubelet[2405]: E0707 00:17:04.220360 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.74:6443: connect: connection refused" interval="200ms" Jul 7 00:17:04.221745 kubelet[2405]: I0707 00:17:04.221722 2405 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:17:04.230076 kubelet[2405]: E0707 00:17:04.230037 2405 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:17:04.263112 kubelet[2405]: I0707 00:17:04.263034 2405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:17:04.264952 kubelet[2405]: I0707 00:17:04.264627 2405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:17:04.264952 kubelet[2405]: I0707 00:17:04.264654 2405 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:17:04.264952 kubelet[2405]: I0707 00:17:04.264676 2405 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:17:04.264952 kubelet[2405]: I0707 00:17:04.264683 2405 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:17:04.264952 kubelet[2405]: E0707 00:17:04.264738 2405 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:17:04.272231 kubelet[2405]: I0707 00:17:04.272196 2405 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:17:04.272231 kubelet[2405]: I0707 00:17:04.272224 2405 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:17:04.272588 kubelet[2405]: I0707 00:17:04.272564 2405 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:17:04.272963 kubelet[2405]: W0707 00:17:04.272928 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Jul 7 00:17:04.273063 kubelet[2405]: E0707 00:17:04.272981 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:17:04.315976 kubelet[2405]: E0707 00:17:04.315911 2405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" Jul 7 00:17:04.323437 kubelet[2405]: I0707 00:17:04.323381 2405 policy_none.go:49] "None policy: Start" Jul 7 00:17:04.323437 kubelet[2405]: I0707 00:17:04.323426 2405 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:17:04.323437 kubelet[2405]: I0707 00:17:04.323449 2405 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:17:04.334889 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:17:04.348282 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:17:04.353640 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:17:04.365846 kubelet[2405]: E0707 00:17:04.365785 2405 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:17:04.374762 kubelet[2405]: I0707 00:17:04.374723 2405 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:17:04.375131 kubelet[2405]: I0707 00:17:04.375069 2405 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:17:04.375131 kubelet[2405]: I0707 00:17:04.375097 2405 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:17:04.376289 kubelet[2405]: I0707 00:17:04.376215 2405 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:17:04.378898 kubelet[2405]: E0707 00:17:04.378824 2405 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:17:04.379353 kubelet[2405]: E0707 00:17:04.379318 2405 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" Jul 7 00:17:04.421716 kubelet[2405]: E0707 00:17:04.421658 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.74:6443: connect: connection refused" interval="400ms" Jul 7 00:17:04.481273 kubelet[2405]: I0707 00:17:04.481156 2405 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.481699 kubelet[2405]: E0707 00:17:04.481657 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.74:6443/api/v1/nodes\": dial tcp 10.128.0.74:6443: connect: connection refused" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.586206 systemd[1]: Created slice kubepods-burstable-pod8c63bd2f548281cac403084a2ef9b3da.slice - libcontainer container kubepods-burstable-pod8c63bd2f548281cac403084a2ef9b3da.slice. Jul 7 00:17:04.595591 kubelet[2405]: E0707 00:17:04.595432 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.601615 systemd[1]: Created slice kubepods-burstable-podcb0c8f67b656d8dbe194b8654f83baf2.slice - libcontainer container kubepods-burstable-podcb0c8f67b656d8dbe194b8654f83baf2.slice. Jul 7 00:17:04.604954 kubelet[2405]: E0707 00:17:04.604848 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.615295 systemd[1]: Created slice kubepods-burstable-podaeb626248a62e0352f07d4b73a3bde7b.slice - libcontainer container kubepods-burstable-podaeb626248a62e0352f07d4b73a3bde7b.slice. 
Jul 7 00:17:04.618262 kubelet[2405]: E0707 00:17:04.618208 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619309 kubelet[2405]: I0707 00:17:04.619212 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c63bd2f548281cac403084a2ef9b3da-k8s-certs\") pod \"kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"8c63bd2f548281cac403084a2ef9b3da\") " pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619424 kubelet[2405]: I0707 00:17:04.619381 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c63bd2f548281cac403084a2ef9b3da-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"8c63bd2f548281cac403084a2ef9b3da\") " pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619493 kubelet[2405]: I0707 00:17:04.619439 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619493 kubelet[2405]: I0707 00:17:04.619474 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619607 kubelet[2405]: I0707 00:17:04.619507 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c63bd2f548281cac403084a2ef9b3da-ca-certs\") pod \"kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"8c63bd2f548281cac403084a2ef9b3da\") " pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619607 kubelet[2405]: I0707 00:17:04.619537 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-ca-certs\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619705 kubelet[2405]: I0707 00:17:04.619601 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619705 kubelet[2405]: I0707 00:17:04.619644 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.619705 kubelet[2405]: I0707 00:17:04.619681 2405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aeb626248a62e0352f07d4b73a3bde7b-kubeconfig\") pod \"kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"aeb626248a62e0352f07d4b73a3bde7b\") " pod="kube-system/kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.687978 kubelet[2405]: I0707 00:17:04.687807 2405 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.688312 kubelet[2405]: E0707 00:17:04.688278 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.74:6443/api/v1/nodes\": dial tcp 10.128.0.74:6443: connect: connection refused" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:04.823023 kubelet[2405]: E0707 00:17:04.822930 2405 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.74:6443: connect: connection refused" interval="800ms" Jul 7 00:17:04.897412 containerd[1580]: time="2025-07-07T00:17:04.897265053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,Uid:8c63bd2f548281cac403084a2ef9b3da,Namespace:kube-system,Attempt:0,}" Jul 7 00:17:04.907420 containerd[1580]: time="2025-07-07T00:17:04.907295084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,Uid:cb0c8f67b656d8dbe194b8654f83baf2,Namespace:kube-system,Attempt:0,}" Jul 7 00:17:04.922916 containerd[1580]: time="2025-07-07T00:17:04.922619766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,Uid:aeb626248a62e0352f07d4b73a3bde7b,Namespace:kube-system,Attempt:0,}" Jul 7 00:17:04.940980 containerd[1580]: time="2025-07-07T00:17:04.940927608Z" level=info msg="connecting to shim 6fe08adb5c9b13a2902ffc7a94c6c1f373e79c7086e7e0c5a415e412f235b596" address="unix:///run/containerd/s/6584f662664c3e1298ef3841270b380b8421490a68ce7e13843f0fb7a3630d5b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:04.989579 containerd[1580]: time="2025-07-07T00:17:04.989446879Z" level=info msg="connecting to shim d46ae6e15b00a41626a40b7f885c6dc0b2e6e3ff475f3e3d043109a001877884" 
address="unix:///run/containerd/s/cf91270549fb63eb1cc3f3285e5118d941e5f97050f1b1b5cd856af174c2dce6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:05.000917 containerd[1580]: time="2025-07-07T00:17:05.000562505Z" level=info msg="connecting to shim 4ecc731e303f0f0b640c5d393eb154e7f4b98fc77b5677d34d82f7a7e026d1ec" address="unix:///run/containerd/s/fd6eb086c88cbb57f3fadbf8c14ea72e12b9e16294d806240857d56ba0ba386a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:05.031571 systemd[1]: Started cri-containerd-6fe08adb5c9b13a2902ffc7a94c6c1f373e79c7086e7e0c5a415e412f235b596.scope - libcontainer container 6fe08adb5c9b13a2902ffc7a94c6c1f373e79c7086e7e0c5a415e412f235b596. Jul 7 00:17:05.077724 systemd[1]: Started cri-containerd-4ecc731e303f0f0b640c5d393eb154e7f4b98fc77b5677d34d82f7a7e026d1ec.scope - libcontainer container 4ecc731e303f0f0b640c5d393eb154e7f4b98fc77b5677d34d82f7a7e026d1ec. Jul 7 00:17:05.081141 systemd[1]: Started cri-containerd-d46ae6e15b00a41626a40b7f885c6dc0b2e6e3ff475f3e3d043109a001877884.scope - libcontainer container d46ae6e15b00a41626a40b7f885c6dc0b2e6e3ff475f3e3d043109a001877884. Jul 7 00:17:05.098181 kubelet[2405]: I0707 00:17:05.098019 2405 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:05.098877 kubelet[2405]: E0707 00:17:05.098836 2405 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.74:6443/api/v1/nodes\": dial tcp 10.128.0.74:6443: connect: connection refused" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:05.150399 containerd[1580]: time="2025-07-07T00:17:05.150111020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,Uid:8c63bd2f548281cac403084a2ef9b3da,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fe08adb5c9b13a2902ffc7a94c6c1f373e79c7086e7e0c5a415e412f235b596\"" Jul 7 00:17:05.153696 kubelet[2405]: E0707 00:17:05.153660 2405 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-21291" Jul 7 00:17:05.155754 containerd[1580]: time="2025-07-07T00:17:05.155714378Z" level=info msg="CreateContainer within sandbox \"6fe08adb5c9b13a2902ffc7a94c6c1f373e79c7086e7e0c5a415e412f235b596\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:17:05.169503 containerd[1580]: time="2025-07-07T00:17:05.169457795Z" level=info msg="Container 3d725dc835be641759509dc86adb04e2527d181b567b5caa99155e9feebc74e4: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:05.179440 containerd[1580]: time="2025-07-07T00:17:05.179102257Z" level=info msg="CreateContainer within sandbox \"6fe08adb5c9b13a2902ffc7a94c6c1f373e79c7086e7e0c5a415e412f235b596\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3d725dc835be641759509dc86adb04e2527d181b567b5caa99155e9feebc74e4\"" Jul 7 00:17:05.181269 containerd[1580]: time="2025-07-07T00:17:05.181205845Z" level=info msg="StartContainer for \"3d725dc835be641759509dc86adb04e2527d181b567b5caa99155e9feebc74e4\"" Jul 7 00:17:05.182712 containerd[1580]: time="2025-07-07T00:17:05.182653424Z" level=info msg="connecting to shim 3d725dc835be641759509dc86adb04e2527d181b567b5caa99155e9feebc74e4" 
address="unix:///run/containerd/s/6584f662664c3e1298ef3841270b380b8421490a68ce7e13843f0fb7a3630d5b" protocol=ttrpc version=3 Jul 7 00:17:05.205317 kubelet[2405]: W0707 00:17:05.204725 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Jul 7 00:17:05.205317 kubelet[2405]: E0707 00:17:05.204783 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:17:05.221089 containerd[1580]: time="2025-07-07T00:17:05.221023501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,Uid:aeb626248a62e0352f07d4b73a3bde7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ecc731e303f0f0b640c5d393eb154e7f4b98fc77b5677d34d82f7a7e026d1ec\"" Jul 7 00:17:05.223822 kubelet[2405]: E0707 00:17:05.223761 2405 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-21291" Jul 7 00:17:05.226145 containerd[1580]: time="2025-07-07T00:17:05.226103727Z" level=info msg="CreateContainer within sandbox \"4ecc731e303f0f0b640c5d393eb154e7f4b98fc77b5677d34d82f7a7e026d1ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:17:05.241089 containerd[1580]: time="2025-07-07T00:17:05.241030172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal,Uid:cb0c8f67b656d8dbe194b8654f83baf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d46ae6e15b00a41626a40b7f885c6dc0b2e6e3ff475f3e3d043109a001877884\"" Jul 7 00:17:05.242970 containerd[1580]: time="2025-07-07T00:17:05.242932147Z" level=info msg="Container 3d26e13aefce5d87f3532b9919f67895dce91098c8634fb78853be4209a810c7: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:05.243549 kubelet[2405]: E0707 00:17:05.243056 2405 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flat" Jul 7 00:17:05.243520 systemd[1]: Started cri-containerd-3d725dc835be641759509dc86adb04e2527d181b567b5caa99155e9feebc74e4.scope - libcontainer container 3d725dc835be641759509dc86adb04e2527d181b567b5caa99155e9feebc74e4. 
Jul 7 00:17:05.246283 containerd[1580]: time="2025-07-07T00:17:05.246019851Z" level=info msg="CreateContainer within sandbox \"d46ae6e15b00a41626a40b7f885c6dc0b2e6e3ff475f3e3d043109a001877884\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:17:05.259106 containerd[1580]: time="2025-07-07T00:17:05.259037769Z" level=info msg="CreateContainer within sandbox \"4ecc731e303f0f0b640c5d393eb154e7f4b98fc77b5677d34d82f7a7e026d1ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d26e13aefce5d87f3532b9919f67895dce91098c8634fb78853be4209a810c7\"" Jul 7 00:17:05.260381 containerd[1580]: time="2025-07-07T00:17:05.260274942Z" level=info msg="Container c0f7061838fdfbb4d0a51b8d44a4f9bf21e1b59881a122c8f9c5fd58ac786543: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:05.262948 containerd[1580]: time="2025-07-07T00:17:05.262915977Z" level=info msg="StartContainer for \"3d26e13aefce5d87f3532b9919f67895dce91098c8634fb78853be4209a810c7\"" Jul 7 00:17:05.264982 containerd[1580]: time="2025-07-07T00:17:05.264946192Z" level=info msg="connecting to shim 3d26e13aefce5d87f3532b9919f67895dce91098c8634fb78853be4209a810c7" address="unix:///run/containerd/s/fd6eb086c88cbb57f3fadbf8c14ea72e12b9e16294d806240857d56ba0ba386a" protocol=ttrpc version=3 Jul 7 00:17:05.274628 containerd[1580]: time="2025-07-07T00:17:05.274416458Z" level=info msg="CreateContainer within sandbox \"d46ae6e15b00a41626a40b7f885c6dc0b2e6e3ff475f3e3d043109a001877884\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0f7061838fdfbb4d0a51b8d44a4f9bf21e1b59881a122c8f9c5fd58ac786543\"" Jul 7 00:17:05.276202 containerd[1580]: time="2025-07-07T00:17:05.275908624Z" level=info msg="StartContainer for \"c0f7061838fdfbb4d0a51b8d44a4f9bf21e1b59881a122c8f9c5fd58ac786543\"" Jul 7 00:17:05.280462 containerd[1580]: time="2025-07-07T00:17:05.280398946Z" level=info msg="connecting to shim c0f7061838fdfbb4d0a51b8d44a4f9bf21e1b59881a122c8f9c5fd58ac786543" address="unix:///run/containerd/s/cf91270549fb63eb1cc3f3285e5118d941e5f97050f1b1b5cd856af174c2dce6" protocol=ttrpc version=3 Jul 7 00:17:05.307486 systemd[1]: Started cri-containerd-3d26e13aefce5d87f3532b9919f67895dce91098c8634fb78853be4209a810c7.scope - libcontainer container 3d26e13aefce5d87f3532b9919f67895dce91098c8634fb78853be4209a810c7. Jul 7 00:17:05.343025 kubelet[2405]: W0707 00:17:05.342898 2405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.74:6443: connect: connection refused Jul 7 00:17:05.343331 kubelet[2405]: E0707 00:17:05.343052 2405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:17:05.362498 systemd[1]: Started cri-containerd-c0f7061838fdfbb4d0a51b8d44a4f9bf21e1b59881a122c8f9c5fd58ac786543.scope - libcontainer container c0f7061838fdfbb4d0a51b8d44a4f9bf21e1b59881a122c8f9c5fd58ac786543. 
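At this point each control-plane pod has a sandbox plus one container created inside it; the long hex identifiers in the log are the CRI container IDs, and the matching cri-containerd-<id>.scope units show up in systemd. From the node they can be correlated with crictl, assuming it is installed (illustrative):

    crictl pods --namespace kube-system
    crictl ps -a | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'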
Jul 7 00:17:05.394306 containerd[1580]: time="2025-07-07T00:17:05.394224712Z" level=info msg="StartContainer for \"3d725dc835be641759509dc86adb04e2527d181b567b5caa99155e9feebc74e4\" returns successfully" Jul 7 00:17:05.471364 containerd[1580]: time="2025-07-07T00:17:05.470106120Z" level=info msg="StartContainer for \"3d26e13aefce5d87f3532b9919f67895dce91098c8634fb78853be4209a810c7\" returns successfully" Jul 7 00:17:05.515367 containerd[1580]: time="2025-07-07T00:17:05.514307388Z" level=info msg="StartContainer for \"c0f7061838fdfbb4d0a51b8d44a4f9bf21e1b59881a122c8f9c5fd58ac786543\" returns successfully" Jul 7 00:17:05.905275 kubelet[2405]: I0707 00:17:05.903907 2405 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:06.357925 kubelet[2405]: E0707 00:17:06.357563 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:06.361643 kubelet[2405]: E0707 00:17:06.361392 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:06.367664 kubelet[2405]: E0707 00:17:06.367630 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:07.367011 kubelet[2405]: E0707 00:17:07.366957 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:07.369849 kubelet[2405]: E0707 00:17:07.369808 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:07.371012 kubelet[2405]: E0707 00:17:07.370969 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:08.372209 kubelet[2405]: E0707 00:17:08.372153 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:08.373708 kubelet[2405]: E0707 00:17:08.373672 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:09.375021 kubelet[2405]: E0707 00:17:09.374959 2405 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" 
node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:10.849198 kubelet[2405]: E0707 00:17:10.849137 2405 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:10.976268 kubelet[2405]: I0707 00:17:10.975919 2405 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:10.976268 kubelet[2405]: E0707 00:17:10.975980 2405 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\": node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" Jul 7 00:17:11.020284 kubelet[2405]: I0707 00:17:11.020225 2405 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:11.039642 kubelet[2405]: E0707 00:17:11.039590 2405 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:11.039642 kubelet[2405]: I0707 00:17:11.039641 2405 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:11.047278 kubelet[2405]: E0707 00:17:11.047220 2405 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:11.047480 kubelet[2405]: I0707 00:17:11.047306 2405 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:11.050360 kubelet[2405]: E0707 00:17:11.050318 2405 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:11.189860 kubelet[2405]: I0707 00:17:11.189693 2405 apiserver.go:52] "Watching apiserver" Jul 7 00:17:11.217226 kubelet[2405]: I0707 00:17:11.217180 2405 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:17:12.287669 kubelet[2405]: I0707 00:17:12.287621 2405 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:12.295301 kubelet[2405]: W0707 00:17:12.295210 2405 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 7 00:17:12.924929 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-9.scope)... Jul 7 00:17:12.924953 systemd[1]: Reloading... 
Jul 7 00:17:13.079495 zram_generator::config[2718]: No configuration found. Jul 7 00:17:13.214883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:17:13.409437 systemd[1]: Reloading finished in 483 ms. Jul 7 00:17:13.446336 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:17:13.464232 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:17:13.464691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:17:13.464799 systemd[1]: kubelet.service: Consumed 1.331s CPU time, 130.7M memory peak. Jul 7 00:17:13.467925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:17:13.580510 update_engine[1512]: I20250707 00:17:13.580428 1512 update_attempter.cc:509] Updating boot flags... Jul 7 00:17:13.871056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:17:13.884031 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:17:13.982655 kubelet[2782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:17:13.982655 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:17:13.982655 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:17:13.982655 kubelet[2782]: I0707 00:17:13.982474 2782 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:17:14.002112 kubelet[2782]: I0707 00:17:13.999706 2782 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 00:17:14.002112 kubelet[2782]: I0707 00:17:13.999759 2782 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:17:14.002112 kubelet[2782]: I0707 00:17:14.001226 2782 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 00:17:14.010956 kubelet[2782]: I0707 00:17:14.005183 2782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:17:14.018236 kubelet[2782]: I0707 00:17:14.018191 2782 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:17:14.027863 kubelet[2782]: I0707 00:17:14.027520 2782 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:17:14.032756 kubelet[2782]: I0707 00:17:14.032689 2782 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:17:14.033792 kubelet[2782]: I0707 00:17:14.033322 2782 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:17:14.033792 kubelet[2782]: I0707 00:17:14.033370 2782 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:17:14.033792 kubelet[2782]: I0707 00:17:14.033652 2782 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:17:14.033792 kubelet[2782]: I0707 00:17:14.033670 2782 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 00:17:14.034138 kubelet[2782]: I0707 00:17:14.033742 2782 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:17:14.034435 kubelet[2782]: I0707 00:17:14.034415 2782 kubelet.go:446] "Attempting to sync node with API server" Jul 7 00:17:14.034583 kubelet[2782]: I0707 00:17:14.034569 2782 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:17:14.034694 kubelet[2782]: I0707 00:17:14.034682 2782 kubelet.go:352] "Adding apiserver pod source" Jul 7 00:17:14.034789 kubelet[2782]: I0707 00:17:14.034776 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:17:14.041264 kubelet[2782]: I0707 00:17:14.041209 2782 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:17:14.044550 kubelet[2782]: I0707 00:17:14.044472 2782 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:17:14.048858 kubelet[2782]: I0707 00:17:14.048830 2782 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:17:14.048966 kubelet[2782]: I0707 00:17:14.048920 2782 server.go:1287] "Started kubelet" Jul 7 00:17:14.051267 kubelet[2782]: I0707 00:17:14.049066 2782 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:17:14.053265 kubelet[2782]: I0707 00:17:14.052162 
2782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:17:14.072986 kubelet[2782]: I0707 00:17:14.072825 2782 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:17:14.072986 kubelet[2782]: I0707 00:17:14.063646 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:17:14.075356 kubelet[2782]: I0707 00:17:14.057655 2782 server.go:479] "Adding debug handlers to kubelet server" Jul 7 00:17:14.090226 kubelet[2782]: I0707 00:17:14.066837 2782 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:17:14.090609 kubelet[2782]: I0707 00:17:14.090591 2782 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:17:14.091047 kubelet[2782]: E0707 00:17:14.091021 2782 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" not found" Jul 7 00:17:14.092365 kubelet[2782]: I0707 00:17:14.092337 2782 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:17:14.092567 kubelet[2782]: I0707 00:17:14.092549 2782 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:17:14.120071 kubelet[2782]: E0707 00:17:14.120004 2782 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:17:14.123321 kubelet[2782]: I0707 00:17:14.123201 2782 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:17:14.123321 kubelet[2782]: I0707 00:17:14.123228 2782 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:17:14.123535 kubelet[2782]: I0707 00:17:14.123368 2782 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:17:14.141710 kubelet[2782]: I0707 00:17:14.141549 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:17:14.143541 kubelet[2782]: I0707 00:17:14.143512 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:17:14.143712 kubelet[2782]: I0707 00:17:14.143699 2782 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 00:17:14.143813 kubelet[2782]: I0707 00:17:14.143801 2782 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 00:17:14.143886 kubelet[2782]: I0707 00:17:14.143876 2782 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 00:17:14.144110 kubelet[2782]: E0707 00:17:14.144041 2782 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:17:14.222448 kubelet[2782]: I0707 00:17:14.222380 2782 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:17:14.222448 kubelet[2782]: I0707 00:17:14.222408 2782 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:17:14.222448 kubelet[2782]: I0707 00:17:14.222435 2782 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:17:14.222941 kubelet[2782]: I0707 00:17:14.222742 2782 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:17:14.222941 kubelet[2782]: I0707 00:17:14.222774 2782 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:17:14.222941 kubelet[2782]: I0707 00:17:14.222804 2782 policy_none.go:49] "None policy: Start" Jul 7 00:17:14.222941 kubelet[2782]: I0707 00:17:14.222819 2782 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:17:14.222941 kubelet[2782]: I0707 00:17:14.222835 2782 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:17:14.224784 kubelet[2782]: I0707 00:17:14.223065 2782 state_mem.go:75] "Updated machine memory state" Jul 7 00:17:14.238449 kubelet[2782]: I0707 00:17:14.236962 2782 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:17:14.238607 kubelet[2782]: I0707 00:17:14.238490 2782 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:17:14.238901 kubelet[2782]: I0707 00:17:14.238843 2782 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:17:14.244674 kubelet[2782]: I0707 00:17:14.242932 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:17:14.245847 kubelet[2782]: I0707 00:17:14.245817 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.247636 kubelet[2782]: I0707 00:17:14.247470 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.248437 kubelet[2782]: I0707 00:17:14.248394 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.250797 kubelet[2782]: E0707 00:17:14.250765 2782 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:17:14.261402 kubelet[2782]: W0707 00:17:14.260818 2782 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 7 00:17:14.265641 kubelet[2782]: W0707 00:17:14.265522 2782 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 7 00:17:14.274785 kubelet[2782]: W0707 00:17:14.274609 2782 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 7 00:17:14.274785 kubelet[2782]: E0707 00:17:14.274690 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.363719 kubelet[2782]: I0707 00:17:14.363221 2782 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.376421 kubelet[2782]: I0707 00:17:14.376173 2782 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.376585 kubelet[2782]: I0707 00:17:14.376474 2782 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.395518 kubelet[2782]: I0707 00:17:14.395069 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c63bd2f548281cac403084a2ef9b3da-ca-certs\") pod \"kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"8c63bd2f548281cac403084a2ef9b3da\") " pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.395518 kubelet[2782]: I0707 00:17:14.395135 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c63bd2f548281cac403084a2ef9b3da-k8s-certs\") pod \"kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"8c63bd2f548281cac403084a2ef9b3da\") " pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.395518 kubelet[2782]: I0707 00:17:14.395172 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.395518 kubelet[2782]: I0707 00:17:14.395205 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aeb626248a62e0352f07d4b73a3bde7b-kubeconfig\") pod \"kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: 
\"aeb626248a62e0352f07d4b73a3bde7b\") " pod="kube-system/kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.395864 kubelet[2782]: I0707 00:17:14.395271 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c63bd2f548281cac403084a2ef9b3da-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"8c63bd2f548281cac403084a2ef9b3da\") " pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.395864 kubelet[2782]: I0707 00:17:14.395303 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-ca-certs\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.395864 kubelet[2782]: I0707 00:17:14.395344 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.395864 kubelet[2782]: I0707 00:17:14.395375 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:14.396070 kubelet[2782]: I0707 00:17:14.395407 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb0c8f67b656d8dbe194b8654f83baf2-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" (UID: \"cb0c8f67b656d8dbe194b8654f83baf2\") " pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:15.044993 kubelet[2782]: I0707 00:17:15.043848 2782 apiserver.go:52] "Watching apiserver" Jul 7 00:17:15.092733 kubelet[2782]: I0707 00:17:15.092496 2782 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:17:15.193443 kubelet[2782]: I0707 00:17:15.193408 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:15.205322 kubelet[2782]: W0707 00:17:15.205272 2782 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 7 00:17:15.207412 kubelet[2782]: E0707 00:17:15.207381 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:15.243875 kubelet[2782]: I0707 00:17:15.243464 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" podStartSLOduration=1.24343937 podStartE2EDuration="1.24343937s" podCreationTimestamp="2025-07-07 00:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:17:15.22902444 +0000 UTC m=+1.332083852" watchObservedRunningTime="2025-07-07 00:17:15.24343937 +0000 UTC m=+1.346498789" Jul 7 00:17:15.259278 kubelet[2782]: I0707 00:17:15.258470 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" podStartSLOduration=1.258447764 podStartE2EDuration="1.258447764s" podCreationTimestamp="2025-07-07 00:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:17:15.244595606 +0000 UTC m=+1.347655024" watchObservedRunningTime="2025-07-07 00:17:15.258447764 +0000 UTC m=+1.361507183" Jul 7 00:17:15.259816 kubelet[2782]: I0707 00:17:15.259702 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" podStartSLOduration=3.259683396 podStartE2EDuration="3.259683396s" podCreationTimestamp="2025-07-07 00:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:17:15.257352078 +0000 UTC m=+1.360411495" watchObservedRunningTime="2025-07-07 00:17:15.259683396 +0000 UTC m=+1.362742814" Jul 7 00:17:18.486948 kubelet[2782]: I0707 00:17:18.486883 2782 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:17:18.488809 containerd[1580]: time="2025-07-07T00:17:18.487645989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:17:18.489473 kubelet[2782]: I0707 00:17:18.489427 2782 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:17:19.165891 systemd[1]: Created slice kubepods-besteffort-poda90bd969_693b_47e3_8214_5eb48d96946c.slice - libcontainer container kubepods-besteffort-poda90bd969_693b_47e3_8214_5eb48d96946c.slice. 
Jul 7 00:17:19.229866 kubelet[2782]: I0707 00:17:19.229488 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a90bd969-693b-47e3-8214-5eb48d96946c-kube-proxy\") pod \"kube-proxy-w8pf4\" (UID: \"a90bd969-693b-47e3-8214-5eb48d96946c\") " pod="kube-system/kube-proxy-w8pf4" Jul 7 00:17:19.230156 kubelet[2782]: I0707 00:17:19.229983 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a90bd969-693b-47e3-8214-5eb48d96946c-xtables-lock\") pod \"kube-proxy-w8pf4\" (UID: \"a90bd969-693b-47e3-8214-5eb48d96946c\") " pod="kube-system/kube-proxy-w8pf4" Jul 7 00:17:19.230156 kubelet[2782]: I0707 00:17:19.230060 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htt9s\" (UniqueName: \"kubernetes.io/projected/a90bd969-693b-47e3-8214-5eb48d96946c-kube-api-access-htt9s\") pod \"kube-proxy-w8pf4\" (UID: \"a90bd969-693b-47e3-8214-5eb48d96946c\") " pod="kube-system/kube-proxy-w8pf4" Jul 7 00:17:19.230156 kubelet[2782]: I0707 00:17:19.230099 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a90bd969-693b-47e3-8214-5eb48d96946c-lib-modules\") pod \"kube-proxy-w8pf4\" (UID: \"a90bd969-693b-47e3-8214-5eb48d96946c\") " pod="kube-system/kube-proxy-w8pf4" Jul 7 00:17:19.476662 containerd[1580]: time="2025-07-07T00:17:19.476319971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8pf4,Uid:a90bd969-693b-47e3-8214-5eb48d96946c,Namespace:kube-system,Attempt:0,}" Jul 7 00:17:19.519541 containerd[1580]: time="2025-07-07T00:17:19.519373109Z" level=info msg="connecting to shim 6f55498bb9aa275f79ac49da492457682320515760a4de0a47f59c8e93c9d169" address="unix:///run/containerd/s/0878e26ac25c322c60fdc6c96947093d1f27d7db188dd0ac37ab43286948c574" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:19.576805 systemd[1]: Started cri-containerd-6f55498bb9aa275f79ac49da492457682320515760a4de0a47f59c8e93c9d169.scope - libcontainer container 6f55498bb9aa275f79ac49da492457682320515760a4de0a47f59c8e93c9d169. Jul 7 00:17:19.631299 systemd[1]: Created slice kubepods-besteffort-pod7dd0f1a4_90aa_4079_9d59_bb88f0b6b7d9.slice - libcontainer container kubepods-besteffort-pod7dd0f1a4_90aa_4079_9d59_bb88f0b6b7d9.slice. 
Jul 7 00:17:19.632777 kubelet[2782]: I0707 00:17:19.631979 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdkp\" (UniqueName: \"kubernetes.io/projected/7dd0f1a4-90aa-4079-9d59-bb88f0b6b7d9-kube-api-access-rxdkp\") pod \"tigera-operator-747864d56d-vzvrm\" (UID: \"7dd0f1a4-90aa-4079-9d59-bb88f0b6b7d9\") " pod="tigera-operator/tigera-operator-747864d56d-vzvrm" Jul 7 00:17:19.632777 kubelet[2782]: I0707 00:17:19.632039 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7dd0f1a4-90aa-4079-9d59-bb88f0b6b7d9-var-lib-calico\") pod \"tigera-operator-747864d56d-vzvrm\" (UID: \"7dd0f1a4-90aa-4079-9d59-bb88f0b6b7d9\") " pod="tigera-operator/tigera-operator-747864d56d-vzvrm" Jul 7 00:17:19.665980 containerd[1580]: time="2025-07-07T00:17:19.665934163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8pf4,Uid:a90bd969-693b-47e3-8214-5eb48d96946c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f55498bb9aa275f79ac49da492457682320515760a4de0a47f59c8e93c9d169\"" Jul 7 00:17:19.670788 containerd[1580]: time="2025-07-07T00:17:19.670746840Z" level=info msg="CreateContainer within sandbox \"6f55498bb9aa275f79ac49da492457682320515760a4de0a47f59c8e93c9d169\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:17:19.685064 containerd[1580]: time="2025-07-07T00:17:19.685019682Z" level=info msg="Container 6faced993cf599b697880e0383d766d8aaf3977c07a41ea3d61d2d1bdfb88c0a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:19.694580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430769823.mount: Deactivated successfully. Jul 7 00:17:19.699226 containerd[1580]: time="2025-07-07T00:17:19.699160378Z" level=info msg="CreateContainer within sandbox \"6f55498bb9aa275f79ac49da492457682320515760a4de0a47f59c8e93c9d169\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6faced993cf599b697880e0383d766d8aaf3977c07a41ea3d61d2d1bdfb88c0a\"" Jul 7 00:17:19.700185 containerd[1580]: time="2025-07-07T00:17:19.700155257Z" level=info msg="StartContainer for \"6faced993cf599b697880e0383d766d8aaf3977c07a41ea3d61d2d1bdfb88c0a\"" Jul 7 00:17:19.703272 containerd[1580]: time="2025-07-07T00:17:19.703149990Z" level=info msg="connecting to shim 6faced993cf599b697880e0383d766d8aaf3977c07a41ea3d61d2d1bdfb88c0a" address="unix:///run/containerd/s/0878e26ac25c322c60fdc6c96947093d1f27d7db188dd0ac37ab43286948c574" protocol=ttrpc version=3 Jul 7 00:17:19.730606 systemd[1]: Started cri-containerd-6faced993cf599b697880e0383d766d8aaf3977c07a41ea3d61d2d1bdfb88c0a.scope - libcontainer container 6faced993cf599b697880e0383d766d8aaf3977c07a41ea3d61d2d1bdfb88c0a. 
Jul 7 00:17:19.799457 containerd[1580]: time="2025-07-07T00:17:19.799412857Z" level=info msg="StartContainer for \"6faced993cf599b697880e0383d766d8aaf3977c07a41ea3d61d2d1bdfb88c0a\" returns successfully" Jul 7 00:17:19.940546 containerd[1580]: time="2025-07-07T00:17:19.940473493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-vzvrm,Uid:7dd0f1a4-90aa-4079-9d59-bb88f0b6b7d9,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:17:19.970905 containerd[1580]: time="2025-07-07T00:17:19.970456453Z" level=info msg="connecting to shim 035b23113f2f565fcea80b035d8af9c9f52868b70f8f2aaf7d603c8e69f9ec51" address="unix:///run/containerd/s/ffedbb57dad441bd6370a4e243b2d3203c3fb3b503c4c0302d86d1a891958e53" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:20.019464 systemd[1]: Started cri-containerd-035b23113f2f565fcea80b035d8af9c9f52868b70f8f2aaf7d603c8e69f9ec51.scope - libcontainer container 035b23113f2f565fcea80b035d8af9c9f52868b70f8f2aaf7d603c8e69f9ec51. Jul 7 00:17:20.084910 containerd[1580]: time="2025-07-07T00:17:20.084847103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-vzvrm,Uid:7dd0f1a4-90aa-4079-9d59-bb88f0b6b7d9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"035b23113f2f565fcea80b035d8af9c9f52868b70f8f2aaf7d603c8e69f9ec51\"" Jul 7 00:17:20.088740 containerd[1580]: time="2025-07-07T00:17:20.088675638Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:17:21.134758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318148526.mount: Deactivated successfully. Jul 7 00:17:22.120102 containerd[1580]: time="2025-07-07T00:17:22.120030133Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:22.121440 containerd[1580]: time="2025-07-07T00:17:22.121384347Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:17:22.122948 containerd[1580]: time="2025-07-07T00:17:22.122876283Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:22.125912 containerd[1580]: time="2025-07-07T00:17:22.125847414Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:22.127315 containerd[1580]: time="2025-07-07T00:17:22.126742570Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.038021777s" Jul 7 00:17:22.127315 containerd[1580]: time="2025-07-07T00:17:22.126788062Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:17:22.130199 containerd[1580]: time="2025-07-07T00:17:22.130155285Z" level=info msg="CreateContainer within sandbox \"035b23113f2f565fcea80b035d8af9c9f52868b70f8f2aaf7d603c8e69f9ec51\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:17:22.142277 containerd[1580]: time="2025-07-07T00:17:22.141368835Z" level=info 
msg="Container 424d7153f8099acea85c3518a92f8703f74c7767f03b3c688714dbd1f092ec9f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:22.154061 containerd[1580]: time="2025-07-07T00:17:22.154011460Z" level=info msg="CreateContainer within sandbox \"035b23113f2f565fcea80b035d8af9c9f52868b70f8f2aaf7d603c8e69f9ec51\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"424d7153f8099acea85c3518a92f8703f74c7767f03b3c688714dbd1f092ec9f\"" Jul 7 00:17:22.154985 containerd[1580]: time="2025-07-07T00:17:22.154956366Z" level=info msg="StartContainer for \"424d7153f8099acea85c3518a92f8703f74c7767f03b3c688714dbd1f092ec9f\"" Jul 7 00:17:22.156908 containerd[1580]: time="2025-07-07T00:17:22.156825218Z" level=info msg="connecting to shim 424d7153f8099acea85c3518a92f8703f74c7767f03b3c688714dbd1f092ec9f" address="unix:///run/containerd/s/ffedbb57dad441bd6370a4e243b2d3203c3fb3b503c4c0302d86d1a891958e53" protocol=ttrpc version=3 Jul 7 00:17:22.187462 systemd[1]: Started cri-containerd-424d7153f8099acea85c3518a92f8703f74c7767f03b3c688714dbd1f092ec9f.scope - libcontainer container 424d7153f8099acea85c3518a92f8703f74c7767f03b3c688714dbd1f092ec9f. Jul 7 00:17:22.234068 containerd[1580]: time="2025-07-07T00:17:22.233982017Z" level=info msg="StartContainer for \"424d7153f8099acea85c3518a92f8703f74c7767f03b3c688714dbd1f092ec9f\" returns successfully" Jul 7 00:17:22.868748 kubelet[2782]: I0707 00:17:22.868532 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w8pf4" podStartSLOduration=3.868509137 podStartE2EDuration="3.868509137s" podCreationTimestamp="2025-07-07 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:17:20.226948496 +0000 UTC m=+6.330007915" watchObservedRunningTime="2025-07-07 00:17:22.868509137 +0000 UTC m=+8.971568556" Jul 7 00:17:23.240497 kubelet[2782]: I0707 00:17:23.240053 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-vzvrm" podStartSLOduration=2.198445324 podStartE2EDuration="4.240029772s" podCreationTimestamp="2025-07-07 00:17:19 +0000 UTC" firstStartedPulling="2025-07-07 00:17:20.086537859 +0000 UTC m=+6.189597268" lastFinishedPulling="2025-07-07 00:17:22.128122312 +0000 UTC m=+8.231181716" observedRunningTime="2025-07-07 00:17:23.239805044 +0000 UTC m=+9.342864463" watchObservedRunningTime="2025-07-07 00:17:23.240029772 +0000 UTC m=+9.343089196" Jul 7 00:17:29.330860 sudo[1864]: pam_unix(sudo:session): session closed for user root Jul 7 00:17:29.374515 sshd[1863]: Connection closed by 139.178.68.195 port 59036 Jul 7 00:17:29.375507 sshd-session[1861]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:29.386919 systemd[1]: sshd@8-10.128.0.74:22-139.178.68.195:59036.service: Deactivated successfully. Jul 7 00:17:29.394018 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:17:29.394769 systemd[1]: session-9.scope: Consumed 5.506s CPU time, 229.5M memory peak. Jul 7 00:17:29.400473 systemd-logind[1511]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:17:29.404564 systemd-logind[1511]: Removed session 9. Jul 7 00:17:34.968407 systemd[1]: Created slice kubepods-besteffort-podf841b166_7a31_436a_810f_aaa6ffb7089a.slice - libcontainer container kubepods-besteffort-podf841b166_7a31_436a_810f_aaa6ffb7089a.slice. 
Jul 7 00:17:35.042677 kubelet[2782]: I0707 00:17:35.042613 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f841b166-7a31-436a-810f-aaa6ffb7089a-typha-certs\") pod \"calico-typha-796df4c6b5-z5bcm\" (UID: \"f841b166-7a31-436a-810f-aaa6ffb7089a\") " pod="calico-system/calico-typha-796df4c6b5-z5bcm" Jul 7 00:17:35.043298 kubelet[2782]: I0707 00:17:35.042690 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsb4\" (UniqueName: \"kubernetes.io/projected/f841b166-7a31-436a-810f-aaa6ffb7089a-kube-api-access-9tsb4\") pod \"calico-typha-796df4c6b5-z5bcm\" (UID: \"f841b166-7a31-436a-810f-aaa6ffb7089a\") " pod="calico-system/calico-typha-796df4c6b5-z5bcm" Jul 7 00:17:35.043298 kubelet[2782]: I0707 00:17:35.042724 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f841b166-7a31-436a-810f-aaa6ffb7089a-tigera-ca-bundle\") pod \"calico-typha-796df4c6b5-z5bcm\" (UID: \"f841b166-7a31-436a-810f-aaa6ffb7089a\") " pod="calico-system/calico-typha-796df4c6b5-z5bcm" Jul 7 00:17:35.279475 systemd[1]: Created slice kubepods-besteffort-pod64a6dc4d_2206_43fc_9bc6_b313a5e04259.slice - libcontainer container kubepods-besteffort-pod64a6dc4d_2206_43fc_9bc6_b313a5e04259.slice. Jul 7 00:17:35.282376 containerd[1580]: time="2025-07-07T00:17:35.281852202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796df4c6b5-z5bcm,Uid:f841b166-7a31-436a-810f-aaa6ffb7089a,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:35.328278 containerd[1580]: time="2025-07-07T00:17:35.327988564Z" level=info msg="connecting to shim ce15a9cc7312b6ab54604323371661131caf343bc8701d2980b8ec1ee85b8723" address="unix:///run/containerd/s/abb5b730400982935fccc0d570a1831aab5be31ec7d90246e2efaa5da559dd23" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:35.346714 kubelet[2782]: I0707 00:17:35.345859 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-lib-modules\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.346714 kubelet[2782]: I0707 00:17:35.345925 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/64a6dc4d-2206-43fc-9bc6-b313a5e04259-node-certs\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.346714 kubelet[2782]: I0707 00:17:35.345960 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-var-lib-calico\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.346714 kubelet[2782]: I0707 00:17:35.345990 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-var-run-calico\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 
00:17:35.346714 kubelet[2782]: I0707 00:17:35.346023 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-cni-bin-dir\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.347104 kubelet[2782]: I0707 00:17:35.346064 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-flexvol-driver-host\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.347104 kubelet[2782]: I0707 00:17:35.346096 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-policysync\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.347104 kubelet[2782]: I0707 00:17:35.346125 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-cni-net-dir\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.347104 kubelet[2782]: I0707 00:17:35.346153 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvd69\" (UniqueName: \"kubernetes.io/projected/64a6dc4d-2206-43fc-9bc6-b313a5e04259-kube-api-access-fvd69\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.347104 kubelet[2782]: I0707 00:17:35.346188 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-cni-log-dir\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.350427 kubelet[2782]: I0707 00:17:35.346224 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64a6dc4d-2206-43fc-9bc6-b313a5e04259-tigera-ca-bundle\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.350427 kubelet[2782]: I0707 00:17:35.349533 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64a6dc4d-2206-43fc-9bc6-b313a5e04259-xtables-lock\") pod \"calico-node-vp26h\" (UID: \"64a6dc4d-2206-43fc-9bc6-b313a5e04259\") " pod="calico-system/calico-node-vp26h" Jul 7 00:17:35.385757 systemd[1]: Started cri-containerd-ce15a9cc7312b6ab54604323371661131caf343bc8701d2980b8ec1ee85b8723.scope - libcontainer container ce15a9cc7312b6ab54604323371661131caf343bc8701d2980b8ec1ee85b8723. 
Jul 7 00:17:35.460655 kubelet[2782]: E0707 00:17:35.460384 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.460655 kubelet[2782]: W0707 00:17:35.460414 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.460655 kubelet[2782]: E0707 00:17:35.460475 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.462405 kubelet[2782]: E0707 00:17:35.462148 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.462405 kubelet[2782]: W0707 00:17:35.462167 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.462405 kubelet[2782]: E0707 00:17:35.462192 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.464512 kubelet[2782]: E0707 00:17:35.464327 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.464512 kubelet[2782]: W0707 00:17:35.464348 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.464512 kubelet[2782]: E0707 00:17:35.464374 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.464932 kubelet[2782]: E0707 00:17:35.464914 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.472272 kubelet[2782]: W0707 00:17:35.469298 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.472272 kubelet[2782]: E0707 00:17:35.471352 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.472272 kubelet[2782]: W0707 00:17:35.471370 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.482387 kubelet[2782]: E0707 00:17:35.482355 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.483548 kubelet[2782]: E0707 00:17:35.482542 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.486309 kubelet[2782]: E0707 00:17:35.483478 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.486552 kubelet[2782]: W0707 00:17:35.486524 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.488203 kubelet[2782]: E0707 00:17:35.488119 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.488719 kubelet[2782]: W0707 00:17:35.488693 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.489288 kubelet[2782]: E0707 00:17:35.488487 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.490040 kubelet[2782]: E0707 00:17:35.489835 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.492476 kubelet[2782]: E0707 00:17:35.490618 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.492476 kubelet[2782]: W0707 00:17:35.491934 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.493903 kubelet[2782]: E0707 00:17:35.493569 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.493903 kubelet[2782]: W0707 00:17:35.493591 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.494758 kubelet[2782]: E0707 00:17:35.494616 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.494862 kubelet[2782]: W0707 00:17:35.494759 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.496143 kubelet[2782]: E0707 00:17:35.495187 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.496143 kubelet[2782]: E0707 00:17:35.495266 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.496143 kubelet[2782]: E0707 00:17:35.495287 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.497044 kubelet[2782]: E0707 00:17:35.497023 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.497170 kubelet[2782]: W0707 00:17:35.497152 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.497570 kubelet[2782]: E0707 00:17:35.497551 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.497704 kubelet[2782]: W0707 00:17:35.497686 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.498043 kubelet[2782]: E0707 00:17:35.498027 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.498156 kubelet[2782]: W0707 00:17:35.498134 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.498258 kubelet[2782]: E0707 00:17:35.498227 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.498685 kubelet[2782]: E0707 00:17:35.498668 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.498790 kubelet[2782]: W0707 00:17:35.498773 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.498891 kubelet[2782]: E0707 00:17:35.498875 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.498995 kubelet[2782]: E0707 00:17:35.498979 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.499411 kubelet[2782]: E0707 00:17:35.499392 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.499548 kubelet[2782]: W0707 00:17:35.499529 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.500512 kubelet[2782]: E0707 00:17:35.499640 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.511510 kubelet[2782]: E0707 00:17:35.509494 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.511723 containerd[1580]: time="2025-07-07T00:17:35.511682258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796df4c6b5-z5bcm,Uid:f841b166-7a31-436a-810f-aaa6ffb7089a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce15a9cc7312b6ab54604323371661131caf343bc8701d2980b8ec1ee85b8723\"" Jul 7 00:17:35.513671 kubelet[2782]: E0707 00:17:35.513638 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.514259 kubelet[2782]: W0707 00:17:35.514217 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.514418 kubelet[2782]: E0707 00:17:35.514398 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.516581 kubelet[2782]: E0707 00:17:35.516561 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.516710 kubelet[2782]: W0707 00:17:35.516691 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.516813 kubelet[2782]: E0707 00:17:35.516796 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.517503 kubelet[2782]: E0707 00:17:35.517452 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.517957 containerd[1580]: time="2025-07-07T00:17:35.517869659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:17:35.518108 kubelet[2782]: W0707 00:17:35.518084 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.518300 kubelet[2782]: E0707 00:17:35.518279 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.520046 kubelet[2782]: E0707 00:17:35.519964 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.520046 kubelet[2782]: W0707 00:17:35.519984 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.520046 kubelet[2782]: E0707 00:17:35.520013 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.546110 kubelet[2782]: E0707 00:17:35.545356 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xf2tr" podUID="a6155bdc-cda1-4bd4-8088-60a9cf521c10" Jul 7 00:17:35.590280 containerd[1580]: time="2025-07-07T00:17:35.590098506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vp26h,Uid:64a6dc4d-2206-43fc-9bc6-b313a5e04259,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:35.630375 containerd[1580]: time="2025-07-07T00:17:35.629496153Z" level=info msg="connecting to shim 422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0" address="unix:///run/containerd/s/ff5c97520314bcde03e0e33e82d03d03cccb417e91a126c1fa4a6f2c91b1544c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:35.631989 kubelet[2782]: E0707 00:17:35.631956 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.632519 kubelet[2782]: W0707 00:17:35.632284 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.632519 kubelet[2782]: E0707 00:17:35.632338 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.633600 kubelet[2782]: E0707 00:17:35.633507 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.633701 kubelet[2782]: W0707 00:17:35.633531 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.633701 kubelet[2782]: E0707 00:17:35.633654 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.635261 kubelet[2782]: E0707 00:17:35.634747 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.635261 kubelet[2782]: W0707 00:17:35.634780 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.635261 kubelet[2782]: E0707 00:17:35.634806 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.636640 kubelet[2782]: E0707 00:17:35.636613 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.636640 kubelet[2782]: W0707 00:17:35.636637 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.637404 kubelet[2782]: E0707 00:17:35.637328 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.640340 kubelet[2782]: E0707 00:17:35.640315 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.640634 kubelet[2782]: W0707 00:17:35.640338 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.640634 kubelet[2782]: E0707 00:17:35.640536 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.641104 kubelet[2782]: E0707 00:17:35.641073 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.641104 kubelet[2782]: W0707 00:17:35.641095 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.641929 kubelet[2782]: E0707 00:17:35.641114 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.642267 kubelet[2782]: E0707 00:17:35.642223 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.642267 kubelet[2782]: W0707 00:17:35.642264 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.642576 kubelet[2782]: E0707 00:17:35.642285 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.643703 kubelet[2782]: E0707 00:17:35.643333 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.643703 kubelet[2782]: W0707 00:17:35.643353 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.643703 kubelet[2782]: E0707 00:17:35.643372 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.644516 kubelet[2782]: E0707 00:17:35.644395 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.644516 kubelet[2782]: W0707 00:17:35.644495 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.644516 kubelet[2782]: E0707 00:17:35.644518 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.645714 kubelet[2782]: E0707 00:17:35.645670 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.645714 kubelet[2782]: W0707 00:17:35.645695 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.645714 kubelet[2782]: E0707 00:17:35.645714 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.646116 kubelet[2782]: E0707 00:17:35.646014 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.646116 kubelet[2782]: W0707 00:17:35.646029 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.646116 kubelet[2782]: E0707 00:17:35.646045 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.646966 kubelet[2782]: E0707 00:17:35.646941 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.646966 kubelet[2782]: W0707 00:17:35.646965 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.647352 kubelet[2782]: E0707 00:17:35.646985 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.649842 kubelet[2782]: E0707 00:17:35.649747 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.649842 kubelet[2782]: W0707 00:17:35.649767 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.649842 kubelet[2782]: E0707 00:17:35.649785 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.650621 kubelet[2782]: E0707 00:17:35.650518 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.650621 kubelet[2782]: W0707 00:17:35.650536 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.650621 kubelet[2782]: E0707 00:17:35.650553 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.651420 kubelet[2782]: E0707 00:17:35.651358 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.651420 kubelet[2782]: W0707 00:17:35.651376 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.651420 kubelet[2782]: E0707 00:17:35.651394 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.651834 kubelet[2782]: E0707 00:17:35.651784 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.651834 kubelet[2782]: W0707 00:17:35.651800 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.651834 kubelet[2782]: E0707 00:17:35.651817 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.652526 kubelet[2782]: E0707 00:17:35.652499 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.652526 kubelet[2782]: W0707 00:17:35.652521 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.652677 kubelet[2782]: E0707 00:17:35.652539 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.654463 kubelet[2782]: E0707 00:17:35.654434 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.654573 kubelet[2782]: W0707 00:17:35.654454 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.654573 kubelet[2782]: E0707 00:17:35.654518 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.656471 kubelet[2782]: E0707 00:17:35.656442 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.656471 kubelet[2782]: W0707 00:17:35.656468 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.656615 kubelet[2782]: E0707 00:17:35.656489 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.656847 kubelet[2782]: E0707 00:17:35.656820 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.656847 kubelet[2782]: W0707 00:17:35.656845 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.657005 kubelet[2782]: E0707 00:17:35.656862 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.657376 kubelet[2782]: E0707 00:17:35.657329 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.657376 kubelet[2782]: W0707 00:17:35.657352 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.657376 kubelet[2782]: E0707 00:17:35.657370 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.657602 kubelet[2782]: I0707 00:17:35.657419 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a6155bdc-cda1-4bd4-8088-60a9cf521c10-registration-dir\") pod \"csi-node-driver-xf2tr\" (UID: \"a6155bdc-cda1-4bd4-8088-60a9cf521c10\") " pod="calico-system/csi-node-driver-xf2tr" Jul 7 00:17:35.660646 kubelet[2782]: E0707 00:17:35.659356 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.660646 kubelet[2782]: W0707 00:17:35.659379 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.660646 kubelet[2782]: E0707 00:17:35.659398 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.660646 kubelet[2782]: E0707 00:17:35.660233 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.660646 kubelet[2782]: W0707 00:17:35.660296 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.660646 kubelet[2782]: E0707 00:17:35.660390 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.661601 kubelet[2782]: E0707 00:17:35.661482 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.661601 kubelet[2782]: W0707 00:17:35.661506 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.661601 kubelet[2782]: E0707 00:17:35.661523 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.661601 kubelet[2782]: I0707 00:17:35.661574 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6l8c\" (UniqueName: \"kubernetes.io/projected/a6155bdc-cda1-4bd4-8088-60a9cf521c10-kube-api-access-b6l8c\") pod \"csi-node-driver-xf2tr\" (UID: \"a6155bdc-cda1-4bd4-8088-60a9cf521c10\") " pod="calico-system/csi-node-driver-xf2tr" Jul 7 00:17:35.662533 kubelet[2782]: E0707 00:17:35.662503 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.662533 kubelet[2782]: W0707 00:17:35.662531 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.662689 kubelet[2782]: E0707 00:17:35.662570 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.663664 kubelet[2782]: E0707 00:17:35.663340 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.663664 kubelet[2782]: W0707 00:17:35.663372 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.663664 kubelet[2782]: E0707 00:17:35.663412 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.664901 kubelet[2782]: E0707 00:17:35.664875 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.664901 kubelet[2782]: W0707 00:17:35.664902 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.665027 kubelet[2782]: E0707 00:17:35.664919 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.665080 kubelet[2782]: I0707 00:17:35.665044 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6155bdc-cda1-4bd4-8088-60a9cf521c10-kubelet-dir\") pod \"csi-node-driver-xf2tr\" (UID: \"a6155bdc-cda1-4bd4-8088-60a9cf521c10\") " pod="calico-system/csi-node-driver-xf2tr" Jul 7 00:17:35.666379 kubelet[2782]: E0707 00:17:35.665422 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.666379 kubelet[2782]: W0707 00:17:35.665443 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.666536 kubelet[2782]: E0707 00:17:35.666477 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.667681 kubelet[2782]: E0707 00:17:35.667652 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.667681 kubelet[2782]: W0707 00:17:35.667677 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.667844 kubelet[2782]: E0707 00:17:35.667697 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.667844 kubelet[2782]: I0707 00:17:35.667726 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a6155bdc-cda1-4bd4-8088-60a9cf521c10-varrun\") pod \"csi-node-driver-xf2tr\" (UID: \"a6155bdc-cda1-4bd4-8088-60a9cf521c10\") " pod="calico-system/csi-node-driver-xf2tr" Jul 7 00:17:35.669590 kubelet[2782]: E0707 00:17:35.669564 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.669590 kubelet[2782]: W0707 00:17:35.669590 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.669757 kubelet[2782]: E0707 00:17:35.669609 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.670978 kubelet[2782]: E0707 00:17:35.670949 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.670978 kubelet[2782]: W0707 00:17:35.670974 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.671109 kubelet[2782]: E0707 00:17:35.670995 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.672068 kubelet[2782]: E0707 00:17:35.672045 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.672068 kubelet[2782]: W0707 00:17:35.672068 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.673149 kubelet[2782]: E0707 00:17:35.673120 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.674763 kubelet[2782]: E0707 00:17:35.674730 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.674763 kubelet[2782]: W0707 00:17:35.674751 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.674910 kubelet[2782]: E0707 00:17:35.674771 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.674910 kubelet[2782]: I0707 00:17:35.674802 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a6155bdc-cda1-4bd4-8088-60a9cf521c10-socket-dir\") pod \"csi-node-driver-xf2tr\" (UID: \"a6155bdc-cda1-4bd4-8088-60a9cf521c10\") " pod="calico-system/csi-node-driver-xf2tr" Jul 7 00:17:35.676610 kubelet[2782]: E0707 00:17:35.676558 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.676610 kubelet[2782]: W0707 00:17:35.676586 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.676874 kubelet[2782]: E0707 00:17:35.676719 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.678094 kubelet[2782]: E0707 00:17:35.677989 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.678094 kubelet[2782]: W0707 00:17:35.678014 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.678224 kubelet[2782]: E0707 00:17:35.678177 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.709047 systemd[1]: Started cri-containerd-422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0.scope - libcontainer container 422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0. Jul 7 00:17:35.757296 containerd[1580]: time="2025-07-07T00:17:35.757202907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vp26h,Uid:64a6dc4d-2206-43fc-9bc6-b313a5e04259,Namespace:calico-system,Attempt:0,} returns sandbox id \"422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0\"" Jul 7 00:17:35.776724 kubelet[2782]: E0707 00:17:35.776688 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.776940 kubelet[2782]: W0707 00:17:35.776782 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.777479 kubelet[2782]: E0707 00:17:35.776937 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.778107 kubelet[2782]: E0707 00:17:35.778059 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.778107 kubelet[2782]: W0707 00:17:35.778086 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.778555 kubelet[2782]: E0707 00:17:35.778150 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.778648 kubelet[2782]: E0707 00:17:35.778636 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.778704 kubelet[2782]: W0707 00:17:35.778662 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.779185 kubelet[2782]: E0707 00:17:35.778766 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.779185 kubelet[2782]: E0707 00:17:35.779160 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.779361 kubelet[2782]: W0707 00:17:35.779206 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.779361 kubelet[2782]: E0707 00:17:35.779297 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.779956 kubelet[2782]: E0707 00:17:35.779816 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.779956 kubelet[2782]: W0707 00:17:35.779838 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.779956 kubelet[2782]: E0707 00:17:35.779903 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.780813 kubelet[2782]: E0707 00:17:35.780700 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.780813 kubelet[2782]: W0707 00:17:35.780719 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.780813 kubelet[2782]: E0707 00:17:35.780737 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.782778 kubelet[2782]: E0707 00:17:35.781413 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.782778 kubelet[2782]: W0707 00:17:35.781433 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.782778 kubelet[2782]: E0707 00:17:35.781740 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.782778 kubelet[2782]: E0707 00:17:35.782477 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.782778 kubelet[2782]: W0707 00:17:35.782493 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.782778 kubelet[2782]: E0707 00:17:35.782632 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.783798 kubelet[2782]: E0707 00:17:35.783589 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.783798 kubelet[2782]: W0707 00:17:35.783611 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.784394 kubelet[2782]: E0707 00:17:35.784369 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.784394 kubelet[2782]: W0707 00:17:35.784393 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.785180 kubelet[2782]: E0707 00:17:35.784804 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.785180 kubelet[2782]: E0707 00:17:35.784965 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.785180 kubelet[2782]: E0707 00:17:35.785051 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.785180 kubelet[2782]: W0707 00:17:35.785063 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.785180 kubelet[2782]: E0707 00:17:35.785129 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.785955 kubelet[2782]: E0707 00:17:35.785843 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.785955 kubelet[2782]: W0707 00:17:35.785866 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.786102 kubelet[2782]: E0707 00:17:35.786027 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.786606 kubelet[2782]: E0707 00:17:35.786578 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.786606 kubelet[2782]: W0707 00:17:35.786602 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.786974 kubelet[2782]: E0707 00:17:35.786759 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.787878 kubelet[2782]: E0707 00:17:35.787292 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.787878 kubelet[2782]: W0707 00:17:35.787313 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.787878 kubelet[2782]: E0707 00:17:35.787393 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.788112 kubelet[2782]: E0707 00:17:35.788090 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.788172 kubelet[2782]: W0707 00:17:35.788119 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.788877 kubelet[2782]: E0707 00:17:35.788679 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.789157 kubelet[2782]: W0707 00:17:35.789001 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.789441 kubelet[2782]: E0707 00:17:35.788837 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.789441 kubelet[2782]: E0707 00:17:35.789331 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.790919 kubelet[2782]: E0707 00:17:35.790361 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.790919 kubelet[2782]: W0707 00:17:35.790382 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.792272 kubelet[2782]: E0707 00:17:35.791134 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.792272 kubelet[2782]: E0707 00:17:35.791180 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.792272 kubelet[2782]: W0707 00:17:35.791300 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.792272 kubelet[2782]: E0707 00:17:35.791457 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.792272 kubelet[2782]: E0707 00:17:35.791807 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.792272 kubelet[2782]: W0707 00:17:35.791838 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.792272 kubelet[2782]: E0707 00:17:35.791925 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.792272 kubelet[2782]: E0707 00:17:35.792165 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.792272 kubelet[2782]: W0707 00:17:35.792175 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.792272 kubelet[2782]: E0707 00:17:35.792268 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.792801 kubelet[2782]: E0707 00:17:35.792654 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.792801 kubelet[2782]: W0707 00:17:35.792668 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.792801 kubelet[2782]: E0707 00:17:35.792718 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.793149 kubelet[2782]: E0707 00:17:35.793129 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.793149 kubelet[2782]: W0707 00:17:35.793149 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.793302 kubelet[2782]: E0707 00:17:35.793170 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.793782 kubelet[2782]: E0707 00:17:35.793758 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.793782 kubelet[2782]: W0707 00:17:35.793780 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.793927 kubelet[2782]: E0707 00:17:35.793804 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:35.794460 kubelet[2782]: E0707 00:17:35.794423 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.794571 kubelet[2782]: W0707 00:17:35.794474 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.794571 kubelet[2782]: E0707 00:17:35.794559 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.795263 kubelet[2782]: E0707 00:17:35.795011 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.795263 kubelet[2782]: W0707 00:17:35.795033 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.795263 kubelet[2782]: E0707 00:17:35.795079 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:35.807336 kubelet[2782]: E0707 00:17:35.807003 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:35.807457 kubelet[2782]: W0707 00:17:35.807358 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:35.807457 kubelet[2782]: E0707 00:17:35.807385 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:36.563980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987252305.mount: Deactivated successfully. 
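The repeated driver-call failures above come from kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init". The executable is not installed yet, so the call produces no output, and an empty string cannot be unmarshalled as the JSON status object a FlexVolume driver is expected to print, hence the paired "executable file not found in $PATH" and "unexpected end of JSON input" messages. A minimal sketch of a driver answering only that "init" contract (hypothetical, not Calico's actual uds binary) could look like:

    // flexvol_init_sketch.go - hypothetical FlexVolume driver stub.
    // kubelet execs the driver with "init" while probing plugin directories
    // and expects a JSON status object on stdout; empty output triggers the
    // "unexpected end of JSON input" errors seen in the log above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Other FlexVolume calls (mount, unmount, ...) are not handled in this sketch.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }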
Jul 7 00:17:37.145689 kubelet[2782]: E0707 00:17:37.145632 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xf2tr" podUID="a6155bdc-cda1-4bd4-8088-60a9cf521c10" Jul 7 00:17:37.835908 containerd[1580]: time="2025-07-07T00:17:37.835842385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:37.837160 containerd[1580]: time="2025-07-07T00:17:37.837091131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:17:37.838497 containerd[1580]: time="2025-07-07T00:17:37.838431525Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:37.841157 containerd[1580]: time="2025-07-07T00:17:37.841098645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:37.842264 containerd[1580]: time="2025-07-07T00:17:37.841943030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.323993767s" Jul 7 00:17:37.842264 containerd[1580]: time="2025-07-07T00:17:37.841986021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:17:37.844093 containerd[1580]: time="2025-07-07T00:17:37.843808686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:17:37.870548 containerd[1580]: time="2025-07-07T00:17:37.870118589Z" level=info msg="CreateContainer within sandbox \"ce15a9cc7312b6ab54604323371661131caf343bc8701d2980b8ec1ee85b8723\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:17:37.883300 containerd[1580]: time="2025-07-07T00:17:37.881415650Z" level=info msg="Container 3bae906b4c313529e4d99861404432276c4175e80584d9a784e9095263c8267e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:37.899575 containerd[1580]: time="2025-07-07T00:17:37.899490521Z" level=info msg="CreateContainer within sandbox \"ce15a9cc7312b6ab54604323371661131caf343bc8701d2980b8ec1ee85b8723\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3bae906b4c313529e4d99861404432276c4175e80584d9a784e9095263c8267e\"" Jul 7 00:17:37.900666 containerd[1580]: time="2025-07-07T00:17:37.900618766Z" level=info msg="StartContainer for \"3bae906b4c313529e4d99861404432276c4175e80584d9a784e9095263c8267e\"" Jul 7 00:17:37.903793 containerd[1580]: time="2025-07-07T00:17:37.903745140Z" level=info msg="connecting to shim 3bae906b4c313529e4d99861404432276c4175e80584d9a784e9095263c8267e" address="unix:///run/containerd/s/abb5b730400982935fccc0d570a1831aab5be31ec7d90246e2efaa5da559dd23" protocol=ttrpc version=3 Jul 7 00:17:37.946847 systemd[1]: Started 
cri-containerd-3bae906b4c313529e4d99861404432276c4175e80584d9a784e9095263c8267e.scope - libcontainer container 3bae906b4c313529e4d99861404432276c4175e80584d9a784e9095263c8267e. Jul 7 00:17:38.024165 containerd[1580]: time="2025-07-07T00:17:38.024043498Z" level=info msg="StartContainer for \"3bae906b4c313529e4d99861404432276c4175e80584d9a784e9095263c8267e\" returns successfully" Jul 7 00:17:38.152233 kubelet[2782]: E0707 00:17:38.151581 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xf2tr" podUID="a6155bdc-cda1-4bd4-8088-60a9cf521c10" Jul 7 00:17:38.380393 kubelet[2782]: E0707 00:17:38.380340 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.380773 kubelet[2782]: W0707 00:17:38.380625 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.380773 kubelet[2782]: E0707 00:17:38.380670 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.381439 kubelet[2782]: E0707 00:17:38.381323 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.381439 kubelet[2782]: W0707 00:17:38.381345 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.381439 kubelet[2782]: E0707 00:17:38.381366 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.382028 kubelet[2782]: E0707 00:17:38.381967 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.382028 kubelet[2782]: W0707 00:17:38.381986 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.382313 kubelet[2782]: E0707 00:17:38.382005 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.382720 kubelet[2782]: E0707 00:17:38.382658 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.382720 kubelet[2782]: W0707 00:17:38.382676 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.382720 kubelet[2782]: E0707 00:17:38.382695 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:38.383465 kubelet[2782]: E0707 00:17:38.383407 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.383465 kubelet[2782]: W0707 00:17:38.383427 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.384369 kubelet[2782]: E0707 00:17:38.383445 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.385062 kubelet[2782]: E0707 00:17:38.384962 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.385062 kubelet[2782]: W0707 00:17:38.384980 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.385062 kubelet[2782]: E0707 00:17:38.384997 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.385952 kubelet[2782]: E0707 00:17:38.385748 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.385952 kubelet[2782]: W0707 00:17:38.385766 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.385952 kubelet[2782]: E0707 00:17:38.385783 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.387278 kubelet[2782]: E0707 00:17:38.387186 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.387278 kubelet[2782]: W0707 00:17:38.387225 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.387537 kubelet[2782]: E0707 00:17:38.387432 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.387980 kubelet[2782]: E0707 00:17:38.387914 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.388272 kubelet[2782]: W0707 00:17:38.388094 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.388272 kubelet[2782]: E0707 00:17:38.388122 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:38.388737 kubelet[2782]: E0707 00:17:38.388590 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.388737 kubelet[2782]: W0707 00:17:38.388611 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.389198 kubelet[2782]: E0707 00:17:38.388897 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.389615 kubelet[2782]: E0707 00:17:38.389476 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.389615 kubelet[2782]: W0707 00:17:38.389498 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.390017 kubelet[2782]: E0707 00:17:38.389820 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.391512 kubelet[2782]: E0707 00:17:38.391342 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.391512 kubelet[2782]: W0707 00:17:38.391361 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.391512 kubelet[2782]: E0707 00:17:38.391379 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.391908 kubelet[2782]: E0707 00:17:38.391801 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.391908 kubelet[2782]: W0707 00:17:38.391820 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.391908 kubelet[2782]: E0707 00:17:38.391838 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.392522 kubelet[2782]: E0707 00:17:38.392409 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.392522 kubelet[2782]: W0707 00:17:38.392427 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.392522 kubelet[2782]: E0707 00:17:38.392444 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:38.393194 kubelet[2782]: E0707 00:17:38.392986 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.393194 kubelet[2782]: W0707 00:17:38.393033 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.393194 kubelet[2782]: E0707 00:17:38.393053 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.404386 kubelet[2782]: E0707 00:17:38.403600 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.406448 kubelet[2782]: W0707 00:17:38.404533 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.406448 kubelet[2782]: E0707 00:17:38.404569 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.406926 kubelet[2782]: E0707 00:17:38.406875 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.406926 kubelet[2782]: W0707 00:17:38.406894 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.407507 kubelet[2782]: E0707 00:17:38.407110 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.409895 kubelet[2782]: E0707 00:17:38.409851 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.409895 kubelet[2782]: W0707 00:17:38.409871 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.410187 kubelet[2782]: E0707 00:17:38.410052 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.410555 kubelet[2782]: E0707 00:17:38.410516 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.410555 kubelet[2782]: W0707 00:17:38.410535 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.410815 kubelet[2782]: E0707 00:17:38.410773 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:38.411172 kubelet[2782]: E0707 00:17:38.411131 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.411172 kubelet[2782]: W0707 00:17:38.411149 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.411502 kubelet[2782]: E0707 00:17:38.411415 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.414264 kubelet[2782]: E0707 00:17:38.412437 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.414264 kubelet[2782]: W0707 00:17:38.412454 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.415037 kubelet[2782]: E0707 00:17:38.415016 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.415149 kubelet[2782]: W0707 00:17:38.415134 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.415567 kubelet[2782]: E0707 00:17:38.415550 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.415878 kubelet[2782]: W0707 00:17:38.415653 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.415878 kubelet[2782]: E0707 00:17:38.415675 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.416674 kubelet[2782]: E0707 00:17:38.416088 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.416674 kubelet[2782]: E0707 00:17:38.416149 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:38.416674 kubelet[2782]: E0707 00:17:38.416286 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.416674 kubelet[2782]: W0707 00:17:38.416297 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.417297 kubelet[2782]: E0707 00:17:38.417063 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.417297 kubelet[2782]: W0707 00:17:38.417136 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.417297 kubelet[2782]: E0707 00:17:38.417164 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.418937 kubelet[2782]: E0707 00:17:38.418736 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.418937 kubelet[2782]: W0707 00:17:38.418755 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.418937 kubelet[2782]: E0707 00:17:38.418772 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.420464 kubelet[2782]: E0707 00:17:38.420424 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.420735 kubelet[2782]: W0707 00:17:38.420685 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.420986 kubelet[2782]: E0707 00:17:38.420445 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.420986 kubelet[2782]: E0707 00:17:38.420869 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.422438 kubelet[2782]: E0707 00:17:38.422310 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.422438 kubelet[2782]: W0707 00:17:38.422330 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.422438 kubelet[2782]: E0707 00:17:38.422383 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:38.423134 kubelet[2782]: E0707 00:17:38.423091 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.424936 kubelet[2782]: W0707 00:17:38.424908 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.425273 kubelet[2782]: E0707 00:17:38.425049 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.425768 kubelet[2782]: E0707 00:17:38.425693 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.425768 kubelet[2782]: W0707 00:17:38.425711 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.425768 kubelet[2782]: E0707 00:17:38.425732 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.427805 kubelet[2782]: E0707 00:17:38.427762 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.427805 kubelet[2782]: W0707 00:17:38.427787 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.427805 kubelet[2782]: E0707 00:17:38.427806 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.428611 kubelet[2782]: E0707 00:17:38.428529 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.428611 kubelet[2782]: W0707 00:17:38.428550 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.428611 kubelet[2782]: E0707 00:17:38.428568 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:17:38.430508 kubelet[2782]: E0707 00:17:38.430485 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:17:38.430508 kubelet[2782]: W0707 00:17:38.430506 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:17:38.430657 kubelet[2782]: E0707 00:17:38.430525 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:17:38.723334 containerd[1580]: time="2025-07-07T00:17:38.723162690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:38.725166 containerd[1580]: time="2025-07-07T00:17:38.725003232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:17:38.727122 containerd[1580]: time="2025-07-07T00:17:38.727040206Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:38.729818 containerd[1580]: time="2025-07-07T00:17:38.729728207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:38.730828 containerd[1580]: time="2025-07-07T00:17:38.730551603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 886.698706ms" Jul 7 00:17:38.730828 containerd[1580]: time="2025-07-07T00:17:38.730596758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:17:38.733575 containerd[1580]: time="2025-07-07T00:17:38.733502900Z" level=info msg="CreateContainer within sandbox \"422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:17:38.743098 containerd[1580]: time="2025-07-07T00:17:38.743053509Z" level=info msg="Container 2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:38.765501 containerd[1580]: time="2025-07-07T00:17:38.765451416Z" level=info msg="CreateContainer within sandbox \"422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61\"" Jul 7 00:17:38.766169 containerd[1580]: time="2025-07-07T00:17:38.766115373Z" level=info msg="StartContainer for \"2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61\"" Jul 7 00:17:38.768307 containerd[1580]: time="2025-07-07T00:17:38.768267742Z" level=info msg="connecting to shim 2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61" address="unix:///run/containerd/s/ff5c97520314bcde03e0e33e82d03d03cccb417e91a126c1fa4a6f2c91b1544c" protocol=ttrpc version=3 Jul 7 00:17:38.795453 systemd[1]: Started cri-containerd-2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61.scope - libcontainer container 2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61. 
Jul 7 00:17:38.866197 containerd[1580]: time="2025-07-07T00:17:38.866145488Z" level=info msg="StartContainer for \"2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61\" returns successfully" Jul 7 00:17:38.879826 systemd[1]: cri-containerd-2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61.scope: Deactivated successfully. Jul 7 00:17:38.888097 containerd[1580]: time="2025-07-07T00:17:38.887923133Z" level=info msg="received exit event container_id:\"2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61\" id:\"2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61\" pid:3468 exited_at:{seconds:1751847458 nanos:886915523}" Jul 7 00:17:38.888443 containerd[1580]: time="2025-07-07T00:17:38.888072429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61\" id:\"2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61\" pid:3468 exited_at:{seconds:1751847458 nanos:886915523}" Jul 7 00:17:38.923122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2634a2b9a62369fb8b14b1250b3667053435ed8d63957afc0694b4d062be5b61-rootfs.mount: Deactivated successfully. Jul 7 00:17:39.290148 kubelet[2782]: I0707 00:17:39.289576 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:17:39.311296 kubelet[2782]: I0707 00:17:39.311203 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-796df4c6b5-z5bcm" podStartSLOduration=2.984730625 podStartE2EDuration="5.311179694s" podCreationTimestamp="2025-07-07 00:17:34 +0000 UTC" firstStartedPulling="2025-07-07 00:17:35.5168342 +0000 UTC m=+21.619893609" lastFinishedPulling="2025-07-07 00:17:37.843283253 +0000 UTC m=+23.946342678" observedRunningTime="2025-07-07 00:17:38.351581934 +0000 UTC m=+24.454641353" watchObservedRunningTime="2025-07-07 00:17:39.311179694 +0000 UTC m=+25.414239111" Jul 7 00:17:40.145693 kubelet[2782]: E0707 00:17:40.144440 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xf2tr" podUID="a6155bdc-cda1-4bd4-8088-60a9cf521c10" Jul 7 00:17:41.299676 containerd[1580]: time="2025-07-07T00:17:41.299623570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:17:42.145810 kubelet[2782]: E0707 00:17:42.145675 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xf2tr" podUID="a6155bdc-cda1-4bd4-8088-60a9cf521c10" Jul 7 00:17:44.145126 kubelet[2782]: E0707 00:17:44.145049 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xf2tr" podUID="a6155bdc-cda1-4bd4-8088-60a9cf521c10" Jul 7 00:17:44.377471 containerd[1580]: time="2025-07-07T00:17:44.377392556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:44.379158 containerd[1580]: time="2025-07-07T00:17:44.379129560Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:17:44.380789 containerd[1580]: time="2025-07-07T00:17:44.380752562Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:44.385722 containerd[1580]: time="2025-07-07T00:17:44.385629953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:44.386997 containerd[1580]: time="2025-07-07T00:17:44.386963345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.087236278s" Jul 7 00:17:44.387142 containerd[1580]: time="2025-07-07T00:17:44.387119353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:17:44.392321 containerd[1580]: time="2025-07-07T00:17:44.392286192Z" level=info msg="CreateContainer within sandbox \"422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:17:44.413852 containerd[1580]: time="2025-07-07T00:17:44.413672883Z" level=info msg="Container 9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:44.425634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633347074.mount: Deactivated successfully. Jul 7 00:17:44.435589 containerd[1580]: time="2025-07-07T00:17:44.435526139Z" level=info msg="CreateContainer within sandbox \"422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c\"" Jul 7 00:17:44.436440 containerd[1580]: time="2025-07-07T00:17:44.436408556Z" level=info msg="StartContainer for \"9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c\"" Jul 7 00:17:44.438997 containerd[1580]: time="2025-07-07T00:17:44.438957944Z" level=info msg="connecting to shim 9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c" address="unix:///run/containerd/s/ff5c97520314bcde03e0e33e82d03d03cccb417e91a126c1fa4a6f2c91b1544c" protocol=ttrpc version=3 Jul 7 00:17:44.475533 systemd[1]: Started cri-containerd-9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c.scope - libcontainer container 9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c. 
Jul 7 00:17:44.544440 containerd[1580]: time="2025-07-07T00:17:44.544392161Z" level=info msg="StartContainer for \"9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c\" returns successfully" Jul 7 00:17:45.557661 containerd[1580]: time="2025-07-07T00:17:45.557593288Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:17:45.560482 systemd[1]: cri-containerd-9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c.scope: Deactivated successfully. Jul 7 00:17:45.562432 systemd[1]: cri-containerd-9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c.scope: Consumed 655ms CPU time, 191.6M memory peak, 171.2M written to disk. Jul 7 00:17:45.564025 containerd[1580]: time="2025-07-07T00:17:45.563864530Z" level=info msg="received exit event container_id:\"9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c\" id:\"9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c\" pid:3528 exited_at:{seconds:1751847465 nanos:563315453}" Jul 7 00:17:45.564777 containerd[1580]: time="2025-07-07T00:17:45.564713973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c\" id:\"9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c\" pid:3528 exited_at:{seconds:1751847465 nanos:563315453}" Jul 7 00:17:45.588951 kubelet[2782]: I0707 00:17:45.588920 2782 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:17:45.617491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f27e4202ab591e71faa81222f0060a0ebe1e1a2f1b6317403b82853938fcb4c-rootfs.mount: Deactivated successfully. Jul 7 00:17:45.669605 systemd[1]: Created slice kubepods-burstable-pod2e6e6cc9_ae3e_4fd0_b9f4_9a18bdb005fd.slice - libcontainer container kubepods-burstable-pod2e6e6cc9_ae3e_4fd0_b9f4_9a18bdb005fd.slice. 
Jul 7 00:17:45.676295 kubelet[2782]: I0707 00:17:45.676184 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc6r6\" (UniqueName: \"kubernetes.io/projected/e1346988-cc35-4eb9-a474-b85af67f6104-kube-api-access-vc6r6\") pod \"coredns-668d6bf9bc-srsnf\" (UID: \"e1346988-cc35-4eb9-a474-b85af67f6104\") " pod="kube-system/coredns-668d6bf9bc-srsnf" Jul 7 00:17:45.689791 kubelet[2782]: I0707 00:17:45.689704 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1346988-cc35-4eb9-a474-b85af67f6104-config-volume\") pod \"coredns-668d6bf9bc-srsnf\" (UID: \"e1346988-cc35-4eb9-a474-b85af67f6104\") " pod="kube-system/coredns-668d6bf9bc-srsnf" Jul 7 00:17:45.691446 kubelet[2782]: I0707 00:17:45.689772 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd-config-volume\") pod \"coredns-668d6bf9bc-fjkxw\" (UID: \"2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd\") " pod="kube-system/coredns-668d6bf9bc-fjkxw" Jul 7 00:17:45.691700 kubelet[2782]: I0707 00:17:45.691486 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l29w\" (UniqueName: \"kubernetes.io/projected/2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd-kube-api-access-9l29w\") pod \"coredns-668d6bf9bc-fjkxw\" (UID: \"2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd\") " pod="kube-system/coredns-668d6bf9bc-fjkxw" Jul 7 00:17:45.716867 systemd[1]: Created slice kubepods-burstable-pode1346988_cc35_4eb9_a474_b85af67f6104.slice - libcontainer container kubepods-burstable-pode1346988_cc35_4eb9_a474_b85af67f6104.slice. Jul 7 00:17:45.743007 systemd[1]: Created slice kubepods-besteffort-podbed4cce4_6ae5_4b99_ae24_8406be485d96.slice - libcontainer container kubepods-besteffort-podbed4cce4_6ae5_4b99_ae24_8406be485d96.slice. Jul 7 00:17:45.759625 systemd[1]: Created slice kubepods-besteffort-podc378be6c_72b3_4024_8c67_72fb1764e8ac.slice - libcontainer container kubepods-besteffort-podc378be6c_72b3_4024_8c67_72fb1764e8ac.slice. Jul 7 00:17:45.773305 systemd[1]: Created slice kubepods-besteffort-podf4f95e6f_b466_498b_a081_ef0cabb35977.slice - libcontainer container kubepods-besteffort-podf4f95e6f_b466_498b_a081_ef0cabb35977.slice. Jul 7 00:17:45.788184 systemd[1]: Created slice kubepods-besteffort-podcbad238c_2609_4954_946f_f27aba4121fb.slice - libcontainer container kubepods-besteffort-podcbad238c_2609_4954_946f_f27aba4121fb.slice. 
Jul 7 00:17:45.792691 kubelet[2782]: I0707 00:17:45.792634 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffhtr\" (UniqueName: \"kubernetes.io/projected/cbad238c-2609-4954-946f-f27aba4121fb-kube-api-access-ffhtr\") pod \"whisker-7f5965bf9-q9fmf\" (UID: \"cbad238c-2609-4954-946f-f27aba4121fb\") " pod="calico-system/whisker-7f5965bf9-q9fmf" Jul 7 00:17:45.792691 kubelet[2782]: I0707 00:17:45.792712 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbad238c-2609-4954-946f-f27aba4121fb-whisker-backend-key-pair\") pod \"whisker-7f5965bf9-q9fmf\" (UID: \"cbad238c-2609-4954-946f-f27aba4121fb\") " pod="calico-system/whisker-7f5965bf9-q9fmf" Jul 7 00:17:45.792691 kubelet[2782]: I0707 00:17:45.792763 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zv55\" (UniqueName: \"kubernetes.io/projected/c378be6c-72b3-4024-8c67-72fb1764e8ac-kube-api-access-9zv55\") pod \"calico-apiserver-6f45f85b7-gq49t\" (UID: \"c378be6c-72b3-4024-8c67-72fb1764e8ac\") " pod="calico-apiserver/calico-apiserver-6f45f85b7-gq49t" Jul 7 00:17:45.792691 kubelet[2782]: I0707 00:17:45.792795 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbad238c-2609-4954-946f-f27aba4121fb-whisker-ca-bundle\") pod \"whisker-7f5965bf9-q9fmf\" (UID: \"cbad238c-2609-4954-946f-f27aba4121fb\") " pod="calico-system/whisker-7f5965bf9-q9fmf" Jul 7 00:17:45.792691 kubelet[2782]: I0707 00:17:45.792847 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc888973-7a8d-44bc-afa1-d07672f1bdb4-calico-apiserver-certs\") pod \"calico-apiserver-6f45f85b7-4c6tq\" (UID: \"fc888973-7a8d-44bc-afa1-d07672f1bdb4\") " pod="calico-apiserver/calico-apiserver-6f45f85b7-4c6tq" Jul 7 00:17:45.793941 kubelet[2782]: I0707 00:17:45.792878 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c378be6c-72b3-4024-8c67-72fb1764e8ac-calico-apiserver-certs\") pod \"calico-apiserver-6f45f85b7-gq49t\" (UID: \"c378be6c-72b3-4024-8c67-72fb1764e8ac\") " pod="calico-apiserver/calico-apiserver-6f45f85b7-gq49t" Jul 7 00:17:45.793941 kubelet[2782]: I0707 00:17:45.792929 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf5kf\" (UniqueName: \"kubernetes.io/projected/bed4cce4-6ae5-4b99-ae24-8406be485d96-kube-api-access-xf5kf\") pod \"goldmane-768f4c5c69-q26kq\" (UID: \"bed4cce4-6ae5-4b99-ae24-8406be485d96\") " pod="calico-system/goldmane-768f4c5c69-q26kq" Jul 7 00:17:45.793941 kubelet[2782]: I0707 00:17:45.792958 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4f95e6f-b466-498b-a081-ef0cabb35977-tigera-ca-bundle\") pod \"calico-kube-controllers-57d59d975-pdszg\" (UID: \"f4f95e6f-b466-498b-a081-ef0cabb35977\") " pod="calico-system/calico-kube-controllers-57d59d975-pdszg" Jul 7 00:17:45.793941 kubelet[2782]: I0707 00:17:45.793038 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bed4cce4-6ae5-4b99-ae24-8406be485d96-goldmane-key-pair\") pod \"goldmane-768f4c5c69-q26kq\" (UID: \"bed4cce4-6ae5-4b99-ae24-8406be485d96\") " pod="calico-system/goldmane-768f4c5c69-q26kq" Jul 7 00:17:45.793941 kubelet[2782]: I0707 00:17:45.793090 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg2tq\" (UniqueName: \"kubernetes.io/projected/f4f95e6f-b466-498b-a081-ef0cabb35977-kube-api-access-pg2tq\") pod \"calico-kube-controllers-57d59d975-pdszg\" (UID: \"f4f95e6f-b466-498b-a081-ef0cabb35977\") " pod="calico-system/calico-kube-controllers-57d59d975-pdszg" Jul 7 00:17:45.794194 kubelet[2782]: I0707 00:17:45.793137 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rqcc\" (UniqueName: \"kubernetes.io/projected/fc888973-7a8d-44bc-afa1-d07672f1bdb4-kube-api-access-8rqcc\") pod \"calico-apiserver-6f45f85b7-4c6tq\" (UID: \"fc888973-7a8d-44bc-afa1-d07672f1bdb4\") " pod="calico-apiserver/calico-apiserver-6f45f85b7-4c6tq" Jul 7 00:17:45.794194 kubelet[2782]: I0707 00:17:45.793190 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed4cce4-6ae5-4b99-ae24-8406be485d96-config\") pod \"goldmane-768f4c5c69-q26kq\" (UID: \"bed4cce4-6ae5-4b99-ae24-8406be485d96\") " pod="calico-system/goldmane-768f4c5c69-q26kq" Jul 7 00:17:45.794194 kubelet[2782]: I0707 00:17:45.793217 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed4cce4-6ae5-4b99-ae24-8406be485d96-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-q26kq\" (UID: \"bed4cce4-6ae5-4b99-ae24-8406be485d96\") " pod="calico-system/goldmane-768f4c5c69-q26kq" Jul 7 00:17:45.802158 systemd[1]: Created slice kubepods-besteffort-podfc888973_7a8d_44bc_afa1_d07672f1bdb4.slice - libcontainer container kubepods-besteffort-podfc888973_7a8d_44bc_afa1_d07672f1bdb4.slice. 
Jul 7 00:17:45.999892 containerd[1580]: time="2025-07-07T00:17:45.999698908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjkxw,Uid:2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd,Namespace:kube-system,Attempt:0,}" Jul 7 00:17:46.032272 containerd[1580]: time="2025-07-07T00:17:46.032194885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srsnf,Uid:e1346988-cc35-4eb9-a474-b85af67f6104,Namespace:kube-system,Attempt:0,}" Jul 7 00:17:46.054982 containerd[1580]: time="2025-07-07T00:17:46.054890557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q26kq,Uid:bed4cce4-6ae5-4b99-ae24-8406be485d96,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:46.066180 containerd[1580]: time="2025-07-07T00:17:46.066109144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f45f85b7-gq49t,Uid:c378be6c-72b3-4024-8c67-72fb1764e8ac,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:17:46.079192 containerd[1580]: time="2025-07-07T00:17:46.079129631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d59d975-pdszg,Uid:f4f95e6f-b466-498b-a081-ef0cabb35977,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:46.100785 containerd[1580]: time="2025-07-07T00:17:46.100737515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5965bf9-q9fmf,Uid:cbad238c-2609-4954-946f-f27aba4121fb,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:46.109731 containerd[1580]: time="2025-07-07T00:17:46.109685879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f45f85b7-4c6tq,Uid:fc888973-7a8d-44bc-afa1-d07672f1bdb4,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:17:46.157346 systemd[1]: Created slice kubepods-besteffort-poda6155bdc_cda1_4bd4_8088_60a9cf521c10.slice - libcontainer container kubepods-besteffort-poda6155bdc_cda1_4bd4_8088_60a9cf521c10.slice. 
Jul 7 00:17:46.162274 containerd[1580]: time="2025-07-07T00:17:46.162209081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xf2tr,Uid:a6155bdc-cda1-4bd4-8088-60a9cf521c10,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:46.343865 containerd[1580]: time="2025-07-07T00:17:46.343821825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:17:46.622338 containerd[1580]: time="2025-07-07T00:17:46.622141028Z" level=error msg="Failed to destroy network for sandbox \"afe0b7e964ae7abdcd08b1fc1c83de3d73b902071143a3a1c3f7f6a4fca0ee28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.631086 containerd[1580]: time="2025-07-07T00:17:46.630806071Z" level=error msg="Failed to destroy network for sandbox \"42d53f0dfdd77c29f0448d09e84a2c847754e69920128d3d2dbdbed3ecc28686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.639363 containerd[1580]: time="2025-07-07T00:17:46.633066746Z" level=error msg="Failed to destroy network for sandbox \"5c087cb81ac3b6cb01d478fc439db83d5a7a639a35dd89b1c32a892c905849df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.639706 containerd[1580]: time="2025-07-07T00:17:46.633850072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q26kq,Uid:bed4cce4-6ae5-4b99-ae24-8406be485d96,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe0b7e964ae7abdcd08b1fc1c83de3d73b902071143a3a1c3f7f6a4fca0ee28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.640542 kubelet[2782]: E0707 00:17:46.640388 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe0b7e964ae7abdcd08b1fc1c83de3d73b902071143a3a1c3f7f6a4fca0ee28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.641896 kubelet[2782]: E0707 00:17:46.641182 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe0b7e964ae7abdcd08b1fc1c83de3d73b902071143a3a1c3f7f6a4fca0ee28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-q26kq" Jul 7 00:17:46.641896 kubelet[2782]: E0707 00:17:46.641256 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe0b7e964ae7abdcd08b1fc1c83de3d73b902071143a3a1c3f7f6a4fca0ee28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-768f4c5c69-q26kq" Jul 7 00:17:46.641896 kubelet[2782]: E0707 00:17:46.641382 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-q26kq_calico-system(bed4cce4-6ae5-4b99-ae24-8406be485d96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-q26kq_calico-system(bed4cce4-6ae5-4b99-ae24-8406be485d96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afe0b7e964ae7abdcd08b1fc1c83de3d73b902071143a3a1c3f7f6a4fca0ee28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-q26kq" podUID="bed4cce4-6ae5-4b99-ae24-8406be485d96" Jul 7 00:17:46.642203 containerd[1580]: time="2025-07-07T00:17:46.641534493Z" level=error msg="Failed to destroy network for sandbox \"3273a295106004f32a5e50052d48b525f6321a1dc747f72ff077f4b1c21b6662\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.653347 containerd[1580]: time="2025-07-07T00:17:46.653206711Z" level=error msg="Failed to destroy network for sandbox \"6390d70df2750905cc0b0ab972b01eb8b360f2a6579acfdcf8afea2d930e92af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.653921 containerd[1580]: time="2025-07-07T00:17:46.653731801Z" level=error msg="Failed to destroy network for sandbox \"1c480920237b3b294f6bf6209e341eb1b43916731522429b9ac70e38fa4891f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.660358 containerd[1580]: time="2025-07-07T00:17:46.660299320Z" level=error msg="Failed to destroy network for sandbox \"6ac031daccd4b2bc72271faa4461ebb02fb747add7514653f420304f4dfe787b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.664945 containerd[1580]: time="2025-07-07T00:17:46.664645156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjkxw,Uid:2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d53f0dfdd77c29f0448d09e84a2c847754e69920128d3d2dbdbed3ecc28686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.669009 kubelet[2782]: E0707 00:17:46.666838 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d53f0dfdd77c29f0448d09e84a2c847754e69920128d3d2dbdbed3ecc28686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.669009 kubelet[2782]: E0707 00:17:46.666911 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d53f0dfdd77c29f0448d09e84a2c847754e69920128d3d2dbdbed3ecc28686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fjkxw" Jul 7 00:17:46.669009 kubelet[2782]: E0707 00:17:46.666942 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d53f0dfdd77c29f0448d09e84a2c847754e69920128d3d2dbdbed3ecc28686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fjkxw" Jul 7 00:17:46.669408 kubelet[2782]: E0707 00:17:46.667002 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fjkxw_kube-system(2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fjkxw_kube-system(2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42d53f0dfdd77c29f0448d09e84a2c847754e69920128d3d2dbdbed3ecc28686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fjkxw" podUID="2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd" Jul 7 00:17:46.671767 containerd[1580]: time="2025-07-07T00:17:46.671588939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d59d975-pdszg,Uid:f4f95e6f-b466-498b-a081-ef0cabb35977,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c087cb81ac3b6cb01d478fc439db83d5a7a639a35dd89b1c32a892c905849df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.673593 containerd[1580]: time="2025-07-07T00:17:46.673365176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xf2tr,Uid:a6155bdc-cda1-4bd4-8088-60a9cf521c10,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3273a295106004f32a5e50052d48b525f6321a1dc747f72ff077f4b1c21b6662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.673730 kubelet[2782]: E0707 00:17:46.673384 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c087cb81ac3b6cb01d478fc439db83d5a7a639a35dd89b1c32a892c905849df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.676273 kubelet[2782]: E0707 00:17:46.673573 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c087cb81ac3b6cb01d478fc439db83d5a7a639a35dd89b1c32a892c905849df\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57d59d975-pdszg" Jul 7 00:17:46.676273 kubelet[2782]: E0707 00:17:46.673959 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c087cb81ac3b6cb01d478fc439db83d5a7a639a35dd89b1c32a892c905849df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57d59d975-pdszg" Jul 7 00:17:46.677164 kubelet[2782]: E0707 00:17:46.674888 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57d59d975-pdszg_calico-system(f4f95e6f-b466-498b-a081-ef0cabb35977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57d59d975-pdszg_calico-system(f4f95e6f-b466-498b-a081-ef0cabb35977)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c087cb81ac3b6cb01d478fc439db83d5a7a639a35dd89b1c32a892c905849df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57d59d975-pdszg" podUID="f4f95e6f-b466-498b-a081-ef0cabb35977" Jul 7 00:17:46.678129 kubelet[2782]: E0707 00:17:46.677853 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3273a295106004f32a5e50052d48b525f6321a1dc747f72ff077f4b1c21b6662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.679166 kubelet[2782]: E0707 00:17:46.677908 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3273a295106004f32a5e50052d48b525f6321a1dc747f72ff077f4b1c21b6662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xf2tr" Jul 7 00:17:46.683278 containerd[1580]: time="2025-07-07T00:17:46.680672301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5965bf9-q9fmf,Uid:cbad238c-2609-4954-946f-f27aba4121fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c480920237b3b294f6bf6209e341eb1b43916731522429b9ac70e38fa4891f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.683278 containerd[1580]: time="2025-07-07T00:17:46.678976352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srsnf,Uid:e1346988-cc35-4eb9-a474-b85af67f6104,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6390d70df2750905cc0b0ab972b01eb8b360f2a6579acfdcf8afea2d930e92af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.683509 kubelet[2782]: E0707 00:17:46.679115 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3273a295106004f32a5e50052d48b525f6321a1dc747f72ff077f4b1c21b6662\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xf2tr" Jul 7 00:17:46.683800 kubelet[2782]: E0707 00:17:46.683649 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xf2tr_calico-system(a6155bdc-cda1-4bd4-8088-60a9cf521c10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xf2tr_calico-system(a6155bdc-cda1-4bd4-8088-60a9cf521c10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3273a295106004f32a5e50052d48b525f6321a1dc747f72ff077f4b1c21b6662\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xf2tr" podUID="a6155bdc-cda1-4bd4-8088-60a9cf521c10" Jul 7 00:17:46.683800 kubelet[2782]: E0707 00:17:46.681940 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c480920237b3b294f6bf6209e341eb1b43916731522429b9ac70e38fa4891f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.683800 kubelet[2782]: E0707 00:17:46.683759 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c480920237b3b294f6bf6209e341eb1b43916731522429b9ac70e38fa4891f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5965bf9-q9fmf" Jul 7 00:17:46.686567 kubelet[2782]: E0707 00:17:46.684082 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c480920237b3b294f6bf6209e341eb1b43916731522429b9ac70e38fa4891f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5965bf9-q9fmf" Jul 7 00:17:46.686567 kubelet[2782]: E0707 00:17:46.684643 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ac031daccd4b2bc72271faa4461ebb02fb747add7514653f420304f4dfe787b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.686567 kubelet[2782]: E0707 00:17:46.684690 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ac031daccd4b2bc72271faa4461ebb02fb747add7514653f420304f4dfe787b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f45f85b7-gq49t" Jul 7 00:17:46.686567 kubelet[2782]: E0707 00:17:46.684718 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ac031daccd4b2bc72271faa4461ebb02fb747add7514653f420304f4dfe787b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f45f85b7-gq49t" Jul 7 00:17:46.686848 containerd[1580]: time="2025-07-07T00:17:46.684403292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f45f85b7-gq49t,Uid:c378be6c-72b3-4024-8c67-72fb1764e8ac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ac031daccd4b2bc72271faa4461ebb02fb747add7514653f420304f4dfe787b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.686947 kubelet[2782]: E0707 00:17:46.684769 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f45f85b7-gq49t_calico-apiserver(c378be6c-72b3-4024-8c67-72fb1764e8ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f45f85b7-gq49t_calico-apiserver(c378be6c-72b3-4024-8c67-72fb1764e8ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ac031daccd4b2bc72271faa4461ebb02fb747add7514653f420304f4dfe787b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f45f85b7-gq49t" podUID="c378be6c-72b3-4024-8c67-72fb1764e8ac" Jul 7 00:17:46.686947 kubelet[2782]: E0707 00:17:46.686494 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f5965bf9-q9fmf_calico-system(cbad238c-2609-4954-946f-f27aba4121fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f5965bf9-q9fmf_calico-system(cbad238c-2609-4954-946f-f27aba4121fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c480920237b3b294f6bf6209e341eb1b43916731522429b9ac70e38fa4891f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f5965bf9-q9fmf" podUID="cbad238c-2609-4954-946f-f27aba4121fb" Jul 7 00:17:46.690299 kubelet[2782]: E0707 00:17:46.687376 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6390d70df2750905cc0b0ab972b01eb8b360f2a6579acfdcf8afea2d930e92af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.690299 kubelet[2782]: E0707 00:17:46.687433 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6390d70df2750905cc0b0ab972b01eb8b360f2a6579acfdcf8afea2d930e92af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-srsnf" Jul 7 00:17:46.690299 kubelet[2782]: E0707 00:17:46.687458 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6390d70df2750905cc0b0ab972b01eb8b360f2a6579acfdcf8afea2d930e92af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-srsnf" Jul 7 00:17:46.688972 systemd[1]: run-netns-cni\x2da186ab5d\x2da7a1\x2dfe39\x2d6617\x2d17ed36b6e9e3.mount: Deactivated successfully. Jul 7 00:17:46.690835 kubelet[2782]: E0707 00:17:46.687509 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-srsnf_kube-system(e1346988-cc35-4eb9-a474-b85af67f6104)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-srsnf_kube-system(e1346988-cc35-4eb9-a474-b85af67f6104)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6390d70df2750905cc0b0ab972b01eb8b360f2a6579acfdcf8afea2d930e92af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-srsnf" podUID="e1346988-cc35-4eb9-a474-b85af67f6104" Jul 7 00:17:46.689149 systemd[1]: run-netns-cni\x2d47d677d7\x2d28d1\x2dc312\x2ddf9b\x2de532c64349d8.mount: Deactivated successfully. Jul 7 00:17:46.689297 systemd[1]: run-netns-cni\x2dac59242d\x2d1c02\x2d9b69\x2d89df\x2d7bac6f860f4d.mount: Deactivated successfully. Jul 7 00:17:46.689416 systemd[1]: run-netns-cni\x2d452707a7\x2db992\x2d8829\x2db378\x2dda418429fb25.mount: Deactivated successfully. Jul 7 00:17:46.689518 systemd[1]: run-netns-cni\x2dd17ebc7b\x2d26d8\x2d1685\x2daf49\x2d8e3bec86d525.mount: Deactivated successfully. Jul 7 00:17:46.689619 systemd[1]: run-netns-cni\x2dba214aef\x2ddbbe\x2dd4b0\x2ded61\x2d1e610b6f6c7f.mount: Deactivated successfully. Jul 7 00:17:46.696491 containerd[1580]: time="2025-07-07T00:17:46.691439486Z" level=error msg="Failed to destroy network for sandbox \"e658b6b540898064388521fd4ce9fc57082b71e0d06c6a09f95f8df3adfc7e8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.689711 systemd[1]: run-netns-cni\x2d60cff27e\x2d80a0\x2da29f\x2d5009\x2dcdf01a5f9285.mount: Deactivated successfully. 
Jul 7 00:17:46.698694 containerd[1580]: time="2025-07-07T00:17:46.698561190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f45f85b7-4c6tq,Uid:fc888973-7a8d-44bc-afa1-d07672f1bdb4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e658b6b540898064388521fd4ce9fc57082b71e0d06c6a09f95f8df3adfc7e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.699723 kubelet[2782]: E0707 00:17:46.698906 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e658b6b540898064388521fd4ce9fc57082b71e0d06c6a09f95f8df3adfc7e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:17:46.699723 kubelet[2782]: E0707 00:17:46.699294 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e658b6b540898064388521fd4ce9fc57082b71e0d06c6a09f95f8df3adfc7e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f45f85b7-4c6tq" Jul 7 00:17:46.699723 kubelet[2782]: E0707 00:17:46.699644 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e658b6b540898064388521fd4ce9fc57082b71e0d06c6a09f95f8df3adfc7e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f45f85b7-4c6tq" Jul 7 00:17:46.700660 kubelet[2782]: E0707 00:17:46.699933 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f45f85b7-4c6tq_calico-apiserver(fc888973-7a8d-44bc-afa1-d07672f1bdb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f45f85b7-4c6tq_calico-apiserver(fc888973-7a8d-44bc-afa1-d07672f1bdb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e658b6b540898064388521fd4ce9fc57082b71e0d06c6a09f95f8df3adfc7e8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f45f85b7-4c6tq" podUID="fc888973-7a8d-44bc-afa1-d07672f1bdb4" Jul 7 00:17:46.706087 systemd[1]: run-netns-cni\x2d003edf84\x2d9965\x2d6cfc\x2d704e\x2d1fa8221c10e7.mount: Deactivated successfully. Jul 7 00:17:52.959561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233990314.mount: Deactivated successfully. 
Jul 7 00:17:52.988536 containerd[1580]: time="2025-07-07T00:17:52.988448520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:52.989837 containerd[1580]: time="2025-07-07T00:17:52.989778116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:17:52.991415 containerd[1580]: time="2025-07-07T00:17:52.991339834Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:52.994374 containerd[1580]: time="2025-07-07T00:17:52.994305495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:52.995062 containerd[1580]: time="2025-07-07T00:17:52.995018158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.650927113s" Jul 7 00:17:52.995341 containerd[1580]: time="2025-07-07T00:17:52.995196516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:17:53.020752 containerd[1580]: time="2025-07-07T00:17:53.020696696Z" level=info msg="CreateContainer within sandbox \"422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:17:53.032459 containerd[1580]: time="2025-07-07T00:17:53.032400972Z" level=info msg="Container 15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:53.050719 containerd[1580]: time="2025-07-07T00:17:53.050651080Z" level=info msg="CreateContainer within sandbox \"422cc6b0a06083f40dbeb4637bd86e7f3a221ac2ff2ab5c1ed6bba6cd6edb6c0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747\"" Jul 7 00:17:53.051514 containerd[1580]: time="2025-07-07T00:17:53.051434782Z" level=info msg="StartContainer for \"15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747\"" Jul 7 00:17:53.053741 containerd[1580]: time="2025-07-07T00:17:53.053700827Z" level=info msg="connecting to shim 15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747" address="unix:///run/containerd/s/ff5c97520314bcde03e0e33e82d03d03cccb417e91a126c1fa4a6f2c91b1544c" protocol=ttrpc version=3 Jul 7 00:17:53.089458 systemd[1]: Started cri-containerd-15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747.scope - libcontainer container 15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747. Jul 7 00:17:53.153521 containerd[1580]: time="2025-07-07T00:17:53.153420034Z" level=info msg="StartContainer for \"15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747\" returns successfully" Jul 7 00:17:53.283400 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:17:53.283594 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 7 00:17:53.396043 kubelet[2782]: I0707 00:17:53.395776 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vp26h" podStartSLOduration=1.15883182 podStartE2EDuration="18.395727369s" podCreationTimestamp="2025-07-07 00:17:35 +0000 UTC" firstStartedPulling="2025-07-07 00:17:35.759386148 +0000 UTC m=+21.862445545" lastFinishedPulling="2025-07-07 00:17:52.996281686 +0000 UTC m=+39.099341094" observedRunningTime="2025-07-07 00:17:53.390817742 +0000 UTC m=+39.493877159" watchObservedRunningTime="2025-07-07 00:17:53.395727369 +0000 UTC m=+39.498786786" Jul 7 00:17:53.559748 kubelet[2782]: I0707 00:17:53.559692 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbad238c-2609-4954-946f-f27aba4121fb-whisker-ca-bundle\") pod \"cbad238c-2609-4954-946f-f27aba4121fb\" (UID: \"cbad238c-2609-4954-946f-f27aba4121fb\") " Jul 7 00:17:53.559920 kubelet[2782]: I0707 00:17:53.559772 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffhtr\" (UniqueName: \"kubernetes.io/projected/cbad238c-2609-4954-946f-f27aba4121fb-kube-api-access-ffhtr\") pod \"cbad238c-2609-4954-946f-f27aba4121fb\" (UID: \"cbad238c-2609-4954-946f-f27aba4121fb\") " Jul 7 00:17:53.559920 kubelet[2782]: I0707 00:17:53.559802 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbad238c-2609-4954-946f-f27aba4121fb-whisker-backend-key-pair\") pod \"cbad238c-2609-4954-946f-f27aba4121fb\" (UID: \"cbad238c-2609-4954-946f-f27aba4121fb\") " Jul 7 00:17:53.563278 kubelet[2782]: I0707 00:17:53.560859 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbad238c-2609-4954-946f-f27aba4121fb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cbad238c-2609-4954-946f-f27aba4121fb" (UID: "cbad238c-2609-4954-946f-f27aba4121fb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:17:53.566087 kubelet[2782]: I0707 00:17:53.566034 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbad238c-2609-4954-946f-f27aba4121fb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cbad238c-2609-4954-946f-f27aba4121fb" (UID: "cbad238c-2609-4954-946f-f27aba4121fb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:17:53.569763 kubelet[2782]: I0707 00:17:53.569708 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbad238c-2609-4954-946f-f27aba4121fb-kube-api-access-ffhtr" (OuterVolumeSpecName: "kube-api-access-ffhtr") pod "cbad238c-2609-4954-946f-f27aba4121fb" (UID: "cbad238c-2609-4954-946f-f27aba4121fb"). InnerVolumeSpecName "kube-api-access-ffhtr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:17:53.591079 containerd[1580]: time="2025-07-07T00:17:53.591015127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747\" id:\"d57ea8c49c5a1d7d427e9b0b2f719bf0a3a2a59373a488bc343765d5ee466c4c\" pid:3850 exit_status:1 exited_at:{seconds:1751847473 nanos:590338771}" Jul 7 00:17:53.661691 kubelet[2782]: I0707 00:17:53.661507 2782 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbad238c-2609-4954-946f-f27aba4121fb-whisker-ca-bundle\") on node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:17:53.661691 kubelet[2782]: I0707 00:17:53.661555 2782 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ffhtr\" (UniqueName: \"kubernetes.io/projected/cbad238c-2609-4954-946f-f27aba4121fb-kube-api-access-ffhtr\") on node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:17:53.661691 kubelet[2782]: I0707 00:17:53.661574 2782 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbad238c-2609-4954-946f-f27aba4121fb-whisker-backend-key-pair\") on node \"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal\" DevicePath \"\"" Jul 7 00:17:53.956008 systemd[1]: var-lib-kubelet-pods-cbad238c\x2d2609\x2d4954\x2d946f\x2df27aba4121fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dffhtr.mount: Deactivated successfully. Jul 7 00:17:53.956190 systemd[1]: var-lib-kubelet-pods-cbad238c\x2d2609\x2d4954\x2d946f\x2df27aba4121fb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:17:54.153146 systemd[1]: Removed slice kubepods-besteffort-podcbad238c_2609_4954_946f_f27aba4121fb.slice - libcontainer container kubepods-besteffort-podcbad238c_2609_4954_946f_f27aba4121fb.slice. Jul 7 00:17:54.491017 systemd[1]: Created slice kubepods-besteffort-pod5ae9841c_fb18_48b9_b12b_fd709644729f.slice - libcontainer container kubepods-besteffort-pod5ae9841c_fb18_48b9_b12b_fd709644729f.slice. 
Jul 7 00:17:54.570806 kubelet[2782]: I0707 00:17:54.570722 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ae9841c-fb18-48b9-b12b-fd709644729f-whisker-backend-key-pair\") pod \"whisker-8446cb4b77-9rx5n\" (UID: \"5ae9841c-fb18-48b9-b12b-fd709644729f\") " pod="calico-system/whisker-8446cb4b77-9rx5n" Jul 7 00:17:54.570806 kubelet[2782]: I0707 00:17:54.570814 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnjn\" (UniqueName: \"kubernetes.io/projected/5ae9841c-fb18-48b9-b12b-fd709644729f-kube-api-access-hlnjn\") pod \"whisker-8446cb4b77-9rx5n\" (UID: \"5ae9841c-fb18-48b9-b12b-fd709644729f\") " pod="calico-system/whisker-8446cb4b77-9rx5n" Jul 7 00:17:54.571756 kubelet[2782]: I0707 00:17:54.570852 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae9841c-fb18-48b9-b12b-fd709644729f-whisker-ca-bundle\") pod \"whisker-8446cb4b77-9rx5n\" (UID: \"5ae9841c-fb18-48b9-b12b-fd709644729f\") " pod="calico-system/whisker-8446cb4b77-9rx5n" Jul 7 00:17:54.589864 containerd[1580]: time="2025-07-07T00:17:54.589777824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747\" id:\"ab177e9963bee0ca55e3d15ce656ffca1ec7435b715dcdd5757013338e2d84ca\" pid:3895 exit_status:1 exited_at:{seconds:1751847474 nanos:589091472}" Jul 7 00:17:54.801268 containerd[1580]: time="2025-07-07T00:17:54.801143379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8446cb4b77-9rx5n,Uid:5ae9841c-fb18-48b9-b12b-fd709644729f,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:55.058703 systemd-networkd[1483]: calicc8e4b8a0e7: Link UP Jul 7 00:17:55.060599 systemd-networkd[1483]: calicc8e4b8a0e7: Gained carrier Jul 7 00:17:55.092579 containerd[1580]: 2025-07-07 00:17:54.881 [INFO][3921] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:17:55.092579 containerd[1580]: 2025-07-07 00:17:54.907 [INFO][3921] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0 whisker-8446cb4b77- calico-system 5ae9841c-fb18-48b9-b12b-fd709644729f 906 0 2025-07-07 00:17:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8446cb4b77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal whisker-8446cb4b77-9rx5n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicc8e4b8a0e7 [] [] }} ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Namespace="calico-system" Pod="whisker-8446cb4b77-9rx5n" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-" Jul 7 00:17:55.092579 containerd[1580]: 2025-07-07 00:17:54.907 [INFO][3921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Namespace="calico-system" Pod="whisker-8446cb4b77-9rx5n" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" Jul 7 
00:17:55.092579 containerd[1580]: 2025-07-07 00:17:54.965 [INFO][3965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" HandleID="k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" Jul 7 00:17:55.092963 containerd[1580]: 2025-07-07 00:17:54.965 [INFO][3965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" HandleID="k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", "pod":"whisker-8446cb4b77-9rx5n", "timestamp":"2025-07-07 00:17:54.965328095 +0000 UTC"}, Hostname:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:17:55.092963 containerd[1580]: 2025-07-07 00:17:54.966 [INFO][3965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:17:55.092963 containerd[1580]: 2025-07-07 00:17:54.966 [INFO][3965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:17:55.092963 containerd[1580]: 2025-07-07 00:17:54.966 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal' Jul 7 00:17:55.092963 containerd[1580]: 2025-07-07 00:17:54.977 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.092963 containerd[1580]: 2025-07-07 00:17:54.987 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.092963 containerd[1580]: 2025-07-07 00:17:54.996 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.092963 containerd[1580]: 2025-07-07 00:17:54.998 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.094933 containerd[1580]: 2025-07-07 00:17:55.003 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.094933 containerd[1580]: 2025-07-07 00:17:55.003 [INFO][3965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.094933 containerd[1580]: 2025-07-07 00:17:55.007 [INFO][3965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5 Jul 7 00:17:55.094933 containerd[1580]: 
2025-07-07 00:17:55.017 [INFO][3965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.094933 containerd[1580]: 2025-07-07 00:17:55.031 [INFO][3965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.193/26] block=192.168.119.192/26 handle="k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.094933 containerd[1580]: 2025-07-07 00:17:55.031 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.193/26] handle="k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:55.094933 containerd[1580]: 2025-07-07 00:17:55.031 [INFO][3965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:17:55.094933 containerd[1580]: 2025-07-07 00:17:55.031 [INFO][3965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.193/26] IPv6=[] ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" HandleID="k8s-pod-network.9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" Jul 7 00:17:55.096447 containerd[1580]: 2025-07-07 00:17:55.042 [INFO][3921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Namespace="calico-system" Pod="whisker-8446cb4b77-9rx5n" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0", GenerateName:"whisker-8446cb4b77-", Namespace:"calico-system", SelfLink:"", UID:"5ae9841c-fb18-48b9-b12b-fd709644729f", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8446cb4b77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-8446cb4b77-9rx5n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicc8e4b8a0e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:17:55.096592 containerd[1580]: 2025-07-07 00:17:55.043 [INFO][3921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.193/32] 
ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Namespace="calico-system" Pod="whisker-8446cb4b77-9rx5n" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" Jul 7 00:17:55.096592 containerd[1580]: 2025-07-07 00:17:55.043 [INFO][3921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc8e4b8a0e7 ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Namespace="calico-system" Pod="whisker-8446cb4b77-9rx5n" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" Jul 7 00:17:55.096592 containerd[1580]: 2025-07-07 00:17:55.063 [INFO][3921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Namespace="calico-system" Pod="whisker-8446cb4b77-9rx5n" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" Jul 7 00:17:55.096735 containerd[1580]: 2025-07-07 00:17:55.065 [INFO][3921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Namespace="calico-system" Pod="whisker-8446cb4b77-9rx5n" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0", GenerateName:"whisker-8446cb4b77-", Namespace:"calico-system", SelfLink:"", UID:"5ae9841c-fb18-48b9-b12b-fd709644729f", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8446cb4b77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5", Pod:"whisker-8446cb4b77-9rx5n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicc8e4b8a0e7", MAC:"da:84:b1:81:3d:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:17:55.096855 containerd[1580]: 2025-07-07 00:17:55.089 [INFO][3921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" Namespace="calico-system" Pod="whisker-8446cb4b77-9rx5n" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-whisker--8446cb4b77--9rx5n-eth0" Jul 7 00:17:55.146497 containerd[1580]: time="2025-07-07T00:17:55.145565040Z" level=info msg="connecting to 
shim 9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5" address="unix:///run/containerd/s/b8b61484694e79b9b751e957ead5f1068f750cfe46c2f60d6dc78fae52e75fb6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:55.211278 systemd[1]: Started cri-containerd-9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5.scope - libcontainer container 9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5. Jul 7 00:17:55.376184 containerd[1580]: time="2025-07-07T00:17:55.374940212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8446cb4b77-9rx5n,Uid:5ae9841c-fb18-48b9-b12b-fd709644729f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5\"" Jul 7 00:17:55.380956 containerd[1580]: time="2025-07-07T00:17:55.380923695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:17:56.148133 kubelet[2782]: I0707 00:17:56.148064 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbad238c-2609-4954-946f-f27aba4121fb" path="/var/lib/kubelet/pods/cbad238c-2609-4954-946f-f27aba4121fb/volumes" Jul 7 00:17:56.297661 containerd[1580]: time="2025-07-07T00:17:56.297578268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:56.298983 containerd[1580]: time="2025-07-07T00:17:56.298921717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:17:56.300522 containerd[1580]: time="2025-07-07T00:17:56.300446140Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:56.303741 containerd[1580]: time="2025-07-07T00:17:56.303664719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:56.304818 containerd[1580]: time="2025-07-07T00:17:56.304626052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 923.437339ms" Jul 7 00:17:56.304818 containerd[1580]: time="2025-07-07T00:17:56.304682731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:17:56.308326 containerd[1580]: time="2025-07-07T00:17:56.308268840Z" level=info msg="CreateContainer within sandbox \"9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:17:56.320284 containerd[1580]: time="2025-07-07T00:17:56.318786691Z" level=info msg="Container 00d7b449d1e3c1407c2918673dbf0a3f27f862c1f4a1d6c2eee239001144388c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:56.334076 containerd[1580]: time="2025-07-07T00:17:56.333999770Z" level=info msg="CreateContainer within sandbox \"9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"00d7b449d1e3c1407c2918673dbf0a3f27f862c1f4a1d6c2eee239001144388c\"" Jul 7 00:17:56.334753 containerd[1580]: time="2025-07-07T00:17:56.334693117Z" level=info msg="StartContainer for \"00d7b449d1e3c1407c2918673dbf0a3f27f862c1f4a1d6c2eee239001144388c\"" Jul 7 00:17:56.336902 containerd[1580]: time="2025-07-07T00:17:56.336838050Z" level=info msg="connecting to shim 00d7b449d1e3c1407c2918673dbf0a3f27f862c1f4a1d6c2eee239001144388c" address="unix:///run/containerd/s/b8b61484694e79b9b751e957ead5f1068f750cfe46c2f60d6dc78fae52e75fb6" protocol=ttrpc version=3 Jul 7 00:17:56.360909 systemd-networkd[1483]: calicc8e4b8a0e7: Gained IPv6LL Jul 7 00:17:56.366503 systemd[1]: Started cri-containerd-00d7b449d1e3c1407c2918673dbf0a3f27f862c1f4a1d6c2eee239001144388c.scope - libcontainer container 00d7b449d1e3c1407c2918673dbf0a3f27f862c1f4a1d6c2eee239001144388c. Jul 7 00:17:56.461582 containerd[1580]: time="2025-07-07T00:17:56.461342157Z" level=info msg="StartContainer for \"00d7b449d1e3c1407c2918673dbf0a3f27f862c1f4a1d6c2eee239001144388c\" returns successfully" Jul 7 00:17:56.464594 containerd[1580]: time="2025-07-07T00:17:56.464232051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:17:57.151302 containerd[1580]: time="2025-07-07T00:17:57.151223461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srsnf,Uid:e1346988-cc35-4eb9-a474-b85af67f6104,Namespace:kube-system,Attempt:0,}" Jul 7 00:17:57.342801 systemd-networkd[1483]: calic85fbffe89d: Link UP Jul 7 00:17:57.345082 systemd-networkd[1483]: calic85fbffe89d: Gained carrier Jul 7 00:17:57.383420 containerd[1580]: 2025-07-07 00:17:57.195 [INFO][4134] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:17:57.383420 containerd[1580]: 2025-07-07 00:17:57.213 [INFO][4134] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0 coredns-668d6bf9bc- kube-system e1346988-cc35-4eb9-a474-b85af67f6104 837 0 2025-07-07 00:17:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal coredns-668d6bf9bc-srsnf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic85fbffe89d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Namespace="kube-system" Pod="coredns-668d6bf9bc-srsnf" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-" Jul 7 00:17:57.383420 containerd[1580]: 2025-07-07 00:17:57.213 [INFO][4134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Namespace="kube-system" Pod="coredns-668d6bf9bc-srsnf" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" Jul 7 00:17:57.383420 containerd[1580]: 2025-07-07 00:17:57.265 [INFO][4146] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" HandleID="k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" 
Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" Jul 7 00:17:57.385175 containerd[1580]: 2025-07-07 00:17:57.265 [INFO][4146] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" HandleID="k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5f70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-srsnf", "timestamp":"2025-07-07 00:17:57.265160567 +0000 UTC"}, Hostname:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:17:57.385175 containerd[1580]: 2025-07-07 00:17:57.265 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:17:57.385175 containerd[1580]: 2025-07-07 00:17:57.265 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:17:57.385175 containerd[1580]: 2025-07-07 00:17:57.266 [INFO][4146] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal' Jul 7 00:17:57.385175 containerd[1580]: 2025-07-07 00:17:57.277 [INFO][4146] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385175 containerd[1580]: 2025-07-07 00:17:57.285 [INFO][4146] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385175 containerd[1580]: 2025-07-07 00:17:57.293 [INFO][4146] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385175 containerd[1580]: 2025-07-07 00:17:57.298 [INFO][4146] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385618 containerd[1580]: 2025-07-07 00:17:57.303 [INFO][4146] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385618 containerd[1580]: 2025-07-07 00:17:57.303 [INFO][4146] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385618 containerd[1580]: 2025-07-07 00:17:57.306 [INFO][4146] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b Jul 7 00:17:57.385618 containerd[1580]: 2025-07-07 00:17:57.313 [INFO][4146] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385618 containerd[1580]: 
2025-07-07 00:17:57.327 [INFO][4146] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.194/26] block=192.168.119.192/26 handle="k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385618 containerd[1580]: 2025-07-07 00:17:57.327 [INFO][4146] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.194/26] handle="k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:57.385618 containerd[1580]: 2025-07-07 00:17:57.327 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:17:57.385618 containerd[1580]: 2025-07-07 00:17:57.327 [INFO][4146] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.194/26] IPv6=[] ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" HandleID="k8s-pod-network.c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" Jul 7 00:17:57.386036 containerd[1580]: 2025-07-07 00:17:57.334 [INFO][4134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Namespace="kube-system" Pod="coredns-668d6bf9bc-srsnf" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1346988-cc35-4eb9-a474-b85af67f6104", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-srsnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic85fbffe89d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:17:57.386036 containerd[1580]: 2025-07-07 00:17:57.334 [INFO][4134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.194/32] 
ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Namespace="kube-system" Pod="coredns-668d6bf9bc-srsnf" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" Jul 7 00:17:57.386036 containerd[1580]: 2025-07-07 00:17:57.335 [INFO][4134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic85fbffe89d ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Namespace="kube-system" Pod="coredns-668d6bf9bc-srsnf" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" Jul 7 00:17:57.386036 containerd[1580]: 2025-07-07 00:17:57.348 [INFO][4134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Namespace="kube-system" Pod="coredns-668d6bf9bc-srsnf" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" Jul 7 00:17:57.386036 containerd[1580]: 2025-07-07 00:17:57.350 [INFO][4134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Namespace="kube-system" Pod="coredns-668d6bf9bc-srsnf" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1346988-cc35-4eb9-a474-b85af67f6104", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b", Pod:"coredns-668d6bf9bc-srsnf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic85fbffe89d", MAC:"d6:16:3d:5a:32:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:17:57.386036 containerd[1580]: 2025-07-07 00:17:57.375 [INFO][4134] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" Namespace="kube-system" Pod="coredns-668d6bf9bc-srsnf" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--srsnf-eth0" Jul 7 00:17:57.440370 containerd[1580]: time="2025-07-07T00:17:57.439861086Z" level=info msg="connecting to shim c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b" address="unix:///run/containerd/s/02ce983534d59601efc482ede6f3feb1d82ee7e9ab3410d669550db14421f391" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:57.489600 systemd[1]: Started cri-containerd-c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b.scope - libcontainer container c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b. Jul 7 00:17:57.603871 containerd[1580]: time="2025-07-07T00:17:57.603796742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srsnf,Uid:e1346988-cc35-4eb9-a474-b85af67f6104,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b\"" Jul 7 00:17:57.615553 containerd[1580]: time="2025-07-07T00:17:57.615476159Z" level=info msg="CreateContainer within sandbox \"c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:17:57.650912 containerd[1580]: time="2025-07-07T00:17:57.650791310Z" level=info msg="Container 9878583a9deb79bb63802cfea917f1c1c6293720336cab88a1287a60e33da421: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:57.683162 containerd[1580]: time="2025-07-07T00:17:57.683100762Z" level=info msg="CreateContainer within sandbox \"c2d7943b92030006c07acf2859bc79952acde7a9f3f43e46525c2e092fc6456b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9878583a9deb79bb63802cfea917f1c1c6293720336cab88a1287a60e33da421\"" Jul 7 00:17:57.685472 containerd[1580]: time="2025-07-07T00:17:57.685429042Z" level=info msg="StartContainer for \"9878583a9deb79bb63802cfea917f1c1c6293720336cab88a1287a60e33da421\"" Jul 7 00:17:57.692717 containerd[1580]: time="2025-07-07T00:17:57.692398966Z" level=info msg="connecting to shim 9878583a9deb79bb63802cfea917f1c1c6293720336cab88a1287a60e33da421" address="unix:///run/containerd/s/02ce983534d59601efc482ede6f3feb1d82ee7e9ab3410d669550db14421f391" protocol=ttrpc version=3 Jul 7 00:17:57.779825 systemd[1]: Started cri-containerd-9878583a9deb79bb63802cfea917f1c1c6293720336cab88a1287a60e33da421.scope - libcontainer container 9878583a9deb79bb63802cfea917f1c1c6293720336cab88a1287a60e33da421. 
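The containerd entries above trace the usual CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox, and StartContainer launches the resulting task via a ttrpc shim socket. Below is a minimal sketch of the same sequence against containerd's CRI v1 endpoint, assuming the stock k8s.io/cri-api Go client and the default /run/containerd/containerd.sock socket; the pod, container, and image names are illustrative and not taken from this log, and a real sandbox config would need more fields (log directory, Linux options) than shown here.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI socket (the same endpoint the kubelet talks to).
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: create the pause sandbox and get its id back.
	sbConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "example-pod", Uid: "1234", Namespace: "default", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbConfig})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer: register a container inside that sandbox.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "example", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "docker.io/library/busybox:latest"},
		},
		SandboxConfig: sbConfig,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: containerd connects to the shim and starts the task,
	//    which is the point where the log prints "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: cc.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", cc.ContainerId, sb.PodSandboxId)
}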
Jul 7 00:17:57.944351 containerd[1580]: time="2025-07-07T00:17:57.944145340Z" level=info msg="StartContainer for \"9878583a9deb79bb63802cfea917f1c1c6293720336cab88a1287a60e33da421\" returns successfully" Jul 7 00:17:58.148783 containerd[1580]: time="2025-07-07T00:17:58.148282340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjkxw,Uid:2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd,Namespace:kube-system,Attempt:0,}" Jul 7 00:17:58.380254 systemd-networkd[1483]: cali2f62ce11215: Link UP Jul 7 00:17:58.383484 systemd-networkd[1483]: cali2f62ce11215: Gained carrier Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.207 [INFO][4260] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.236 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0 coredns-668d6bf9bc- kube-system 2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd 833 0 2025-07-07 00:17:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal coredns-668d6bf9bc-fjkxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f62ce11215 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjkxw" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.237 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjkxw" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.296 [INFO][4273] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" HandleID="k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.297 [INFO][4273] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" HandleID="k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-fjkxw", "timestamp":"2025-07-07 00:17:58.296655241 +0000 UTC"}, Hostname:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.297 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.297 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.297 [INFO][4273] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal' Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.311 [INFO][4273] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.319 [INFO][4273] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.326 [INFO][4273] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.330 [INFO][4273] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.336 [INFO][4273] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.336 [INFO][4273] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.338 [INFO][4273] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71 Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.347 [INFO][4273] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.362 [INFO][4273] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.195/26] block=192.168.119.192/26 handle="k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.363 [INFO][4273] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.195/26] handle="k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.363 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:17:58.434767 containerd[1580]: 2025-07-07 00:17:58.363 [INFO][4273] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.195/26] IPv6=[] ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" HandleID="k8s-pod-network.a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" Jul 7 00:17:58.442102 containerd[1580]: 2025-07-07 00:17:58.369 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjkxw" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-fjkxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f62ce11215", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:17:58.442102 containerd[1580]: 2025-07-07 00:17:58.370 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.195/32] ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjkxw" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" Jul 7 00:17:58.442102 containerd[1580]: 2025-07-07 00:17:58.370 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f62ce11215 ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjkxw" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" Jul 7 00:17:58.442102 containerd[1580]: 2025-07-07 
00:17:58.386 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjkxw" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" Jul 7 00:17:58.442102 containerd[1580]: 2025-07-07 00:17:58.389 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjkxw" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71", Pod:"coredns-668d6bf9bc-fjkxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f62ce11215", MAC:"4e:ad:55:fd:6d:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:17:58.442102 containerd[1580]: 2025-07-07 00:17:58.416 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjkxw" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--fjkxw-eth0" Jul 7 00:17:58.475235 kubelet[2782]: I0707 00:17:58.475150 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-srsnf" podStartSLOduration=39.475118957 podStartE2EDuration="39.475118957s" podCreationTimestamp="2025-07-07 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:17:58.440064086 +0000 UTC 
m=+44.543123507" watchObservedRunningTime="2025-07-07 00:17:58.475118957 +0000 UTC m=+44.578178375" Jul 7 00:17:58.543970 containerd[1580]: time="2025-07-07T00:17:58.543906224Z" level=info msg="connecting to shim a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71" address="unix:///run/containerd/s/baf33e57d2e8baaa5cd21484999de7ac1c49f4b3e7bfb9c77c3a36f7b82d46a5" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:58.630105 systemd[1]: Started cri-containerd-a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71.scope - libcontainer container a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71. Jul 7 00:17:58.754609 containerd[1580]: time="2025-07-07T00:17:58.754491075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjkxw,Uid:2e6e6cc9-ae3e-4fd0-b9f4-9a18bdb005fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71\"" Jul 7 00:17:58.762730 containerd[1580]: time="2025-07-07T00:17:58.762686187Z" level=info msg="CreateContainer within sandbox \"a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:17:58.786275 containerd[1580]: time="2025-07-07T00:17:58.784924765Z" level=info msg="Container 83b9a963b2a8f0148d2ea5be2d63e9837181e99d72751c9df091dbde83ca0173: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:17:58.792961 systemd-networkd[1483]: calic85fbffe89d: Gained IPv6LL Jul 7 00:17:58.801528 containerd[1580]: time="2025-07-07T00:17:58.801383565Z" level=info msg="CreateContainer within sandbox \"a55eb0aa1e651bb21f9aecc823efd68e1f1aefbaf3483f020bc788cccb565a71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83b9a963b2a8f0148d2ea5be2d63e9837181e99d72751c9df091dbde83ca0173\"" Jul 7 00:17:58.803909 containerd[1580]: time="2025-07-07T00:17:58.803600209Z" level=info msg="StartContainer for \"83b9a963b2a8f0148d2ea5be2d63e9837181e99d72751c9df091dbde83ca0173\"" Jul 7 00:17:58.808161 containerd[1580]: time="2025-07-07T00:17:58.807840401Z" level=info msg="connecting to shim 83b9a963b2a8f0148d2ea5be2d63e9837181e99d72751c9df091dbde83ca0173" address="unix:///run/containerd/s/baf33e57d2e8baaa5cd21484999de7ac1c49f4b3e7bfb9c77c3a36f7b82d46a5" protocol=ttrpc version=3 Jul 7 00:17:58.846673 systemd[1]: Started cri-containerd-83b9a963b2a8f0148d2ea5be2d63e9837181e99d72751c9df091dbde83ca0173.scope - libcontainer container 83b9a963b2a8f0148d2ea5be2d63e9837181e99d72751c9df091dbde83ca0173. 
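The calico-ipam traces above all resolve against the same host-affine block, 192.168.119.192/26, which is why consecutive workloads scheduled to this node receive consecutive addresses: .193 for the whisker pod, then .194 and .195 for the two coredns pods. A small sketch of the block arithmetic, assuming nothing beyond the CIDRs printed in the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Host-affine IPAM block reported by calico-ipam for this node.
	block := netip.MustParsePrefix("192.168.119.192/26") // 2^(32-26) = 64 addresses

	// Addresses handed out so far in this log, one per workload endpoint.
	assigned := []netip.Addr{
		netip.MustParseAddr("192.168.119.193"), // whisker-8446cb4b77-9rx5n
		netip.MustParseAddr("192.168.119.194"), // coredns-668d6bf9bc-srsnf
		netip.MustParseAddr("192.168.119.195"), // coredns-668d6bf9bc-fjkxw
	}
	for _, ip := range assigned {
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
	// The block spans 192.168.119.192 through 192.168.119.255, so this node can
	// hold up to 64 endpoint addresses before Calico has to claim another block.
}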
Jul 7 00:17:58.941568 containerd[1580]: time="2025-07-07T00:17:58.941497471Z" level=info msg="StartContainer for \"83b9a963b2a8f0148d2ea5be2d63e9837181e99d72751c9df091dbde83ca0173\" returns successfully" Jul 7 00:17:59.146841 containerd[1580]: time="2025-07-07T00:17:59.146586482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q26kq,Uid:bed4cce4-6ae5-4b99-ae24-8406be485d96,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:59.158469 containerd[1580]: time="2025-07-07T00:17:59.158191320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xf2tr,Uid:a6155bdc-cda1-4bd4-8088-60a9cf521c10,Namespace:calico-system,Attempt:0,}" Jul 7 00:17:59.549143 kubelet[2782]: I0707 00:17:59.549060 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fjkxw" podStartSLOduration=40.549032619 podStartE2EDuration="40.549032619s" podCreationTimestamp="2025-07-07 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:17:59.476444468 +0000 UTC m=+45.579503897" watchObservedRunningTime="2025-07-07 00:17:59.549032619 +0000 UTC m=+45.652092039" Jul 7 00:17:59.717058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794569192.mount: Deactivated successfully. Jul 7 00:17:59.742304 containerd[1580]: time="2025-07-07T00:17:59.742007433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:59.744880 containerd[1580]: time="2025-07-07T00:17:59.744805641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:17:59.747479 containerd[1580]: time="2025-07-07T00:17:59.746144940Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:59.753276 containerd[1580]: time="2025-07-07T00:17:59.752766153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:59.754765 containerd[1580]: time="2025-07-07T00:17:59.754699284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.290393688s" Jul 7 00:17:59.754978 containerd[1580]: time="2025-07-07T00:17:59.754951258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:17:59.758082 containerd[1580]: time="2025-07-07T00:17:59.758045059Z" level=info msg="CreateContainer within sandbox \"9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:17:59.772501 containerd[1580]: time="2025-07-07T00:17:59.772446253Z" level=info msg="Container 1581d8fc22f086293dfb4cf65b9e97e35862220adf74339f9ca7378656793207: CDI devices from CRI Config.CDIDevices: []" Jul 7 
00:17:59.789150 containerd[1580]: time="2025-07-07T00:17:59.789099439Z" level=info msg="CreateContainer within sandbox \"9a657270c183a3a150ec9d4ad42fee9978f84ecded0e622b9baeb3130f269cb5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1581d8fc22f086293dfb4cf65b9e97e35862220adf74339f9ca7378656793207\"" Jul 7 00:17:59.790398 containerd[1580]: time="2025-07-07T00:17:59.790308192Z" level=info msg="StartContainer for \"1581d8fc22f086293dfb4cf65b9e97e35862220adf74339f9ca7378656793207\"" Jul 7 00:17:59.792632 containerd[1580]: time="2025-07-07T00:17:59.792573163Z" level=info msg="connecting to shim 1581d8fc22f086293dfb4cf65b9e97e35862220adf74339f9ca7378656793207" address="unix:///run/containerd/s/b8b61484694e79b9b751e957ead5f1068f750cfe46c2f60d6dc78fae52e75fb6" protocol=ttrpc version=3 Jul 7 00:17:59.798441 systemd-networkd[1483]: cali9306c4be5eb: Link UP Jul 7 00:17:59.800463 systemd-networkd[1483]: cali9306c4be5eb: Gained carrier Jul 7 00:17:59.855175 systemd[1]: Started cri-containerd-1581d8fc22f086293dfb4cf65b9e97e35862220adf74339f9ca7378656793207.scope - libcontainer container 1581d8fc22f086293dfb4cf65b9e97e35862220adf74339f9ca7378656793207. Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.386 [INFO][4391] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.457 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0 csi-node-driver- calico-system a6155bdc-cda1-4bd4-8088-60a9cf521c10 729 0 2025-07-07 00:17:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal csi-node-driver-xf2tr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9306c4be5eb [] [] }} ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Namespace="calico-system" Pod="csi-node-driver-xf2tr" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.458 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Namespace="calico-system" Pod="csi-node-driver-xf2tr" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.660 [INFO][4415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" HandleID="k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.661 [INFO][4415] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" HandleID="k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" 
Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039dec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", "pod":"csi-node-driver-xf2tr", "timestamp":"2025-07-07 00:17:59.660444685 +0000 UTC"}, Hostname:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.664 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.664 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.664 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal' Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.693 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.703 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.716 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.722 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.727 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.727 [INFO][4415] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.731 [INFO][4415] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010 Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.742 [INFO][4415] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.755 [INFO][4415] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.196/26] block=192.168.119.192/26 handle="k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.755 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.119.196/26] handle="k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.756 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:17:59.880405 containerd[1580]: 2025-07-07 00:17:59.757 [INFO][4415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.196/26] IPv6=[] ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" HandleID="k8s-pod-network.a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" Jul 7 00:17:59.881620 containerd[1580]: 2025-07-07 00:17:59.769 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Namespace="calico-system" Pod="csi-node-driver-xf2tr" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6155bdc-cda1-4bd4-8088-60a9cf521c10", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-xf2tr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9306c4be5eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:17:59.881620 containerd[1580]: 2025-07-07 00:17:59.772 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.196/32] ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Namespace="calico-system" Pod="csi-node-driver-xf2tr" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" Jul 7 00:17:59.881620 containerd[1580]: 2025-07-07 00:17:59.772 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9306c4be5eb ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Namespace="calico-system" Pod="csi-node-driver-xf2tr" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" Jul 7 00:17:59.881620 
containerd[1580]: 2025-07-07 00:17:59.813 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Namespace="calico-system" Pod="csi-node-driver-xf2tr" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" Jul 7 00:17:59.881620 containerd[1580]: 2025-07-07 00:17:59.817 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Namespace="calico-system" Pod="csi-node-driver-xf2tr" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6155bdc-cda1-4bd4-8088-60a9cf521c10", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010", Pod:"csi-node-driver-xf2tr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9306c4be5eb", MAC:"3a:2e:58:51:e5:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:17:59.881620 containerd[1580]: 2025-07-07 00:17:59.859 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" Namespace="calico-system" Pod="csi-node-driver-xf2tr" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-csi--node--driver--xf2tr-eth0" Jul 7 00:17:59.949906 containerd[1580]: time="2025-07-07T00:17:59.949767097Z" level=info msg="connecting to shim a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010" address="unix:///run/containerd/s/2d35d818d1ccd86d38693c2c8e302a495741ac328f1df83e7b72e825b1f2c199" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:17:59.971214 systemd-networkd[1483]: calidbf423eda09: Link UP Jul 7 00:17:59.973460 systemd-networkd[1483]: calidbf423eda09: Gained carrier Jul 7 00:18:00.027131 systemd[1]: Started cri-containerd-a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010.scope - libcontainer container a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010. 
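The endpoint dump above shows the csi-node-driver pod being handed 192.168.119.196/32 out of the node's affine block 192.168.119.192/26, and the allocations later in this log continue in order with .197 through .200. A minimal sketch of that block-walking idea in Go, using only net/netip and an in-memory "used" set (the helper name nextFree and the pre-claimed .192-.195 are assumptions for illustration, not Calico's actual datastore-backed allocator):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the block that is not already used.
// Illustrative only: real Calico IPAM persists the block and its allocations
// in its datastore and takes a host-wide lock around the assignment.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.119.192/26")
	used := map[netip.Addr]bool{}
	// Pretend .192-.195 were claimed by earlier endpoints on this node.
	for a, i := block.Addr(), 0; i < 4; i, a = i+1, a.Next() {
		used[a] = true
	}
	next, _ := nextFree(block, used)
	fmt.Println(next) // 192.168.119.196 — the address claimed in the log above
}
```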
Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.345 [INFO][4384] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.436 [INFO][4384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0 goldmane-768f4c5c69- calico-system bed4cce4-6ae5-4b99-ae24-8406be485d96 841 0 2025-07-07 00:17:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal goldmane-768f4c5c69-q26kq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidbf423eda09 [] [] }} ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Namespace="calico-system" Pod="goldmane-768f4c5c69-q26kq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.436 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Namespace="calico-system" Pod="goldmane-768f4c5c69-q26kq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.666 [INFO][4413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" HandleID="k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.669 [INFO][4413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" HandleID="k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041dae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", "pod":"goldmane-768f4c5c69-q26kq", "timestamp":"2025-07-07 00:17:59.666585212 +0000 UTC"}, Hostname:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.669 [INFO][4413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.756 [INFO][4413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
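The timestamps make the host-wide IPAM lock visible: request [4415] (csi-node-driver) acquires it at 00:17:59.664 and releases it at 00:17:59.756, while request [4413] (goldmane), which announced "About to acquire" at 00:17:59.669, only acquires it at 00:17:59.756, immediately after that release. Address assignment on a node is serialized. A rough sketch of that ordering with a plain sync.Mutex (purely illustrative; Calico's real lock lives inside its IPAM plugin, not in user code, and the pod names are just taken from the log):

```go
package main

import (
	"fmt"
	"sync"
)

// assign simulates one CNI ADD taking the host-wide lock, claiming the next
// address, and releasing the lock before the other request can proceed.
func assign(name string, lock *sync.Mutex, next *int, wg *sync.WaitGroup) {
	defer wg.Done()
	lock.Lock()
	defer lock.Unlock()
	ip := fmt.Sprintf("192.168.119.%d", *next)
	*next++
	fmt.Println(name, "claimed", ip)
}

func main() {
	var (
		lock sync.Mutex
		wg   sync.WaitGroup
		next = 196
	)
	wg.Add(2)
	go assign("csi-node-driver-xf2tr", &lock, &next, &wg)
	go assign("goldmane-768f4c5c69-q26kq", &lock, &next, &wg)
	wg.Wait() // whichever request wins the lock gets .196, the other .197
}
```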
Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.758 [INFO][4413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal' Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.813 [INFO][4413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.834 [INFO][4413] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.856 [INFO][4413] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.873 [INFO][4413] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.896 [INFO][4413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.896 [INFO][4413] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.899 [INFO][4413] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981 Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.918 [INFO][4413] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.958 [INFO][4413] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.197/26] block=192.168.119.192/26 handle="k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.958 [INFO][4413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.197/26] handle="k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.958 [INFO][4413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
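Further up, the kubelet's pod_startup_latency_tracker reports podStartSLOduration=40.549032619 for coredns-668d6bf9bc-fjkxw; that figure is simply the observed running time (2025-07-07 00:17:59.549032619 UTC) minus the pod creation timestamp (2025-07-07 00:17:19 UTC). A small check of that arithmetic with Go's time package, parsing the timestamps in the same format the log prints them:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-07-07 00:17:19 +0000 UTC" form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-07-07 00:17:19 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-07 00:17:59.549032619 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 40.549032619s, matching podStartSLOduration
}
```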
Jul 7 00:18:00.030496 containerd[1580]: 2025-07-07 00:17:59.958 [INFO][4413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.197/26] IPv6=[] ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" HandleID="k8s-pod-network.4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" Jul 7 00:18:00.032007 containerd[1580]: 2025-07-07 00:17:59.963 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Namespace="calico-system" Pod="goldmane-768f4c5c69-q26kq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bed4cce4-6ae5-4b99-ae24-8406be485d96", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-768f4c5c69-q26kq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbf423eda09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:18:00.032007 containerd[1580]: 2025-07-07 00:17:59.964 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.197/32] ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Namespace="calico-system" Pod="goldmane-768f4c5c69-q26kq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" Jul 7 00:18:00.032007 containerd[1580]: 2025-07-07 00:17:59.965 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbf423eda09 ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Namespace="calico-system" Pod="goldmane-768f4c5c69-q26kq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" Jul 7 00:18:00.032007 containerd[1580]: 2025-07-07 00:17:59.974 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Namespace="calico-system" Pod="goldmane-768f4c5c69-q26kq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" Jul 7 
00:18:00.032007 containerd[1580]: 2025-07-07 00:17:59.975 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Namespace="calico-system" Pod="goldmane-768f4c5c69-q26kq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"bed4cce4-6ae5-4b99-ae24-8406be485d96", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981", Pod:"goldmane-768f4c5c69-q26kq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbf423eda09", MAC:"2e:53:74:93:d2:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:18:00.032007 containerd[1580]: 2025-07-07 00:18:00.012 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" Namespace="calico-system" Pod="goldmane-768f4c5c69-q26kq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-goldmane--768f4c5c69--q26kq-eth0" Jul 7 00:18:00.097361 containerd[1580]: time="2025-07-07T00:18:00.097164196Z" level=info msg="connecting to shim 4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981" address="unix:///run/containerd/s/31a599a103d1dd3f759310ea4bbb9f90d40f86d076162e90aacdc68ffc25db17" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:00.154929 containerd[1580]: time="2025-07-07T00:18:00.152881684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f45f85b7-gq49t,Uid:c378be6c-72b3-4024-8c67-72fb1764e8ac,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:18:00.169667 containerd[1580]: time="2025-07-07T00:18:00.169505006Z" level=info msg="StartContainer for \"1581d8fc22f086293dfb4cf65b9e97e35862220adf74339f9ca7378656793207\" returns successfully" Jul 7 00:18:00.206367 containerd[1580]: time="2025-07-07T00:18:00.206322878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xf2tr,Uid:a6155bdc-cda1-4bd4-8088-60a9cf521c10,Namespace:calico-system,Attempt:0,} returns sandbox id \"a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010\"" Jul 7 00:18:00.207769 systemd[1]: Started 
cri-containerd-4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981.scope - libcontainer container 4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981. Jul 7 00:18:00.216024 containerd[1580]: time="2025-07-07T00:18:00.215969609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:18:00.392705 systemd-networkd[1483]: cali2f62ce11215: Gained IPv6LL Jul 7 00:18:00.478940 systemd-networkd[1483]: calia7071391590: Link UP Jul 7 00:18:00.488292 systemd-networkd[1483]: calia7071391590: Gained carrier Jul 7 00:18:00.541822 kubelet[2782]: I0707 00:18:00.541019 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8446cb4b77-9rx5n" podStartSLOduration=2.164848643 podStartE2EDuration="6.540991609s" podCreationTimestamp="2025-07-07 00:17:54 +0000 UTC" firstStartedPulling="2025-07-07 00:17:55.379913679 +0000 UTC m=+41.482973087" lastFinishedPulling="2025-07-07 00:17:59.756056643 +0000 UTC m=+45.859116053" observedRunningTime="2025-07-07 00:18:00.536027926 +0000 UTC m=+46.639087344" watchObservedRunningTime="2025-07-07 00:18:00.540991609 +0000 UTC m=+46.644051026" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.261 [INFO][4547] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.286 [INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0 calico-apiserver-6f45f85b7- calico-apiserver c378be6c-72b3-4024-8c67-72fb1764e8ac 840 0 2025-07-07 00:17:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f45f85b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal calico-apiserver-6f45f85b7-gq49t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia7071391590 [] [] }} ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-gq49t" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.287 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-gq49t" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.352 [INFO][4584] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" HandleID="k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.353 [INFO][4584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" 
HandleID="k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", "pod":"calico-apiserver-6f45f85b7-gq49t", "timestamp":"2025-07-07 00:18:00.352914254 +0000 UTC"}, Hostname:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.353 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.354 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.354 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal' Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.369 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.376 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.382 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.385 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.388 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.388 [INFO][4584] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.398 [INFO][4584] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.409 [INFO][4584] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.435 [INFO][4584] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.198/26] block=192.168.119.192/26 handle="k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 
00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.436 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.198/26] handle="k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.438 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:18:00.545448 containerd[1580]: 2025-07-07 00:18:00.441 [INFO][4584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.198/26] IPv6=[] ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" HandleID="k8s-pod-network.a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" Jul 7 00:18:00.546853 containerd[1580]: 2025-07-07 00:18:00.458 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-gq49t" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0", GenerateName:"calico-apiserver-6f45f85b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c378be6c-72b3-4024-8c67-72fb1764e8ac", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f45f85b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6f45f85b7-gq49t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7071391590", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:18:00.546853 containerd[1580]: 2025-07-07 00:18:00.459 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.198/32] ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-gq49t" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" Jul 7 00:18:00.546853 containerd[1580]: 2025-07-07 00:18:00.459 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7071391590 ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" 
Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-gq49t" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" Jul 7 00:18:00.546853 containerd[1580]: 2025-07-07 00:18:00.506 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-gq49t" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" Jul 7 00:18:00.546853 containerd[1580]: 2025-07-07 00:18:00.507 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-gq49t" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0", GenerateName:"calico-apiserver-6f45f85b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c378be6c-72b3-4024-8c67-72fb1764e8ac", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f45f85b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d", Pod:"calico-apiserver-6f45f85b7-gq49t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7071391590", MAC:"76:44:d8:45:8f:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:18:00.546853 containerd[1580]: 2025-07-07 00:18:00.529 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-gq49t" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--gq49t-eth0" Jul 7 00:18:00.580007 containerd[1580]: time="2025-07-07T00:18:00.579229990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q26kq,Uid:bed4cce4-6ae5-4b99-ae24-8406be485d96,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981\"" Jul 7 00:18:00.610101 containerd[1580]: time="2025-07-07T00:18:00.609986826Z" level=info 
msg="connecting to shim a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d" address="unix:///run/containerd/s/2cf2e6040e2f3a3192035a6ed4d1d0713f566b640ae903ec9f595205ea8a4277" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:00.675549 systemd[1]: Started cri-containerd-a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d.scope - libcontainer container a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d. Jul 7 00:18:00.795875 containerd[1580]: time="2025-07-07T00:18:00.795735475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f45f85b7-gq49t,Uid:c378be6c-72b3-4024-8c67-72fb1764e8ac,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d\"" Jul 7 00:18:01.096416 systemd-networkd[1483]: calidbf423eda09: Gained IPv6LL Jul 7 00:18:01.225110 systemd-networkd[1483]: cali9306c4be5eb: Gained IPv6LL Jul 7 00:18:01.386650 containerd[1580]: time="2025-07-07T00:18:01.386193617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:01.388376 containerd[1580]: time="2025-07-07T00:18:01.388324251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:18:01.389541 containerd[1580]: time="2025-07-07T00:18:01.389371389Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:01.394573 containerd[1580]: time="2025-07-07T00:18:01.394495259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:01.397352 containerd[1580]: time="2025-07-07T00:18:01.397304406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.181276039s" Jul 7 00:18:01.397611 containerd[1580]: time="2025-07-07T00:18:01.397527980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:18:01.401257 containerd[1580]: time="2025-07-07T00:18:01.400684893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:18:01.402498 containerd[1580]: time="2025-07-07T00:18:01.402462333Z" level=info msg="CreateContainer within sandbox \"a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:18:01.425401 containerd[1580]: time="2025-07-07T00:18:01.421881682Z" level=info msg="Container e8b8aa96b3e6749dd34662dcc5f363ba32775cfd0af8d2f70572bc9fe68db90c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:01.445160 containerd[1580]: time="2025-07-07T00:18:01.445092446Z" level=info msg="CreateContainer within sandbox \"a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e8b8aa96b3e6749dd34662dcc5f363ba32775cfd0af8d2f70572bc9fe68db90c\"" Jul 7 00:18:01.446678 
containerd[1580]: time="2025-07-07T00:18:01.446628867Z" level=info msg="StartContainer for \"e8b8aa96b3e6749dd34662dcc5f363ba32775cfd0af8d2f70572bc9fe68db90c\"" Jul 7 00:18:01.449611 containerd[1580]: time="2025-07-07T00:18:01.449565447Z" level=info msg="connecting to shim e8b8aa96b3e6749dd34662dcc5f363ba32775cfd0af8d2f70572bc9fe68db90c" address="unix:///run/containerd/s/2d35d818d1ccd86d38693c2c8e302a495741ac328f1df83e7b72e825b1f2c199" protocol=ttrpc version=3 Jul 7 00:18:01.495276 systemd[1]: Started cri-containerd-e8b8aa96b3e6749dd34662dcc5f363ba32775cfd0af8d2f70572bc9fe68db90c.scope - libcontainer container e8b8aa96b3e6749dd34662dcc5f363ba32775cfd0af8d2f70572bc9fe68db90c. Jul 7 00:18:01.632478 containerd[1580]: time="2025-07-07T00:18:01.632310239Z" level=info msg="StartContainer for \"e8b8aa96b3e6749dd34662dcc5f363ba32775cfd0af8d2f70572bc9fe68db90c\" returns successfully" Jul 7 00:18:02.146619 containerd[1580]: time="2025-07-07T00:18:02.146541059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f45f85b7-4c6tq,Uid:fc888973-7a8d-44bc-afa1-d07672f1bdb4,Namespace:calico-apiserver,Attempt:0,}" Jul 7 00:18:02.149381 containerd[1580]: time="2025-07-07T00:18:02.149331927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d59d975-pdszg,Uid:f4f95e6f-b466-498b-a081-ef0cabb35977,Namespace:calico-system,Attempt:0,}" Jul 7 00:18:02.313446 systemd-networkd[1483]: calia7071391590: Gained IPv6LL Jul 7 00:18:02.398199 systemd-networkd[1483]: calid57e4e2fd70: Link UP Jul 7 00:18:02.400964 systemd-networkd[1483]: calid57e4e2fd70: Gained carrier Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.248 [INFO][4721] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.276 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0 calico-kube-controllers-57d59d975- calico-system f4f95e6f-b466-498b-a081-ef0cabb35977 838 0 2025-07-07 00:17:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57d59d975 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal calico-kube-controllers-57d59d975-pdszg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid57e4e2fd70 [] [] }} ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Namespace="calico-system" Pod="calico-kube-controllers-57d59d975-pdszg" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.276 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Namespace="calico-system" Pod="calico-kube-controllers-57d59d975-pdszg" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.333 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" HandleID="k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.334 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" HandleID="k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", "pod":"calico-kube-controllers-57d59d975-pdszg", "timestamp":"2025-07-07 00:18:02.333193204 +0000 UTC"}, Hostname:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.334 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.334 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.334 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal' Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.347 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.353 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.360 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.362 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.366 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.366 [INFO][4744] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.368 [INFO][4744] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8 Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.376 [INFO][4744] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.119.192/26 handle="k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.388 [INFO][4744] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.199/26] block=192.168.119.192/26 handle="k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.389 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.199/26] handle="k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.389 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:18:02.425192 containerd[1580]: 2025-07-07 00:18:02.389 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.199/26] IPv6=[] ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" HandleID="k8s-pod-network.9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" Jul 7 00:18:02.427057 containerd[1580]: 2025-07-07 00:18:02.392 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Namespace="calico-system" Pod="calico-kube-controllers-57d59d975-pdszg" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0", GenerateName:"calico-kube-controllers-57d59d975-", Namespace:"calico-system", SelfLink:"", UID:"f4f95e6f-b466-498b-a081-ef0cabb35977", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d59d975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-57d59d975-pdszg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid57e4e2fd70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:18:02.427057 containerd[1580]: 2025-07-07 00:18:02.393 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: 
[192.168.119.199/32] ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Namespace="calico-system" Pod="calico-kube-controllers-57d59d975-pdszg" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" Jul 7 00:18:02.427057 containerd[1580]: 2025-07-07 00:18:02.393 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid57e4e2fd70 ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Namespace="calico-system" Pod="calico-kube-controllers-57d59d975-pdszg" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" Jul 7 00:18:02.427057 containerd[1580]: 2025-07-07 00:18:02.400 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Namespace="calico-system" Pod="calico-kube-controllers-57d59d975-pdszg" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" Jul 7 00:18:02.427057 containerd[1580]: 2025-07-07 00:18:02.401 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" Namespace="calico-system" Pod="calico-kube-controllers-57d59d975-pdszg" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0", GenerateName:"calico-kube-controllers-57d59d975-", Namespace:"calico-system", SelfLink:"", UID:"f4f95e6f-b466-498b-a081-ef0cabb35977", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57d59d975", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8", Pod:"calico-kube-controllers-57d59d975-pdszg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid57e4e2fd70", MAC:"1e:7c:36:0e:71:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:18:02.427057 containerd[1580]: 2025-07-07 00:18:02.420 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" 
Namespace="calico-system" Pod="calico-kube-controllers-57d59d975-pdszg" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--kube--controllers--57d59d975--pdszg-eth0" Jul 7 00:18:02.479698 containerd[1580]: time="2025-07-07T00:18:02.478671283Z" level=info msg="connecting to shim 9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8" address="unix:///run/containerd/s/16ea7a106b68c87625e85f602ba941a81f43f8e48137af1f00c66a448a123a71" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:02.527881 systemd[1]: Started cri-containerd-9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8.scope - libcontainer container 9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8. Jul 7 00:18:02.591045 systemd-networkd[1483]: cali8513e478d5c: Link UP Jul 7 00:18:02.594807 systemd-networkd[1483]: cali8513e478d5c: Gained carrier Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.256 [INFO][4719] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.282 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0 calico-apiserver-6f45f85b7- calico-apiserver fc888973-7a8d-44bc-afa1-d07672f1bdb4 839 0 2025-07-07 00:17:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f45f85b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal calico-apiserver-6f45f85b7-4c6tq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8513e478d5c [] [] }} ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-4c6tq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.282 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-4c6tq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.342 [INFO][4749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" HandleID="k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.344 [INFO][4749] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" HandleID="k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d54e0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", "pod":"calico-apiserver-6f45f85b7-4c6tq", "timestamp":"2025-07-07 00:18:02.342901187 +0000 UTC"}, Hostname:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.344 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.389 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.389 [INFO][4749] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal' Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.452 [INFO][4749] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.466 [INFO][4749] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.477 [INFO][4749] ipam/ipam.go 511: Trying affinity for 192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.483 [INFO][4749] ipam/ipam.go 158: Attempting to load block cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.489 [INFO][4749] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.489 [INFO][4749] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.510 [INFO][4749] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230 Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.527 [INFO][4749] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.558 [INFO][4749] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.119.200/26] block=192.168.119.192/26 handle="k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.559 [INFO][4749] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.119.200/26] handle="k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" 
host="ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal" Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.561 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:18:02.652364 containerd[1580]: 2025-07-07 00:18:02.561 [INFO][4749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.119.200/26] IPv6=[] ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" HandleID="k8s-pod-network.335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Workload="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" Jul 7 00:18:02.656736 containerd[1580]: 2025-07-07 00:18:02.579 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-4c6tq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0", GenerateName:"calico-apiserver-6f45f85b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc888973-7a8d-44bc-afa1-d07672f1bdb4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f45f85b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6f45f85b7-4c6tq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8513e478d5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:18:02.656736 containerd[1580]: 2025-07-07 00:18:02.580 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.200/32] ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-4c6tq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" Jul 7 00:18:02.656736 containerd[1580]: 2025-07-07 00:18:02.580 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8513e478d5c ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-4c6tq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" Jul 7 00:18:02.656736 containerd[1580]: 2025-07-07 
00:18:02.596 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-4c6tq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" Jul 7 00:18:02.656736 containerd[1580]: 2025-07-07 00:18:02.598 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-4c6tq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0", GenerateName:"calico-apiserver-6f45f85b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc888973-7a8d-44bc-afa1-d07672f1bdb4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f45f85b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344-1-1-674877a9a3dd0f552365.c.flatcar-212911.internal", ContainerID:"335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230", Pod:"calico-apiserver-6f45f85b7-4c6tq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8513e478d5c", MAC:"76:5a:b7:26:05:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:18:02.656736 containerd[1580]: 2025-07-07 00:18:02.624 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" Namespace="calico-apiserver" Pod="calico-apiserver-6f45f85b7-4c6tq" WorkloadEndpoint="ci--4344--1--1--674877a9a3dd0f552365.c.flatcar--212911.internal-k8s-calico--apiserver--6f45f85b7--4c6tq-eth0" Jul 7 00:18:02.820838 containerd[1580]: time="2025-07-07T00:18:02.820764692Z" level=info msg="connecting to shim 335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230" address="unix:///run/containerd/s/bd47fe09a023a84181f78ea8926859430c3ed0f9cb5a40b752070fddf4262aef" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:02.874148 containerd[1580]: time="2025-07-07T00:18:02.873994941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57d59d975-pdszg,Uid:f4f95e6f-b466-498b-a081-ef0cabb35977,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8\"" Jul 7 
00:18:02.911398 systemd[1]: Started cri-containerd-335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230.scope - libcontainer container 335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230. Jul 7 00:18:03.291881 kubelet[2782]: I0707 00:18:03.291640 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:18:03.305864 containerd[1580]: time="2025-07-07T00:18:03.305740731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f45f85b7-4c6tq,Uid:fc888973-7a8d-44bc-afa1-d07672f1bdb4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230\"" Jul 7 00:18:03.912578 systemd-networkd[1483]: cali8513e478d5c: Gained IPv6LL Jul 7 00:18:04.233754 systemd-networkd[1483]: calid57e4e2fd70: Gained IPv6LL Jul 7 00:18:04.719687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718500842.mount: Deactivated successfully. Jul 7 00:18:05.230719 systemd-networkd[1483]: vxlan.calico: Link UP Jul 7 00:18:05.235287 systemd-networkd[1483]: vxlan.calico: Gained carrier Jul 7 00:18:06.267200 containerd[1580]: time="2025-07-07T00:18:06.267129171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:06.269064 containerd[1580]: time="2025-07-07T00:18:06.269011294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:18:06.271846 containerd[1580]: time="2025-07-07T00:18:06.271767338Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:06.277754 containerd[1580]: time="2025-07-07T00:18:06.277531551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:06.280069 containerd[1580]: time="2025-07-07T00:18:06.279922819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.87917079s" Jul 7 00:18:06.280069 containerd[1580]: time="2025-07-07T00:18:06.279974808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:18:06.284282 containerd[1580]: time="2025-07-07T00:18:06.283993849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:18:06.288950 containerd[1580]: time="2025-07-07T00:18:06.288803136Z" level=info msg="CreateContainer within sandbox \"4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:18:06.305588 containerd[1580]: time="2025-07-07T00:18:06.304493384Z" level=info msg="Container a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:06.332601 containerd[1580]: time="2025-07-07T00:18:06.332525794Z" level=info msg="CreateContainer within sandbox 
\"4cde09919224cc9b7efa601db7ece714b71ee81e50e3e1f98efb4f8873159981\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\"" Jul 7 00:18:06.336468 containerd[1580]: time="2025-07-07T00:18:06.335865533Z" level=info msg="StartContainer for \"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\"" Jul 7 00:18:06.340295 containerd[1580]: time="2025-07-07T00:18:06.340223248Z" level=info msg="connecting to shim a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c" address="unix:///run/containerd/s/31a599a103d1dd3f759310ea4bbb9f90d40f86d076162e90aacdc68ffc25db17" protocol=ttrpc version=3 Jul 7 00:18:06.379547 systemd[1]: Started cri-containerd-a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c.scope - libcontainer container a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c. Jul 7 00:18:06.459183 containerd[1580]: time="2025-07-07T00:18:06.459059773Z" level=info msg="StartContainer for \"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\" returns successfully" Jul 7 00:18:06.723932 kubelet[2782]: I0707 00:18:06.723539 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-q26kq" podStartSLOduration=27.023505513 podStartE2EDuration="32.723509847s" podCreationTimestamp="2025-07-07 00:17:34 +0000 UTC" firstStartedPulling="2025-07-07 00:18:00.58308893 +0000 UTC m=+46.686148334" lastFinishedPulling="2025-07-07 00:18:06.283093251 +0000 UTC m=+52.386152668" observedRunningTime="2025-07-07 00:18:06.722197871 +0000 UTC m=+52.825257287" watchObservedRunningTime="2025-07-07 00:18:06.723509847 +0000 UTC m=+52.826569306" Jul 7 00:18:06.836280 containerd[1580]: time="2025-07-07T00:18:06.836152519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\" id:\"0096d7237a4e0bcfd10776115102ad63560e889e015067f384aa2f3b25e5e5ea\" pid:5065 exit_status:1 exited_at:{seconds:1751847486 nanos:835386593}" Jul 7 00:18:07.177591 systemd-networkd[1483]: vxlan.calico: Gained IPv6LL Jul 7 00:18:07.817560 containerd[1580]: time="2025-07-07T00:18:07.817461166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\" id:\"7b0642b6448c9625139983d3ba24fc35a00fda2e03dff388b2a81c8c9ff1d089\" pid:5089 exit_status:1 exited_at:{seconds:1751847487 nanos:816278523}" Jul 7 00:18:08.893956 containerd[1580]: time="2025-07-07T00:18:08.893803191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\" id:\"70e757c0b87eb426e7e42b4be73be3607fbf70e8f5ca44e8dc1d786d336a89e0\" pid:5118 exit_status:1 exited_at:{seconds:1751847488 nanos:892992732}" Jul 7 00:18:09.110848 containerd[1580]: time="2025-07-07T00:18:09.110793626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:09.112027 containerd[1580]: time="2025-07-07T00:18:09.111982864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:18:09.114268 containerd[1580]: time="2025-07-07T00:18:09.113081585Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 
00:18:09.117179 containerd[1580]: time="2025-07-07T00:18:09.117134146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:09.118322 containerd[1580]: time="2025-07-07T00:18:09.118277049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.833491208s" Jul 7 00:18:09.118445 containerd[1580]: time="2025-07-07T00:18:09.118328556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:18:09.120812 containerd[1580]: time="2025-07-07T00:18:09.120778714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:18:09.122733 containerd[1580]: time="2025-07-07T00:18:09.122681300Z" level=info msg="CreateContainer within sandbox \"a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:18:09.134770 containerd[1580]: time="2025-07-07T00:18:09.134648733Z" level=info msg="Container 01351227fe231e39fdb902254b53cc524111ef77d4b1fbbb54522548c2316c91: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:09.148925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556780705.mount: Deactivated successfully. Jul 7 00:18:09.151206 containerd[1580]: time="2025-07-07T00:18:09.151148346Z" level=info msg="CreateContainer within sandbox \"a07192147a8896b9a054a0dab28a2dfc9a9a8440f69d8ba92b63626094026e6d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"01351227fe231e39fdb902254b53cc524111ef77d4b1fbbb54522548c2316c91\"" Jul 7 00:18:09.153237 containerd[1580]: time="2025-07-07T00:18:09.153202269Z" level=info msg="StartContainer for \"01351227fe231e39fdb902254b53cc524111ef77d4b1fbbb54522548c2316c91\"" Jul 7 00:18:09.155967 containerd[1580]: time="2025-07-07T00:18:09.155913552Z" level=info msg="connecting to shim 01351227fe231e39fdb902254b53cc524111ef77d4b1fbbb54522548c2316c91" address="unix:///run/containerd/s/2cf2e6040e2f3a3192035a6ed4d1d0713f566b640ae903ec9f595205ea8a4277" protocol=ttrpc version=3 Jul 7 00:18:09.197638 systemd[1]: Started cri-containerd-01351227fe231e39fdb902254b53cc524111ef77d4b1fbbb54522548c2316c91.scope - libcontainer container 01351227fe231e39fdb902254b53cc524111ef77d4b1fbbb54522548c2316c91. 
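The IPAM messages earlier in this stretch show Calico's block-affinity flow: the plugin takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.119.192/26 block, and claims 192.168.119.200/26 for the new apiserver endpoint, after the kube-controllers endpoint already received 192.168.119.199/32 from the same block. A minimal Python sketch, using only the standard-library ipaddress module and the values quoted from the log, of what that per-node /26 block contains; everything else is illustrative:

    import ipaddress

    block = ipaddress.ip_network("192.168.119.192/26")   # per-node affinity block from the IPAM log
    assigned = [ipaddress.ip_address(a) for a in ("192.168.119.199", "192.168.119.200")]

    print(block.num_addresses)                 # 64 addresses in a /26 block
    print(all(a in block for a in assigned))   # True: both pod IPs fall inside this node's block
    print(block.broadcast_address)             # 192.168.119.255, the upper bound of the block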
Jul 7 00:18:09.277893 containerd[1580]: time="2025-07-07T00:18:09.277840032Z" level=info msg="StartContainer for \"01351227fe231e39fdb902254b53cc524111ef77d4b1fbbb54522548c2316c91\" returns successfully" Jul 7 00:18:09.292293 ntpd[1497]: Listen normally on 8 vxlan.calico 192.168.119.192:123 Jul 7 00:18:09.292440 ntpd[1497]: Listen normally on 9 calicc8e4b8a0e7 [fe80::ecee:eeff:feee:eeee%4]:123 Jul 7 00:18:09.292525 ntpd[1497]: Listen normally on 10 calic85fbffe89d [fe80::ecee:eeff:feee:eeee%5]:123 Jul 7 00:18:09.292583 ntpd[1497]: Listen normally on 11 cali2f62ce11215 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 7 00:18:09.292633 ntpd[1497]: Listen normally on 12 cali9306c4be5eb [fe80::ecee:eeff:feee:eeee%7]:123 Jul 7 00:18:09.292683 ntpd[1497]: Listen normally on 13 calidbf423eda09 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 7 00:18:09.292737 ntpd[1497]: Listen normally on 14 calia7071391590 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 7 00:18:09.292793 ntpd[1497]: Listen normally on 15 calid57e4e2fd70 [fe80::ecee:eeff:feee:eeee%10]:123 Jul 7 00:18:09.292845 ntpd[1497]: Listen normally on 16 cali8513e478d5c [fe80::ecee:eeff:feee:eeee%11]:123 Jul 7 00:18:09.292898 ntpd[1497]: Listen normally on 17 vxlan.calico [fe80::6450:27ff:fefb:f5a2%12]:123 Jul 7 00:18:09.726789 kubelet[2782]: I0707 00:18:09.726713 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f45f85b7-gq49t" podStartSLOduration=31.40678966 podStartE2EDuration="39.726686453s" podCreationTimestamp="2025-07-07 00:17:30 +0000 UTC" firstStartedPulling="2025-07-07 00:18:00.800066554 +0000 UTC m=+46.903125959" lastFinishedPulling="2025-07-07 00:18:09.119963344 +0000 UTC m=+55.223022752" observedRunningTime="2025-07-07 00:18:09.725973559 +0000 UTC m=+55.829032982" watchObservedRunningTime="2025-07-07 00:18:09.726686453 +0000 UTC m=+55.829745866" Jul 7 00:18:10.603759 containerd[1580]: time="2025-07-07T00:18:10.603572317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:10.607338 containerd[1580]: time="2025-07-07T00:18:10.607288262Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:18:10.608945 containerd[1580]: time="2025-07-07T00:18:10.608897274Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:10.628059 containerd[1580]: time="2025-07-07T00:18:10.627963917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:10.630423 containerd[1580]: time="2025-07-07T00:18:10.630043186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.50921876s" Jul 7 00:18:10.630423 containerd[1580]: time="2025-07-07T00:18:10.630106458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:18:10.634225 containerd[1580]: time="2025-07-07T00:18:10.633942014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:18:10.636665 containerd[1580]: time="2025-07-07T00:18:10.636625838Z" level=info msg="CreateContainer within sandbox \"a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:18:10.654498 containerd[1580]: time="2025-07-07T00:18:10.654441718Z" level=info msg="Container a634bb27f1c70c4931c16f8538a08cc84f7de5750a43c59993028c9b6e39ba51: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:10.686850 containerd[1580]: time="2025-07-07T00:18:10.686785104Z" level=info msg="CreateContainer within sandbox \"a85758a3e04620e4284c9a06409b70a5717591b7916c49ae42584896248aa010\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a634bb27f1c70c4931c16f8538a08cc84f7de5750a43c59993028c9b6e39ba51\"" Jul 7 00:18:10.688594 containerd[1580]: time="2025-07-07T00:18:10.688463933Z" level=info msg="StartContainer for \"a634bb27f1c70c4931c16f8538a08cc84f7de5750a43c59993028c9b6e39ba51\"" Jul 7 00:18:10.692812 containerd[1580]: time="2025-07-07T00:18:10.692753950Z" level=info msg="connecting to shim a634bb27f1c70c4931c16f8538a08cc84f7de5750a43c59993028c9b6e39ba51" address="unix:///run/containerd/s/2d35d818d1ccd86d38693c2c8e302a495741ac328f1df83e7b72e825b1f2c199" protocol=ttrpc version=3 Jul 7 00:18:10.741145 kubelet[2782]: I0707 00:18:10.740965 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:18:10.749555 systemd[1]: Started cri-containerd-a634bb27f1c70c4931c16f8538a08cc84f7de5750a43c59993028c9b6e39ba51.scope - libcontainer container a634bb27f1c70c4931c16f8538a08cc84f7de5750a43c59993028c9b6e39ba51. 
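The kubelet pod_startup_latency_tracker entry above for calico-apiserver-6f45f85b7-gq49t can be reproduced from its own fields: the end-to-end duration is the gap between podCreationTimestamp and watchObservedRunningTime, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) lands on the reported podStartSLOduration. A small Python check using the timestamps quoted in that line (the parsing helper is only illustrative; kubelet prints nanoseconds, which datetime truncates to microseconds):

    from datetime import datetime, timezone

    def parse(s):
        # trim the nanosecond fraction to microseconds so datetime can hold it
        head, frac = s.split(".")
        return datetime.strptime(head, "%Y-%m-%d %H:%M:%S").replace(
            microsecond=int(frac[:6]), tzinfo=timezone.utc)

    created        = datetime(2025, 7, 7, 0, 17, 30, tzinfo=timezone.utc)
    first_pull     = parse("2025-07-07 00:18:00.800066554")
    last_pull      = parse("2025-07-07 00:18:09.119963344")
    observed_start = parse("2025-07-07 00:18:09.726686453")

    e2e  = (observed_start - created).total_seconds()   # ~39.726686 s, the podStartE2EDuration
    pull = (last_pull - first_pull).total_seconds()      # ~8.32 s spent pulling images
    print(round(e2e, 6), round(e2e - pull, 6))           # e2e - pull ~= 31.40679, the podStartSLOduration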
Jul 7 00:18:10.843214 containerd[1580]: time="2025-07-07T00:18:10.841922671Z" level=info msg="StartContainer for \"a634bb27f1c70c4931c16f8538a08cc84f7de5750a43c59993028c9b6e39ba51\" returns successfully" Jul 7 00:18:11.278982 kubelet[2782]: I0707 00:18:11.278854 2782 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:18:11.278982 kubelet[2782]: I0707 00:18:11.278898 2782 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:18:13.137919 containerd[1580]: time="2025-07-07T00:18:13.137834343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:13.139470 containerd[1580]: time="2025-07-07T00:18:13.139402789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:18:13.140975 containerd[1580]: time="2025-07-07T00:18:13.140905920Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:13.143644 containerd[1580]: time="2025-07-07T00:18:13.143576663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:13.144749 containerd[1580]: time="2025-07-07T00:18:13.144575631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.510584882s" Jul 7 00:18:13.144749 containerd[1580]: time="2025-07-07T00:18:13.144620405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:18:13.147286 containerd[1580]: time="2025-07-07T00:18:13.147166781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:18:13.166372 containerd[1580]: time="2025-07-07T00:18:13.166319674Z" level=info msg="CreateContainer within sandbox \"9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:18:13.184269 containerd[1580]: time="2025-07-07T00:18:13.180518830Z" level=info msg="Container e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:13.194641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642831539.mount: Deactivated successfully. 
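Every cali* interface in the ntpd listener list above reports the same link-local address, fe80::ecee:eeff:feee:eeee, with only the scope ID (%4, %5, ...) differing per interface. That is consistent with the fixed MAC ee:ee:ee:ee:ee:ee that Calico conventionally assigns to the host side of each workload veth: the modified EUI-64 derivation of that MAC yields exactly this address, while vxlan.calico keeps its own MAC and therefore a different link-local. A short Python sketch of the derivation (flip the universal/local bit, insert ff:fe in the middle):

    def eui64_link_local(mac: str) -> str:
        b = bytes(int(x, 16) for x in mac.split(":"))
        b = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]   # U/L bit flip, ff:fe insertion
        groups = [f"{(b[i] << 8) | b[i + 1]:x}" for i in range(0, 8, 2)]
        return "fe80::" + ":".join(groups)

    print(eui64_link_local("ee:ee:ee:ee:ee:ee"))   # fe80::ecee:eeff:feee:eeee, as seen on the cali* interfaces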
Jul 7 00:18:13.196446 containerd[1580]: time="2025-07-07T00:18:13.196399747Z" level=info msg="CreateContainer within sandbox \"9d4f3bc3fd22ace56013700fb142efc05553ccb7b876b8a7d9e41a92ae254ad8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4\"" Jul 7 00:18:13.199023 containerd[1580]: time="2025-07-07T00:18:13.197814786Z" level=info msg="StartContainer for \"e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4\"" Jul 7 00:18:13.200127 containerd[1580]: time="2025-07-07T00:18:13.200077531Z" level=info msg="connecting to shim e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4" address="unix:///run/containerd/s/16ea7a106b68c87625e85f602ba941a81f43f8e48137af1f00c66a448a123a71" protocol=ttrpc version=3 Jul 7 00:18:13.237713 systemd[1]: Started cri-containerd-e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4.scope - libcontainer container e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4. Jul 7 00:18:13.318147 containerd[1580]: time="2025-07-07T00:18:13.318057802Z" level=info msg="StartContainer for \"e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4\" returns successfully" Jul 7 00:18:13.349653 containerd[1580]: time="2025-07-07T00:18:13.349595210Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:13.353540 containerd[1580]: time="2025-07-07T00:18:13.352510180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:18:13.356139 containerd[1580]: time="2025-07-07T00:18:13.356075099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 208.864487ms" Jul 7 00:18:13.356360 containerd[1580]: time="2025-07-07T00:18:13.356333692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:18:13.361334 containerd[1580]: time="2025-07-07T00:18:13.361288364Z" level=info msg="CreateContainer within sandbox \"335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:18:13.372372 containerd[1580]: time="2025-07-07T00:18:13.372306763Z" level=info msg="Container 20d08606a5f3241e39963844495f4fdbc67cc43bef2c3b7f6030f2a8d485efde: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:13.384630 containerd[1580]: time="2025-07-07T00:18:13.384569379Z" level=info msg="CreateContainer within sandbox \"335e9ced9654f8ad6ae09e285cab57d85d64ada05dffbdc1d6f69c97cce18230\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"20d08606a5f3241e39963844495f4fdbc67cc43bef2c3b7f6030f2a8d485efde\"" Jul 7 00:18:13.386710 containerd[1580]: time="2025-07-07T00:18:13.386661347Z" level=info msg="StartContainer for \"20d08606a5f3241e39963844495f4fdbc67cc43bef2c3b7f6030f2a8d485efde\"" Jul 7 00:18:13.390655 containerd[1580]: time="2025-07-07T00:18:13.390169467Z" level=info msg="connecting to shim 20d08606a5f3241e39963844495f4fdbc67cc43bef2c3b7f6030f2a8d485efde" 
address="unix:///run/containerd/s/bd47fe09a023a84181f78ea8926859430c3ed0f9cb5a40b752070fddf4262aef" protocol=ttrpc version=3 Jul 7 00:18:13.437711 systemd[1]: Started cri-containerd-20d08606a5f3241e39963844495f4fdbc67cc43bef2c3b7f6030f2a8d485efde.scope - libcontainer container 20d08606a5f3241e39963844495f4fdbc67cc43bef2c3b7f6030f2a8d485efde. Jul 7 00:18:13.642121 containerd[1580]: time="2025-07-07T00:18:13.641840466Z" level=info msg="StartContainer for \"20d08606a5f3241e39963844495f4fdbc67cc43bef2c3b7f6030f2a8d485efde\" returns successfully" Jul 7 00:18:13.790061 kubelet[2782]: I0707 00:18:13.789960 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xf2tr" podStartSLOduration=28.371269745 podStartE2EDuration="38.789934257s" podCreationTimestamp="2025-07-07 00:17:35 +0000 UTC" firstStartedPulling="2025-07-07 00:18:00.214360219 +0000 UTC m=+46.317419626" lastFinishedPulling="2025-07-07 00:18:10.633024744 +0000 UTC m=+56.736084138" observedRunningTime="2025-07-07 00:18:11.780223075 +0000 UTC m=+57.883282492" watchObservedRunningTime="2025-07-07 00:18:13.789934257 +0000 UTC m=+59.892993676" Jul 7 00:18:13.793753 kubelet[2782]: I0707 00:18:13.793572 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f45f85b7-4c6tq" podStartSLOduration=33.746156313 podStartE2EDuration="43.793548622s" podCreationTimestamp="2025-07-07 00:17:30 +0000 UTC" firstStartedPulling="2025-07-07 00:18:03.310288999 +0000 UTC m=+49.413348406" lastFinishedPulling="2025-07-07 00:18:13.357681307 +0000 UTC m=+59.460740715" observedRunningTime="2025-07-07 00:18:13.788396283 +0000 UTC m=+59.891455702" watchObservedRunningTime="2025-07-07 00:18:13.793548622 +0000 UTC m=+59.896608041" Jul 7 00:18:13.890846 containerd[1580]: time="2025-07-07T00:18:13.890785265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4\" id:\"f562594c83b42ac1c8071425c39ce260257f5090ca5daab7a9474ec7e564feee\" pid:5310 exited_at:{seconds:1751847493 nanos:889444385}" Jul 7 00:18:13.916414 kubelet[2782]: I0707 00:18:13.916095 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57d59d975-pdszg" podStartSLOduration=28.652631894 podStartE2EDuration="38.916067588s" podCreationTimestamp="2025-07-07 00:17:35 +0000 UTC" firstStartedPulling="2025-07-07 00:18:02.882378187 +0000 UTC m=+48.985437595" lastFinishedPulling="2025-07-07 00:18:13.145813877 +0000 UTC m=+59.248873289" observedRunningTime="2025-07-07 00:18:13.824678191 +0000 UTC m=+59.927737612" watchObservedRunningTime="2025-07-07 00:18:13.916067588 +0000 UTC m=+60.019127006" Jul 7 00:18:14.766089 kubelet[2782]: I0707 00:18:14.765950 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:18:17.927224 kubelet[2782]: I0707 00:18:17.926849 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:18:24.464010 containerd[1580]: time="2025-07-07T00:18:24.463947492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747\" id:\"4468e1112300808b47d910d1e4d30d6bdff3aecd93b0785fe4faeabfa51795f5\" pid:5350 exited_at:{seconds:1751847504 nanos:463530763}" Jul 7 00:18:28.183941 containerd[1580]: time="2025-07-07T00:18:28.183871832Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\" id:\"c614b6f0cee9b0ac15cb6218addeede05ad120564c5ab01792bde14683a43a73\" pid:5384 exited_at:{seconds:1751847508 nanos:182581899}" Jul 7 00:18:30.293632 systemd[1]: Started sshd@9-10.128.0.74:22-139.178.68.195:40920.service - OpenSSH per-connection server daemon (139.178.68.195:40920). Jul 7 00:18:30.625840 sshd[5396]: Accepted publickey for core from 139.178.68.195 port 40920 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:18:30.628880 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:30.639879 systemd-logind[1511]: New session 10 of user core. Jul 7 00:18:30.646759 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:18:31.032817 sshd[5401]: Connection closed by 139.178.68.195 port 40920 Jul 7 00:18:31.033801 sshd-session[5396]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:31.044491 systemd[1]: sshd@9-10.128.0.74:22-139.178.68.195:40920.service: Deactivated successfully. Jul 7 00:18:31.050140 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:18:31.054327 systemd-logind[1511]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:18:31.060945 systemd-logind[1511]: Removed session 10. Jul 7 00:18:36.092951 systemd[1]: Started sshd@10-10.128.0.74:22-139.178.68.195:40928.service - OpenSSH per-connection server daemon (139.178.68.195:40928). Jul 7 00:18:36.421863 sshd[5417]: Accepted publickey for core from 139.178.68.195 port 40928 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:18:36.425564 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:36.438584 systemd-logind[1511]: New session 11 of user core. Jul 7 00:18:36.445539 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:18:36.850277 sshd[5419]: Connection closed by 139.178.68.195 port 40928 Jul 7 00:18:36.851050 sshd-session[5417]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:36.861308 systemd-logind[1511]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:18:36.862768 systemd[1]: sshd@10-10.128.0.74:22-139.178.68.195:40928.service: Deactivated successfully. Jul 7 00:18:36.871969 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:18:36.878849 systemd-logind[1511]: Removed session 11. Jul 7 00:18:38.852939 containerd[1580]: time="2025-07-07T00:18:38.852874170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\" id:\"34635d39d0b8d0e70bd25206cd85fd221522cb01bb319cf61af47e85fd94f10d\" pid:5444 exited_at:{seconds:1751847518 nanos:851958302}" Jul 7 00:18:39.456474 kubelet[2782]: I0707 00:18:39.455960 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:18:41.909645 systemd[1]: Started sshd@11-10.128.0.74:22-139.178.68.195:51424.service - OpenSSH per-connection server daemon (139.178.68.195:51424). Jul 7 00:18:42.237888 sshd[5460]: Accepted publickey for core from 139.178.68.195 port 51424 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:18:42.240560 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:42.254840 systemd-logind[1511]: New session 12 of user core. Jul 7 00:18:42.262555 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 7 00:18:42.592158 sshd[5462]: Connection closed by 139.178.68.195 port 51424 Jul 7 00:18:42.593557 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:42.601563 systemd[1]: sshd@11-10.128.0.74:22-139.178.68.195:51424.service: Deactivated successfully. Jul 7 00:18:42.606366 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:18:42.609529 systemd-logind[1511]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:18:42.614765 systemd-logind[1511]: Removed session 12. Jul 7 00:18:42.651893 systemd[1]: Started sshd@12-10.128.0.74:22-139.178.68.195:51430.service - OpenSSH per-connection server daemon (139.178.68.195:51430). Jul 7 00:18:42.985450 sshd[5475]: Accepted publickey for core from 139.178.68.195 port 51430 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:18:42.986781 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:42.999342 systemd-logind[1511]: New session 13 of user core. Jul 7 00:18:43.003235 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:18:43.482193 sshd[5477]: Connection closed by 139.178.68.195 port 51430 Jul 7 00:18:43.483236 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:43.496925 systemd[1]: sshd@12-10.128.0.74:22-139.178.68.195:51430.service: Deactivated successfully. Jul 7 00:18:43.497814 systemd-logind[1511]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:18:43.506717 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:18:43.512558 systemd-logind[1511]: Removed session 13. Jul 7 00:18:43.540648 systemd[1]: Started sshd@13-10.128.0.74:22-139.178.68.195:51442.service - OpenSSH per-connection server daemon (139.178.68.195:51442). Jul 7 00:18:43.833022 containerd[1580]: time="2025-07-07T00:18:43.832965933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4\" id:\"f117f0edaff2ece06387b2ca9b59349aaeb78f568a30c401afb1da41f5c55c0f\" pid:5501 exited_at:{seconds:1751847523 nanos:832575443}" Jul 7 00:18:43.876199 sshd[5487]: Accepted publickey for core from 139.178.68.195 port 51442 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:18:43.878488 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:43.888145 systemd-logind[1511]: New session 14 of user core. Jul 7 00:18:43.894529 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:18:44.199333 sshd[5510]: Connection closed by 139.178.68.195 port 51442 Jul 7 00:18:44.201216 sshd-session[5487]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:44.208473 systemd[1]: sshd@13-10.128.0.74:22-139.178.68.195:51442.service: Deactivated successfully. Jul 7 00:18:44.211627 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:18:44.213454 systemd-logind[1511]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:18:44.215813 systemd-logind[1511]: Removed session 14. 
Jul 7 00:18:47.840865 containerd[1580]: time="2025-07-07T00:18:47.840793358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4\" id:\"2fbb1df00696fa46916901bfe3081f4b3743d7189e1dbf04e5086df09be05a65\" pid:5542 exited_at:{seconds:1751847527 nanos:840293254}" Jul 7 00:18:49.260839 systemd[1]: Started sshd@14-10.128.0.74:22-139.178.68.195:35900.service - OpenSSH per-connection server daemon (139.178.68.195:35900). Jul 7 00:18:49.592808 sshd[5556]: Accepted publickey for core from 139.178.68.195 port 35900 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:18:49.595270 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:49.603712 systemd-logind[1511]: New session 15 of user core. Jul 7 00:18:49.612707 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:18:50.017425 sshd[5558]: Connection closed by 139.178.68.195 port 35900 Jul 7 00:18:50.018579 sshd-session[5556]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:50.026615 systemd[1]: sshd@14-10.128.0.74:22-139.178.68.195:35900.service: Deactivated successfully. Jul 7 00:18:50.027494 systemd-logind[1511]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:18:50.036740 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:18:50.046695 systemd-logind[1511]: Removed session 15. Jul 7 00:18:54.535322 containerd[1580]: time="2025-07-07T00:18:54.535213706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747\" id:\"44fa825dcca352120647a99ab7f938a4331327812bee19d9c0cf92a53fb38f3d\" pid:5585 exited_at:{seconds:1751847534 nanos:534383447}" Jul 7 00:18:55.078060 systemd[1]: Started sshd@15-10.128.0.74:22-139.178.68.195:35908.service - OpenSSH per-connection server daemon (139.178.68.195:35908). Jul 7 00:18:55.398969 sshd[5597]: Accepted publickey for core from 139.178.68.195 port 35908 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:18:55.402004 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:18:55.411321 systemd-logind[1511]: New session 16 of user core. Jul 7 00:18:55.421451 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:18:55.748602 sshd[5599]: Connection closed by 139.178.68.195 port 35908 Jul 7 00:18:55.751555 sshd-session[5597]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:55.760478 systemd[1]: sshd@15-10.128.0.74:22-139.178.68.195:35908.service: Deactivated successfully. Jul 7 00:18:55.764771 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:18:55.768284 systemd-logind[1511]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:18:55.771060 systemd-logind[1511]: Removed session 16. Jul 7 00:19:00.806645 systemd[1]: Started sshd@16-10.128.0.74:22-139.178.68.195:38274.service - OpenSSH per-connection server daemon (139.178.68.195:38274). Jul 7 00:19:00.867801 systemd[1]: Started sshd@17-10.128.0.74:22-103.97.178.235:33932.service - OpenSSH per-connection server daemon (103.97.178.235:33932). 
Jul 7 00:19:01.141796 sshd[5612]: Accepted publickey for core from 139.178.68.195 port 38274 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:01.144740 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:01.157330 systemd-logind[1511]: New session 17 of user core. Jul 7 00:19:01.160701 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:19:01.291058 sshd[5615]: Connection closed by 103.97.178.235 port 33932 [preauth] Jul 7 00:19:01.295424 systemd[1]: sshd@17-10.128.0.74:22-103.97.178.235:33932.service: Deactivated successfully. Jul 7 00:19:01.495966 sshd[5617]: Connection closed by 139.178.68.195 port 38274 Jul 7 00:19:01.497667 sshd-session[5612]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:01.506790 systemd[1]: sshd@16-10.128.0.74:22-139.178.68.195:38274.service: Deactivated successfully. Jul 7 00:19:01.512303 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:19:01.515512 systemd-logind[1511]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:19:01.521061 systemd-logind[1511]: Removed session 17. Jul 7 00:19:06.561402 systemd[1]: Started sshd@18-10.128.0.74:22-139.178.68.195:38290.service - OpenSSH per-connection server daemon (139.178.68.195:38290). Jul 7 00:19:06.893271 sshd[5631]: Accepted publickey for core from 139.178.68.195 port 38290 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:06.895899 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:06.904788 systemd-logind[1511]: New session 18 of user core. Jul 7 00:19:06.919594 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:19:07.287919 sshd[5633]: Connection closed by 139.178.68.195 port 38290 Jul 7 00:19:07.286762 sshd-session[5631]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:07.295918 systemd[1]: sshd@18-10.128.0.74:22-139.178.68.195:38290.service: Deactivated successfully. Jul 7 00:19:07.296749 systemd-logind[1511]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:19:07.301429 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:19:07.308849 systemd-logind[1511]: Removed session 18. Jul 7 00:19:07.348310 systemd[1]: Started sshd@19-10.128.0.74:22-139.178.68.195:38294.service - OpenSSH per-connection server daemon (139.178.68.195:38294). Jul 7 00:19:07.685363 sshd[5645]: Accepted publickey for core from 139.178.68.195 port 38294 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:07.687327 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:07.695942 systemd-logind[1511]: New session 19 of user core. Jul 7 00:19:07.703463 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:19:08.111152 sshd[5647]: Connection closed by 139.178.68.195 port 38294 Jul 7 00:19:08.112702 sshd-session[5645]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:08.121642 systemd[1]: sshd@19-10.128.0.74:22-139.178.68.195:38294.service: Deactivated successfully. Jul 7 00:19:08.126838 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:19:08.129597 systemd-logind[1511]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:19:08.133918 systemd-logind[1511]: Removed session 19. 
Jul 7 00:19:08.169822 systemd[1]: Started sshd@20-10.128.0.74:22-139.178.68.195:40002.service - OpenSSH per-connection server daemon (139.178.68.195:40002). Jul 7 00:19:08.513358 sshd[5657]: Accepted publickey for core from 139.178.68.195 port 40002 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:08.516608 sshd-session[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:08.524611 systemd-logind[1511]: New session 20 of user core. Jul 7 00:19:08.531506 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:19:08.978885 containerd[1580]: time="2025-07-07T00:19:08.978614071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\" id:\"a1a6136714b0be192bcd7585e23ad3caa0e0018d2bd6c092c58ad456ee6717ce\" pid:5677 exited_at:{seconds:1751847548 nanos:976108553}" Jul 7 00:19:09.770381 sshd[5659]: Connection closed by 139.178.68.195 port 40002 Jul 7 00:19:09.773223 sshd-session[5657]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:09.781833 systemd-logind[1511]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:19:09.782752 systemd[1]: sshd@20-10.128.0.74:22-139.178.68.195:40002.service: Deactivated successfully. Jul 7 00:19:09.789499 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:19:09.797116 systemd-logind[1511]: Removed session 20. Jul 7 00:19:09.830675 systemd[1]: Started sshd@21-10.128.0.74:22-139.178.68.195:40008.service - OpenSSH per-connection server daemon (139.178.68.195:40008). Jul 7 00:19:10.161378 sshd[5698]: Accepted publickey for core from 139.178.68.195 port 40008 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:10.165318 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:10.178406 systemd-logind[1511]: New session 21 of user core. Jul 7 00:19:10.185496 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:19:10.730876 sshd[5700]: Connection closed by 139.178.68.195 port 40008 Jul 7 00:19:10.732073 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:10.738999 systemd-logind[1511]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:19:10.739988 systemd[1]: sshd@21-10.128.0.74:22-139.178.68.195:40008.service: Deactivated successfully. Jul 7 00:19:10.745868 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:19:10.751841 systemd-logind[1511]: Removed session 21. Jul 7 00:19:10.792987 systemd[1]: Started sshd@22-10.128.0.74:22-139.178.68.195:40010.service - OpenSSH per-connection server daemon (139.178.68.195:40010). Jul 7 00:19:11.130376 sshd[5710]: Accepted publickey for core from 139.178.68.195 port 40010 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:11.132540 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:11.142916 systemd-logind[1511]: New session 22 of user core. Jul 7 00:19:11.152983 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:19:11.476861 sshd[5712]: Connection closed by 139.178.68.195 port 40010 Jul 7 00:19:11.477782 sshd-session[5710]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:11.490804 systemd[1]: sshd@22-10.128.0.74:22-139.178.68.195:40010.service: Deactivated successfully. Jul 7 00:19:11.492852 systemd-logind[1511]: Session 22 logged out. Waiting for processes to exit. 
Jul 7 00:19:11.498949 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:19:11.506031 systemd-logind[1511]: Removed session 22. Jul 7 00:19:13.840711 containerd[1580]: time="2025-07-07T00:19:13.840660634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5a8c4d5b7d145cb87dc2351db2caabe35e48194ffe601e8b794a8352543fec4\" id:\"8e52dfbd058b7c6b5235d061d19c00be11533bc4c8dac69f96996bab089df50f\" pid:5737 exited_at:{seconds:1751847553 nanos:839906662}" Jul 7 00:19:16.537630 systemd[1]: Started sshd@23-10.128.0.74:22-139.178.68.195:40014.service - OpenSSH per-connection server daemon (139.178.68.195:40014). Jul 7 00:19:16.860693 sshd[5750]: Accepted publickey for core from 139.178.68.195 port 40014 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:16.864838 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:16.877081 systemd-logind[1511]: New session 23 of user core. Jul 7 00:19:16.884572 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:19:17.298295 sshd[5752]: Connection closed by 139.178.68.195 port 40014 Jul 7 00:19:17.297538 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:17.305161 systemd-logind[1511]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:19:17.306382 systemd[1]: sshd@23-10.128.0.74:22-139.178.68.195:40014.service: Deactivated successfully. Jul 7 00:19:17.311612 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:19:17.318438 systemd-logind[1511]: Removed session 23. Jul 7 00:19:22.357001 systemd[1]: Started sshd@24-10.128.0.74:22-139.178.68.195:53006.service - OpenSSH per-connection server daemon (139.178.68.195:53006). Jul 7 00:19:22.682477 sshd[5767]: Accepted publickey for core from 139.178.68.195 port 53006 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc Jul 7 00:19:22.687268 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:22.696081 systemd-logind[1511]: New session 24 of user core. Jul 7 00:19:22.703524 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:19:23.025366 sshd[5769]: Connection closed by 139.178.68.195 port 53006 Jul 7 00:19:23.026680 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:23.034281 systemd[1]: sshd@24-10.128.0.74:22-139.178.68.195:53006.service: Deactivated successfully. Jul 7 00:19:23.039958 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:19:23.043850 systemd-logind[1511]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:19:23.047168 systemd-logind[1511]: Removed session 24. 
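The remainder of the log is dominated by short SSH sessions from 139.178.68.195 (sessions 10 through 25 opening and closing for user core), plus one unauthenticated probe from 103.97.178.235 that disconnects during preauth. A small, illustrative Python sketch for summarising that churn from a journal dump piped on stdin; the script name and the exact regular expressions are assumptions that only cover the message shapes seen above:

    import re
    import sys
    from collections import Counter

    accepted = re.compile(r"Accepted publickey for (\S+) from (\S+) port \d+")
    preauth  = re.compile(r"Connection closed by (\S+) port \d+ \[preauth\]")

    logins, probes = Counter(), Counter()
    for line in sys.stdin:                       # e.g. journalctl --no-pager | python3 ssh_tally.py
        if (m := accepted.search(line)):
            logins[(m.group(1), m.group(2))] += 1
        elif (m := preauth.search(line)):
            probes[m.group(1)] += 1

    print("completed logins:", dict(logins))     # {('core', '139.178.68.195'): N}
    print("preauth disconnects:", dict(probes))  # {'103.97.178.235': 1}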
Jul 7 00:19:24.564911 containerd[1580]: time="2025-07-07T00:19:24.564778725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15f67568c61d909928ef088ff4750a0a94fde30e6975090a9a76847a70a6c747\" id:\"28fca3420b5c15eac4a02838a869f6468afe8b861f04feef07140075c66c6f3c\" pid:5791 exited_at:{seconds:1751847564 nanos:563752580}"
Jul 7 00:19:25.582326 update_engine[1512]: I20250707 00:19:25.581632 1512 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 7 00:19:25.582326 update_engine[1512]: I20250707 00:19:25.581708 1512 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 7 00:19:25.582326 update_engine[1512]: I20250707 00:19:25.581959 1512 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 7 00:19:25.583616 update_engine[1512]: I20250707 00:19:25.583521 1512 omaha_request_params.cc:62] Current group set to beta
Jul 7 00:19:25.583911 update_engine[1512]: I20250707 00:19:25.583878 1512 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 7 00:19:25.584127 update_engine[1512]: I20250707 00:19:25.584076 1512 update_attempter.cc:643] Scheduling an action processor start.
Jul 7 00:19:25.585311 locksmithd[1591]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 7 00:19:25.585677 update_engine[1512]: I20250707 00:19:25.584236 1512 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 00:19:25.585677 update_engine[1512]: I20250707 00:19:25.584653 1512 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 7 00:19:25.585677 update_engine[1512]: I20250707 00:19:25.584756 1512 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 00:19:25.585677 update_engine[1512]: I20250707 00:19:25.584770 1512 omaha_request_action.cc:272] Request:
Jul 7 00:19:25.585677 update_engine[1512]:
Jul 7 00:19:25.585677 update_engine[1512]:
Jul 7 00:19:25.585677 update_engine[1512]:
Jul 7 00:19:25.585677 update_engine[1512]:
Jul 7 00:19:25.585677 update_engine[1512]:
Jul 7 00:19:25.585677 update_engine[1512]:
Jul 7 00:19:25.585677 update_engine[1512]:
Jul 7 00:19:25.585677 update_engine[1512]:
Jul 7 00:19:25.585677 update_engine[1512]: I20250707 00:19:25.584821 1512 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 00:19:25.588170 update_engine[1512]: I20250707 00:19:25.588137 1512 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 00:19:25.588849 update_engine[1512]: I20250707 00:19:25.588812 1512 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 00:19:25.664035 update_engine[1512]: E20250707 00:19:25.663961 1512 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 00:19:25.664436 update_engine[1512]: I20250707 00:19:25.664377 1512 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 7 00:19:28.085181 systemd[1]: Started sshd@25-10.128.0.74:22-139.178.68.195:53018.service - OpenSSH per-connection server daemon (139.178.68.195:53018).
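
Editor's note: in the update_engine entries at 00:19:25 above, the Omaha check is posted to the literal server name "disabled", so the failure is a DNS resolution error ("Could not resolve host: disabled") rather than an HTTP status, and the fetcher schedules a retry after its 1-second timeout. The sketch below is an illustration of that resolve-then-retry pattern, not update_engine's actual code; the SERVER value is taken from the log, while the retry cap and the 1-second delay are assumptions mirroring the libcurl_http_fetcher messages.

    #!/usr/bin/env python3
    """Illustrative resolve-then-retry loop (an assumption-based sketch, not
    update_engine code): a server named "disabled" cannot be resolved, so each
    attempt fails before any HTTP request is made, as in the log above."""
    import socket
    import time

    SERVER = "disabled"   # value from the log line "Posting an Omaha request to disabled"
    MAX_RETRIES = 3       # assumption; the real fetcher applies its own retry policy

    for attempt in range(1, MAX_RETRIES + 1):
        try:
            socket.getaddrinfo(SERVER, 443)  # name resolution is what fails for "disabled"
            print("resolved; an HTTP POST of the Omaha request would follow here")
            break
        except socket.gaierror as err:
            print(f"Unable to get http response code: {err}")
            print(f"No HTTP response, retry {attempt}")
            time.sleep(1)                    # mirrors "Setting up timeout source: 1 seconds."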
Jul 7 00:19:28.220960 containerd[1580]: time="2025-07-07T00:19:28.220906515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a984abe3cad983de337e0cc409e5f146dbdff0c27bfb7300aa0e58388b8a215c\" id:\"ff0e4ef709174de6725af0c87edbdac6c6821e56eb6ac6612d74328a06733aa9\" pid:5826 exited_at:{seconds:1751847568 nanos:220223338}"
Jul 7 00:19:28.430819 sshd[5835]: Accepted publickey for core from 139.178.68.195 port 53018 ssh2: RSA SHA256:PQnsEjhgwfO+4Rl/MODJwLHa9iKcGzrEqyhX3zjOGjc
Jul 7 00:19:28.432612 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:19:28.443126 systemd-logind[1511]: New session 25 of user core.
Jul 7 00:19:28.451511 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 00:19:28.783377 sshd[5839]: Connection closed by 139.178.68.195 port 53018
Jul 7 00:19:28.784553 sshd-session[5835]: pam_unix(sshd:session): session closed for user core
Jul 7 00:19:28.795785 systemd-logind[1511]: Session 25 logged out. Waiting for processes to exit.
Jul 7 00:19:28.797080 systemd[1]: sshd@25-10.128.0.74:22-139.178.68.195:53018.service: Deactivated successfully.
Jul 7 00:19:28.805119 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 00:19:28.812934 systemd-logind[1511]: Removed session 25.
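
Editor's note: the containerd TaskExit entries in this log record exited_at as a {seconds, nanos} Unix-epoch pair. A minimal sketch of the conversion, using the values copied from the last TaskExit entry above, shows it lines up with the journal timestamp on the same line (00:19:28 UTC, Jul 7 2025); everything here is illustrative and uses only the standard library.

    #!/usr/bin/env python3
    """Convert a containerd exited_at {seconds, nanos} pair to a UTC timestamp."""
    from datetime import datetime, timezone

    seconds, nanos = 1751847568, 220223338   # values from the last TaskExit entry above
    ts = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
    print(ts.isoformat())                    # -> 2025-07-07T00:19:28.220223+00:00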