Aug 19 08:15:06.175406 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 18 22:19:37 -00 2025 Aug 19 08:15:06.175450 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:15:06.175467 kernel: BIOS-provided physical RAM map: Aug 19 08:15:06.175480 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Aug 19 08:15:06.175493 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Aug 19 08:15:06.175506 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Aug 19 08:15:06.175525 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Aug 19 08:15:06.175540 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Aug 19 08:15:06.175554 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd32afff] usable Aug 19 08:15:06.175567 kernel: BIOS-e820: [mem 0x00000000bd32b000-0x00000000bd332fff] ACPI data Aug 19 08:15:06.175581 kernel: BIOS-e820: [mem 0x00000000bd333000-0x00000000bf8ecfff] usable Aug 19 08:15:06.175595 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Aug 19 08:15:06.175609 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Aug 19 08:15:06.175623 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Aug 19 08:15:06.175644 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Aug 19 08:15:06.175659 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Aug 19 08:15:06.175674 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Aug 19 08:15:06.175689 kernel: NX (Execute Disable) protection: active Aug 19 08:15:06.175704 kernel: APIC: Static calls initialized Aug 19 08:15:06.175719 kernel: efi: EFI v2.7 by EDK II Aug 19 08:15:06.175735 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd32b018 Aug 19 08:15:06.175750 kernel: random: crng init done Aug 19 08:15:06.175769 kernel: secureboot: Secure boot disabled Aug 19 08:15:06.175783 kernel: SMBIOS 2.4 present. 
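Editor's note: the BIOS-e820 entries above are the firmware's map of which physical address ranges are usable RAM versus reserved/ACPI regions. As a rough illustration only (not part of the boot flow), a small parser can total the "usable" ranges from a console capture like this one; the regex and the `boot.log` file name below are assumptions for the sketch.

```python
import re

# Matches kernel e820 lines such as:
#   BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w[\w ]*)")

def usable_bytes(dmesg_text: str) -> int:
    """Sum the sizes of all ranges the firmware marked 'usable'."""
    total = 0
    for start, end, kind in E820_RE.findall(dmesg_text):
        if kind.strip() == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

if __name__ == "__main__":
    with open("boot.log") as f:          # hypothetical capture of this console log
        text = f.read()
    print(f"usable RAM reported by firmware: {usable_bytes(text) / 2**20:.1f} MiB")
```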
Aug 19 08:15:06.175798 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 Aug 19 08:15:06.175813 kernel: DMI: Memory slots populated: 1/1 Aug 19 08:15:06.175828 kernel: Hypervisor detected: KVM Aug 19 08:15:06.175842 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 19 08:15:06.175857 kernel: kvm-clock: using sched offset of 15029081977 cycles Aug 19 08:15:06.175873 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 19 08:15:06.175888 kernel: tsc: Detected 2299.998 MHz processor Aug 19 08:15:06.175902 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 19 08:15:06.175922 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 19 08:15:06.175941 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Aug 19 08:15:06.175973 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Aug 19 08:15:06.175989 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 19 08:15:06.176035 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Aug 19 08:15:06.176051 kernel: Using GB pages for direct mapping Aug 19 08:15:06.176066 kernel: ACPI: Early table checksum verification disabled Aug 19 08:15:06.176083 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Aug 19 08:15:06.176109 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Aug 19 08:15:06.176136 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Aug 19 08:15:06.176154 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Aug 19 08:15:06.176170 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Aug 19 08:15:06.176187 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20241212) Aug 19 08:15:06.176205 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Aug 19 08:15:06.176226 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Aug 19 08:15:06.176243 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Aug 19 08:15:06.176260 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Aug 19 08:15:06.176277 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Aug 19 08:15:06.176294 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Aug 19 08:15:06.176311 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Aug 19 08:15:06.176328 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Aug 19 08:15:06.176345 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Aug 19 08:15:06.176362 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Aug 19 08:15:06.176383 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Aug 19 08:15:06.176400 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Aug 19 08:15:06.176417 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Aug 19 08:15:06.176433 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Aug 19 08:15:06.176450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 19 08:15:06.176466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Aug 19 08:15:06.176482 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Aug 19 08:15:06.176500 
kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Aug 19 08:15:06.176515 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Aug 19 08:15:06.176536 kernel: NODE_DATA(0) allocated [mem 0x21fff6dc0-0x21fffdfff] Aug 19 08:15:06.176553 kernel: Zone ranges: Aug 19 08:15:06.176571 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 19 08:15:06.176587 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 19 08:15:06.176604 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Aug 19 08:15:06.176621 kernel: Device empty Aug 19 08:15:06.176637 kernel: Movable zone start for each node Aug 19 08:15:06.176653 kernel: Early memory node ranges Aug 19 08:15:06.176669 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Aug 19 08:15:06.176684 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Aug 19 08:15:06.176704 kernel: node 0: [mem 0x0000000000100000-0x00000000bd32afff] Aug 19 08:15:06.176722 kernel: node 0: [mem 0x00000000bd333000-0x00000000bf8ecfff] Aug 19 08:15:06.176739 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Aug 19 08:15:06.176757 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Aug 19 08:15:06.176774 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Aug 19 08:15:06.176793 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 19 08:15:06.176810 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Aug 19 08:15:06.176828 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Aug 19 08:15:06.176845 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Aug 19 08:15:06.176867 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 19 08:15:06.176885 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Aug 19 08:15:06.176903 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 19 08:15:06.176921 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 19 08:15:06.176938 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 19 08:15:06.176955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 19 08:15:06.176973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 19 08:15:06.176990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 19 08:15:06.177042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 19 08:15:06.177065 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 19 08:15:06.177082 kernel: CPU topo: Max. logical packages: 1 Aug 19 08:15:06.177099 kernel: CPU topo: Max. logical dies: 1 Aug 19 08:15:06.177117 kernel: CPU topo: Max. dies per package: 1 Aug 19 08:15:06.177143 kernel: CPU topo: Max. threads per core: 2 Aug 19 08:15:06.177160 kernel: CPU topo: Num. cores per package: 1 Aug 19 08:15:06.177176 kernel: CPU topo: Num. 
threads per package: 2 Aug 19 08:15:06.177192 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 19 08:15:06.177208 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Aug 19 08:15:06.177228 kernel: Booting paravirtualized kernel on KVM Aug 19 08:15:06.177245 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 19 08:15:06.177261 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 19 08:15:06.177277 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 19 08:15:06.177292 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 19 08:15:06.177308 kernel: pcpu-alloc: [0] 0 1 Aug 19 08:15:06.177323 kernel: kvm-guest: PV spinlocks enabled Aug 19 08:15:06.177339 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 19 08:15:06.177357 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:15:06.177378 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 19 08:15:06.177395 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 19 08:15:06.177410 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 19 08:15:06.177426 kernel: Fallback order for Node 0: 0 Aug 19 08:15:06.177444 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965138 Aug 19 08:15:06.177461 kernel: Policy zone: Normal Aug 19 08:15:06.177478 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 19 08:15:06.177497 kernel: software IO TLB: area num 2. Aug 19 08:15:06.177532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 19 08:15:06.177550 kernel: Kernel/User page tables isolation: enabled Aug 19 08:15:06.177567 kernel: ftrace: allocating 40101 entries in 157 pages Aug 19 08:15:06.177588 kernel: ftrace: allocated 157 pages with 5 groups Aug 19 08:15:06.177606 kernel: Dynamic Preempt: voluntary Aug 19 08:15:06.177623 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 19 08:15:06.177646 kernel: rcu: RCU event tracing is enabled. Aug 19 08:15:06.177664 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 19 08:15:06.177682 kernel: Trampoline variant of Tasks RCU enabled. Aug 19 08:15:06.177703 kernel: Rude variant of Tasks RCU enabled. Aug 19 08:15:06.177721 kernel: Tracing variant of Tasks RCU enabled. Aug 19 08:15:06.177738 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 19 08:15:06.177756 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 19 08:15:06.177773 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 19 08:15:06.177791 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 19 08:15:06.177808 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 19 08:15:06.177826 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 19 08:15:06.177848 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
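Editor's note: the "Kernel command line:" entry above shows rootflags=rw and mount.usrflags=ro twice because the boot stub prepends its own defaults before the arguments from the boot entry. As a simplified sketch (it ignores quoted values containing spaces), splitting such a line into key/value pairs looks like this; the example string is trimmed from the parameters seen in this boot.

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into a {key: value} dict (flag-only args map to '')."""
    args = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        args[key] = value            # later duplicates (e.g. rootflags=rw) overwrite earlier ones
    return args

# Example with a subset of the parameters from this boot:
line = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
        "mount.usrflags=ro root=LABEL=ROOT console=ttyS0,115200n8 flatcar.oem.id=gce")
print(parse_cmdline(line)["root"])   # -> LABEL=ROOT
```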
Aug 19 08:15:06.177865 kernel: Console: colour dummy device 80x25 Aug 19 08:15:06.177883 kernel: printk: legacy console [ttyS0] enabled Aug 19 08:15:06.177901 kernel: ACPI: Core revision 20240827 Aug 19 08:15:06.177918 kernel: APIC: Switch to symmetric I/O mode setup Aug 19 08:15:06.177936 kernel: x2apic enabled Aug 19 08:15:06.177954 kernel: APIC: Switched APIC routing to: physical x2apic Aug 19 08:15:06.177972 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Aug 19 08:15:06.177990 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 19 08:15:06.178039 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Aug 19 08:15:06.181075 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Aug 19 08:15:06.181098 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Aug 19 08:15:06.181116 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 19 08:15:06.181144 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Aug 19 08:15:06.181161 kernel: Spectre V2 : Mitigation: IBRS Aug 19 08:15:06.181179 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 19 08:15:06.181197 kernel: RETBleed: Mitigation: IBRS Aug 19 08:15:06.181214 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 19 08:15:06.181238 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Aug 19 08:15:06.181256 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 19 08:15:06.181274 kernel: MDS: Mitigation: Clear CPU buffers Aug 19 08:15:06.181292 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 19 08:15:06.181309 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 19 08:15:06.181327 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 19 08:15:06.181344 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 19 08:15:06.181363 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 19 08:15:06.181384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 19 08:15:06.181402 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 19 08:15:06.181420 kernel: Freeing SMP alternatives memory: 32K Aug 19 08:15:06.181438 kernel: pid_max: default: 32768 minimum: 301 Aug 19 08:15:06.181456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 19 08:15:06.181473 kernel: landlock: Up and running. Aug 19 08:15:06.181491 kernel: SELinux: Initializing. Aug 19 08:15:06.181509 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 19 08:15:06.181537 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 19 08:15:06.181560 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Aug 19 08:15:06.181578 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Aug 19 08:15:06.181596 kernel: signal: max sigframe size: 1776 Aug 19 08:15:06.181614 kernel: rcu: Hierarchical SRCU implementation. Aug 19 08:15:06.181633 kernel: rcu: Max phase no-delay instances is 400. 
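Editor's note: the Spectre/RETBleed/MDS lines above summarize the mitigations the kernel selected at boot; once the system is up, the same status strings are exposed under /sys/devices/system/cpu/vulnerabilities/. A minimal sketch for reading them at runtime (run on the booted guest, not in the initrd):

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigations() -> dict:
    """Read the kernel's per-vulnerability status strings (one file per issue)."""
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in mitigations().items():
        print(f"{name:25s} {status}")
```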
Aug 19 08:15:06.181651 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 19 08:15:06.181669 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 19 08:15:06.181688 kernel: smp: Bringing up secondary CPUs ... Aug 19 08:15:06.181706 kernel: smpboot: x86: Booting SMP configuration: Aug 19 08:15:06.181728 kernel: .... node #0, CPUs: #1 Aug 19 08:15:06.181748 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 19 08:15:06.181768 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 19 08:15:06.181786 kernel: smp: Brought up 1 node, 2 CPUs Aug 19 08:15:06.181804 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Aug 19 08:15:06.181823 kernel: Memory: 7564260K/7860552K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54040K init, 2928K bss, 290712K reserved, 0K cma-reserved) Aug 19 08:15:06.181842 kernel: devtmpfs: initialized Aug 19 08:15:06.181858 kernel: x86/mm: Memory block size: 128MB Aug 19 08:15:06.181881 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Aug 19 08:15:06.181900 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 19 08:15:06.181918 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 19 08:15:06.181937 kernel: pinctrl core: initialized pinctrl subsystem Aug 19 08:15:06.181956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 19 08:15:06.181975 kernel: audit: initializing netlink subsys (disabled) Aug 19 08:15:06.181994 kernel: audit: type=2000 audit(1755591301.113:1): state=initialized audit_enabled=0 res=1 Aug 19 08:15:06.182037 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 19 08:15:06.182056 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 19 08:15:06.182079 kernel: cpuidle: using governor menu Aug 19 08:15:06.182098 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 19 08:15:06.182115 kernel: dca service started, version 1.12.1 Aug 19 08:15:06.182142 kernel: PCI: Using configuration type 1 for base access Aug 19 08:15:06.182161 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 19 08:15:06.182180 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 19 08:15:06.182199 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 19 08:15:06.182217 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 19 08:15:06.182236 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 19 08:15:06.182259 kernel: ACPI: Added _OSI(Module Device) Aug 19 08:15:06.182277 kernel: ACPI: Added _OSI(Processor Device) Aug 19 08:15:06.182296 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 19 08:15:06.182315 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 19 08:15:06.182333 kernel: ACPI: Interpreter enabled Aug 19 08:15:06.182352 kernel: ACPI: PM: (supports S0 S3 S5) Aug 19 08:15:06.182371 kernel: ACPI: Using IOAPIC for interrupt routing Aug 19 08:15:06.182389 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 19 08:15:06.182407 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 19 08:15:06.182429 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Aug 19 08:15:06.182447 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 19 08:15:06.182727 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 19 08:15:06.182920 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 19 08:15:06.183140 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 19 08:15:06.183165 kernel: PCI host bridge to bus 0000:00 Aug 19 08:15:06.183347 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 19 08:15:06.183534 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 19 08:15:06.183703 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 19 08:15:06.183870 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Aug 19 08:15:06.184592 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 19 08:15:06.184814 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Aug 19 08:15:06.186065 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Aug 19 08:15:06.186356 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Aug 19 08:15:06.186565 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Aug 19 08:15:06.186776 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Aug 19 08:15:06.186970 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Aug 19 08:15:06.187280 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Aug 19 08:15:06.187486 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 19 08:15:06.187676 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Aug 19 08:15:06.187868 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Aug 19 08:15:06.189875 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Aug 19 08:15:06.190116 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Aug 19 08:15:06.190317 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Aug 19 08:15:06.190343 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 19 08:15:06.190365 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 19 08:15:06.190385 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Aug 19 08:15:06.190413 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 19 08:15:06.190432 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 19 08:15:06.190452 kernel: iommu: Default domain type: Translated Aug 19 08:15:06.190471 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 19 08:15:06.190491 kernel: efivars: Registered efivars operations Aug 19 08:15:06.190511 kernel: PCI: Using ACPI for IRQ routing Aug 19 08:15:06.190530 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 19 08:15:06.190549 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Aug 19 08:15:06.190568 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Aug 19 08:15:06.190593 kernel: e820: reserve RAM buffer [mem 0xbd32b000-0xbfffffff] Aug 19 08:15:06.190611 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Aug 19 08:15:06.190630 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Aug 19 08:15:06.190649 kernel: vgaarb: loaded Aug 19 08:15:06.190668 kernel: clocksource: Switched to clocksource kvm-clock Aug 19 08:15:06.190687 kernel: VFS: Disk quotas dquot_6.6.0 Aug 19 08:15:06.190707 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 19 08:15:06.190726 kernel: pnp: PnP ACPI init Aug 19 08:15:06.190746 kernel: pnp: PnP ACPI: found 7 devices Aug 19 08:15:06.190771 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 19 08:15:06.190790 kernel: NET: Registered PF_INET protocol family Aug 19 08:15:06.190806 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 19 08:15:06.190825 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 19 08:15:06.190844 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 19 08:15:06.190863 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 19 08:15:06.190882 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 19 08:15:06.190901 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 19 08:15:06.190920 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 19 08:15:06.190942 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 19 08:15:06.190961 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 19 08:15:06.190979 kernel: NET: Registered PF_XDP protocol family Aug 19 08:15:06.191200 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 19 08:15:06.191371 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 19 08:15:06.191538 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 19 08:15:06.191703 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Aug 19 08:15:06.191894 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 19 08:15:06.191972 kernel: PCI: CLS 0 bytes, default 64 Aug 19 08:15:06.191993 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 19 08:15:06.192031 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Aug 19 08:15:06.192050 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 19 08:15:06.192069 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 19 08:15:06.192088 kernel: clocksource: Switched to clocksource tsc Aug 19 08:15:06.192107 
kernel: Initialise system trusted keyrings Aug 19 08:15:06.192133 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 19 08:15:06.192157 kernel: Key type asymmetric registered Aug 19 08:15:06.192176 kernel: Asymmetric key parser 'x509' registered Aug 19 08:15:06.192194 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 19 08:15:06.192213 kernel: io scheduler mq-deadline registered Aug 19 08:15:06.192232 kernel: io scheduler kyber registered Aug 19 08:15:06.192251 kernel: io scheduler bfq registered Aug 19 08:15:06.192270 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 19 08:15:06.192290 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 19 08:15:06.192490 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Aug 19 08:15:06.192519 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Aug 19 08:15:06.192713 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Aug 19 08:15:06.192738 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 19 08:15:06.192922 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Aug 19 08:15:06.192946 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 19 08:15:06.192965 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 19 08:15:06.192984 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 19 08:15:06.195052 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Aug 19 08:15:06.195085 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Aug 19 08:15:06.195327 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Aug 19 08:15:06.195355 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 19 08:15:06.195374 kernel: i8042: Warning: Keylock active Aug 19 08:15:06.195393 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 19 08:15:06.195412 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 19 08:15:06.195599 kernel: rtc_cmos 00:00: RTC can wake from S4 Aug 19 08:15:06.195773 kernel: rtc_cmos 00:00: registered as rtc0 Aug 19 08:15:06.195951 kernel: rtc_cmos 00:00: setting system clock to 2025-08-19T08:15:05 UTC (1755591305) Aug 19 08:15:06.196153 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Aug 19 08:15:06.196179 kernel: intel_pstate: CPU model not supported Aug 19 08:15:06.196196 kernel: pstore: Using crash dump compression: deflate Aug 19 08:15:06.196212 kernel: pstore: Registered efi_pstore as persistent store backend Aug 19 08:15:06.196229 kernel: NET: Registered PF_INET6 protocol family Aug 19 08:15:06.196246 kernel: Segment Routing with IPv6 Aug 19 08:15:06.196262 kernel: In-situ OAM (IOAM) with IPv6 Aug 19 08:15:06.196279 kernel: NET: Registered PF_PACKET protocol family Aug 19 08:15:06.196301 kernel: Key type dns_resolver registered Aug 19 08:15:06.196317 kernel: IPI shorthand broadcast: enabled Aug 19 08:15:06.196334 kernel: sched_clock: Marking stable (4059004480, 150192986)->(4338771531, -129574065) Aug 19 08:15:06.196351 kernel: registered taskstats version 1 Aug 19 08:15:06.196369 kernel: Loading compiled-in X.509 certificates Aug 19 08:15:06.196385 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: 93a065b103c00d4b81cc5822e4e7f9674e63afaf' Aug 19 08:15:06.196402 kernel: Demotion targets for Node 0: null Aug 19 08:15:06.196419 kernel: Key type .fscrypt registered Aug 19 08:15:06.196435 kernel: Key type fscrypt-provisioning registered Aug 19 
08:15:06.196456 kernel: ima: Allocated hash algorithm: sha1 Aug 19 08:15:06.196471 kernel: ima: No architecture policies found Aug 19 08:15:06.196487 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Aug 19 08:15:06.196505 kernel: clk: Disabling unused clocks Aug 19 08:15:06.196532 kernel: Warning: unable to open an initial console. Aug 19 08:15:06.196549 kernel: Freeing unused kernel image (initmem) memory: 54040K Aug 19 08:15:06.196566 kernel: Write protecting the kernel read-only data: 24576k Aug 19 08:15:06.196584 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 19 08:15:06.196606 kernel: Run /init as init process Aug 19 08:15:06.196624 kernel: with arguments: Aug 19 08:15:06.196642 kernel: /init Aug 19 08:15:06.196659 kernel: with environment: Aug 19 08:15:06.196677 kernel: HOME=/ Aug 19 08:15:06.196694 kernel: TERM=linux Aug 19 08:15:06.196712 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 19 08:15:06.196731 systemd[1]: Successfully made /usr/ read-only. Aug 19 08:15:06.196758 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:15:06.196777 systemd[1]: Detected virtualization google. Aug 19 08:15:06.197895 systemd[1]: Detected architecture x86-64. Aug 19 08:15:06.197919 systemd[1]: Running in initrd. Aug 19 08:15:06.197940 systemd[1]: No hostname configured, using default hostname. Aug 19 08:15:06.197961 systemd[1]: Hostname set to . Aug 19 08:15:06.197981 systemd[1]: Initializing machine ID from random generator. Aug 19 08:15:06.198001 systemd[1]: Queued start job for default target initrd.target. Aug 19 08:15:06.198044 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:15:06.198081 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:15:06.198106 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 19 08:15:06.198139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:15:06.198160 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 19 08:15:06.198186 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 19 08:15:06.198207 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 19 08:15:06.198225 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 19 08:15:06.198245 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:15:06.198265 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:15:06.198283 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:15:06.198302 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:15:06.198322 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:15:06.198347 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:15:06.198368 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
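Editor's note: the "Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device" units above use systemd's path escaping (what `systemd-escape --path --suffix=device` produces). The sketch below approximates that scheme well enough to reproduce the unit names in this log; it is a simplification, and the real tool handles additional corner cases.

```python
def escape_path(path: str) -> str:
    """Approximate systemd's path escaping for device unit names."""
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))   # e.g. '-' becomes \x2d
    return "".join(out) + ".device"

print(escape_path("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```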
Aug 19 08:15:06.198387 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:15:06.198409 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 19 08:15:06.198429 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 19 08:15:06.198450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:15:06.198473 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:15:06.198492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:15:06.198517 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:15:06.198537 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 19 08:15:06.198558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:15:06.198578 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 19 08:15:06.198599 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 19 08:15:06.198620 systemd[1]: Starting systemd-fsck-usr.service... Aug 19 08:15:06.198641 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:15:06.198661 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:15:06.198681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:06.198706 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 19 08:15:06.198728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:15:06.198748 systemd[1]: Finished systemd-fsck-usr.service. Aug 19 08:15:06.198807 systemd-journald[207]: Collecting audit messages is disabled. Aug 19 08:15:06.198855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 08:15:06.198878 systemd-journald[207]: Journal started Aug 19 08:15:06.198922 systemd-journald[207]: Runtime Journal (/run/log/journal/e61ec4c69d5b46af8b1d113efb5d6825) is 8M, max 148.9M, 140.9M free. Aug 19 08:15:06.174254 systemd-modules-load[208]: Inserted module 'overlay' Aug 19 08:15:06.204307 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:15:06.212206 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:15:06.218332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:15:06.225182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:15:06.234726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:06.245141 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 19 08:15:06.245470 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 19 08:15:06.249139 kernel: Bridge firewalling registered Aug 19 08:15:06.249903 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 19 08:15:06.250256 systemd-modules-load[208]: Inserted module 'br_netfilter' Aug 19 08:15:06.258417 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Aug 19 08:15:06.263642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:15:06.267714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:15:06.273132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:15:06.292482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:15:06.295783 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:15:06.314270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:15:06.320571 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 19 08:15:06.357471 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:15:06.370336 systemd-resolved[238]: Positive Trust Anchors: Aug 19 08:15:06.370908 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:15:06.370983 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:15:06.379920 systemd-resolved[238]: Defaulting to hostname 'linux'. Aug 19 08:15:06.384289 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:15:06.393404 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:15:06.480066 kernel: SCSI subsystem initialized Aug 19 08:15:06.493053 kernel: Loading iSCSI transport class v2.0-870. Aug 19 08:15:06.505056 kernel: iscsi: registered transport (tcp) Aug 19 08:15:06.533317 kernel: iscsi: registered transport (qla4xxx) Aug 19 08:15:06.533428 kernel: QLogic iSCSI HBA Driver Aug 19 08:15:06.559291 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:15:06.580441 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:15:06.590373 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:15:06.653358 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 19 08:15:06.656654 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 19 08:15:06.720064 kernel: raid6: avx2x4 gen() 21417 MB/s Aug 19 08:15:06.737051 kernel: raid6: avx2x2 gen() 22940 MB/s Aug 19 08:15:06.754483 kernel: raid6: avx2x1 gen() 20518 MB/s Aug 19 08:15:06.754572 kernel: raid6: using algorithm avx2x2 gen() 22940 MB/s Aug 19 08:15:06.772428 kernel: raid6: .... 
xor() 18441 MB/s, rmw enabled Aug 19 08:15:06.772507 kernel: raid6: using avx2x2 recovery algorithm Aug 19 08:15:06.796057 kernel: xor: automatically using best checksumming function avx Aug 19 08:15:06.983057 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 08:15:06.992504 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:15:06.996136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:15:07.028482 systemd-udevd[454]: Using default interface naming scheme 'v255'. Aug 19 08:15:07.038168 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:15:07.043345 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 08:15:07.076559 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Aug 19 08:15:07.111355 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:15:07.115663 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:15:07.213974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:15:07.223206 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 08:15:07.335203 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Aug 19 08:15:07.345070 kernel: cryptd: max_cpu_qlen set to 1000 Aug 19 08:15:07.372050 kernel: AES CTR mode by8 optimization enabled Aug 19 08:15:07.382043 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 19 08:15:07.469624 kernel: scsi host0: Virtio SCSI HBA Aug 19 08:15:07.484081 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Aug 19 08:15:07.515981 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:15:07.516255 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:07.527161 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Aug 19 08:15:07.527502 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Aug 19 08:15:07.527718 kernel: sd 0:0:1:0: [sda] Write Protect is off Aug 19 08:15:07.527934 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Aug 19 08:15:07.528394 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 19 08:15:07.519696 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:07.533343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:07.536806 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:15:07.545646 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 08:15:07.545703 kernel: GPT:17805311 != 25165823 Aug 19 08:15:07.545728 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 08:15:07.545751 kernel: GPT:17805311 != 25165823 Aug 19 08:15:07.545772 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 08:15:07.545795 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 19 08:15:07.547757 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Aug 19 08:15:07.574062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:07.633279 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Aug 19 08:15:07.649687 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
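Editor's note: the "GPT:17805311 != 25165823" warnings above mean the backup GPT header was found at LBA 17805311 instead of on the last sector of the 25165824-sector disk, i.e. the image was built for a smaller disk and the persistent disk was grown afterwards (tools such as parted, which the kernel message suggests, or sgdisk can relocate the backup structures). A small arithmetic sketch with the values from this log:

```python
SECTOR = 512

def gpt_backup_mismatch(total_sectors: int, backup_lba_seen: int) -> int:
    """Return how many sectors the disk grew by, judging from the backup GPT header position."""
    expected = total_sectors - 1          # backup header should sit on the last LBA
    return expected - backup_lba_seen

grown = gpt_backup_mismatch(25165824, 17805311)   # values from the log above
print(f"disk grew by {grown} sectors ({grown * SECTOR / 2**30:.1f} GiB)")
```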
Aug 19 08:15:07.673838 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Aug 19 08:15:07.685056 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Aug 19 08:15:07.685387 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Aug 19 08:15:07.706743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 19 08:15:07.710290 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:15:07.716168 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:15:07.720147 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:15:07.725652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 08:15:07.733087 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 08:15:07.750960 disk-uuid[610]: Primary Header is updated. Aug 19 08:15:07.750960 disk-uuid[610]: Secondary Entries is updated. Aug 19 08:15:07.750960 disk-uuid[610]: Secondary Header is updated. Aug 19 08:15:07.762501 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:15:07.774048 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 19 08:15:07.805048 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 19 08:15:08.821645 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 19 08:15:08.821754 disk-uuid[611]: The operation has completed successfully. Aug 19 08:15:08.898116 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 08:15:08.898302 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 08:15:08.952822 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 08:15:08.978435 sh[632]: Success Aug 19 08:15:09.001057 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 19 08:15:09.001154 kernel: device-mapper: uevent: version 1.0.3 Aug 19 08:15:09.003054 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 08:15:09.014060 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Aug 19 08:15:09.096472 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 08:15:09.102280 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 19 08:15:09.121711 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 08:15:09.142043 kernel: BTRFS: device fsid 99050df3-5e04-4f37-acde-dec46aab7896 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (644) Aug 19 08:15:09.145384 kernel: BTRFS info (device dm-0): first mount of filesystem 99050df3-5e04-4f37-acde-dec46aab7896 Aug 19 08:15:09.145454 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:15:09.145479 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 08:15:09.171202 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 08:15:09.172144 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:15:09.175581 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 08:15:09.177873 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Aug 19 08:15:09.188474 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 08:15:09.231428 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (677) Aug 19 08:15:09.231502 kernel: BTRFS info (device sda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:09.233830 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:15:09.233891 kernel: BTRFS info (device sda6): using free-space-tree Aug 19 08:15:09.247058 kernel: BTRFS info (device sda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:09.247494 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 08:15:09.258248 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 19 08:15:09.342257 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:15:09.346040 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:15:09.430550 systemd-networkd[813]: lo: Link UP Aug 19 08:15:09.430569 systemd-networkd[813]: lo: Gained carrier Aug 19 08:15:09.440416 systemd-networkd[813]: Enumeration completed Aug 19 08:15:09.444373 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:15:09.444384 systemd-networkd[813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:15:09.445202 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:15:09.447744 systemd-networkd[813]: eth0: Link UP Aug 19 08:15:09.451984 systemd-networkd[813]: eth0: Gained carrier Aug 19 08:15:09.452028 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:15:09.454488 systemd[1]: Reached target network.target - Network. Aug 19 08:15:09.461389 systemd-networkd[813]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 19 08:15:09.541063 ignition[740]: Ignition 2.21.0 Aug 19 08:15:09.541465 ignition[740]: Stage: fetch-offline Aug 19 08:15:09.544258 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:15:09.541521 ignition[740]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:09.550291 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 19 08:15:09.541536 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 19 08:15:09.541663 ignition[740]: parsed url from cmdline: "" Aug 19 08:15:09.541668 ignition[740]: no config URL provided Aug 19 08:15:09.541675 ignition[740]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 08:15:09.541685 ignition[740]: no config at "/usr/lib/ignition/user.ign" Aug 19 08:15:09.541693 ignition[740]: failed to fetch config: resource requires networking Aug 19 08:15:09.542226 ignition[740]: Ignition finished successfully Aug 19 08:15:09.596482 ignition[822]: Ignition 2.21.0 Aug 19 08:15:09.596499 ignition[822]: Stage: fetch Aug 19 08:15:09.597583 ignition[822]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:09.597598 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 19 08:15:09.597725 ignition[822]: parsed url from cmdline: "" Aug 19 08:15:09.597732 ignition[822]: no config URL provided Aug 19 08:15:09.597741 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 08:15:09.597754 ignition[822]: no config at "/usr/lib/ignition/user.ign" Aug 19 08:15:09.611934 unknown[822]: fetched base config from "system" Aug 19 08:15:09.597804 ignition[822]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Aug 19 08:15:09.611945 unknown[822]: fetched base config from "system" Aug 19 08:15:09.601567 ignition[822]: GET result: OK Aug 19 08:15:09.611954 unknown[822]: fetched user config from "gcp" Aug 19 08:15:09.601693 ignition[822]: parsing config with SHA512: 8e3037549c1310092dfebeadd7fcf2d8b80eda0a009d49573d5dc652f7739d11fb49fa318ba0e81bad22485fdaa3e01eb971e628a3b29cd49d42aab9adaa4373 Aug 19 08:15:09.615439 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 19 08:15:09.612908 ignition[822]: fetch: fetch complete Aug 19 08:15:09.623692 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 19 08:15:09.612917 ignition[822]: fetch: fetch passed Aug 19 08:15:09.612989 ignition[822]: Ignition finished successfully Aug 19 08:15:09.660350 ignition[829]: Ignition 2.21.0 Aug 19 08:15:09.660367 ignition[829]: Stage: kargs Aug 19 08:15:09.660618 ignition[829]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:09.665368 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 19 08:15:09.660636 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 19 08:15:09.671772 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 19 08:15:09.663083 ignition[829]: kargs: kargs passed Aug 19 08:15:09.663272 ignition[829]: Ignition finished successfully Aug 19 08:15:09.706842 ignition[836]: Ignition 2.21.0 Aug 19 08:15:09.706859 ignition[836]: Stage: disks Aug 19 08:15:09.707107 ignition[836]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:09.711276 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 08:15:09.707126 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 19 08:15:09.715495 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 08:15:09.709042 ignition[836]: disks: disks passed Aug 19 08:15:09.721138 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 08:15:09.709125 ignition[836]: Ignition finished successfully Aug 19 08:15:09.726122 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:15:09.730136 systemd[1]: Reached target sysinit.target - System Initialization. 
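Editor's note: the fetch stage above retrieves the instance's user-data from the GCE metadata server at 169.254.169.254, which only answers requests carrying the Metadata-Flavor: Google header. Ignition itself is a Go binary; the sketch below is just a standard-library illustration of the same request and only works from inside a GCE instance.

```python
import urllib.request

METADATA_URL = ("http://169.254.169.254/computeMetadata/v1/"
                "instance/attributes/user-data")

def fetch_user_data(timeout: float = 5.0) -> bytes:
    """Fetch the instance's user-data attribute from the GCE metadata server."""
    req = urllib.request.Request(METADATA_URL,
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    print(fetch_user_data().decode())
```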
Aug 19 08:15:09.734241 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:15:09.738854 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 08:15:09.779776 systemd-fsck[845]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Aug 19 08:15:09.789213 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 08:15:09.797039 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 19 08:15:09.975039 kernel: EXT4-fs (sda9): mounted filesystem 41966107-04fa-426e-9830-6b4efa50e27b r/w with ordered data mode. Quota mode: none. Aug 19 08:15:09.976648 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 08:15:09.979792 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 08:15:09.984238 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:15:09.994977 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 08:15:10.000857 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 08:15:10.000952 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 08:15:10.016500 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (853) Aug 19 08:15:10.016539 kernel: BTRFS info (device sda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:10.016565 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:15:10.016587 kernel: BTRFS info (device sda6): using free-space-tree Aug 19 08:15:10.000994 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:15:10.017499 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 08:15:10.023897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:15:10.031584 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 08:15:10.148656 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 08:15:10.157546 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory Aug 19 08:15:10.165636 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 08:15:10.172216 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 08:15:10.329680 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 08:15:10.332055 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 08:15:10.348200 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 08:15:10.360250 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 08:15:10.363067 kernel: BTRFS info (device sda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:10.403744 ignition[965]: INFO : Ignition 2.21.0 Aug 19 08:15:10.403744 ignition[965]: INFO : Stage: mount Aug 19 08:15:10.403744 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:10.403744 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 19 08:15:10.403744 ignition[965]: INFO : mount: mount passed Aug 19 08:15:10.403744 ignition[965]: INFO : Ignition finished successfully Aug 19 08:15:10.405779 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 19 08:15:10.406857 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 08:15:10.414841 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 08:15:10.440693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:15:10.471067 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (979) Aug 19 08:15:10.473901 kernel: BTRFS info (device sda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:10.473970 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:15:10.473996 kernel: BTRFS info (device sda6): using free-space-tree Aug 19 08:15:10.482614 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:15:10.519125 ignition[996]: INFO : Ignition 2.21.0 Aug 19 08:15:10.519125 ignition[996]: INFO : Stage: files Aug 19 08:15:10.525136 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:10.525136 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 19 08:15:10.525136 ignition[996]: DEBUG : files: compiled without relabeling support, skipping Aug 19 08:15:10.525136 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 08:15:10.525136 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 08:15:10.536144 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 08:15:10.536144 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 08:15:10.536144 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 08:15:10.536144 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 19 08:15:10.536144 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 19 08:15:10.528328 unknown[996]: wrote ssh authorized keys file for user: core Aug 19 08:15:10.651634 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 08:15:11.077852 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 19 08:15:11.077852 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:15:11.087165 ignition[996]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:15:11.087165 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 19 08:15:11.433337 systemd-networkd[813]: eth0: Gained IPv6LL Aug 19 08:15:11.533285 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 19 08:15:11.975819 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:15:11.975819 ignition[996]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 19 08:15:11.986178 ignition[996]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:15:11.986178 ignition[996]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:15:11.986178 ignition[996]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 19 08:15:11.986178 ignition[996]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 19 08:15:11.986178 ignition[996]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 08:15:11.986178 ignition[996]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:15:11.986178 ignition[996]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:15:11.986178 ignition[996]: INFO : files: files passed Aug 19 08:15:11.986178 ignition[996]: INFO : Ignition finished successfully Aug 19 08:15:11.985750 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 08:15:11.989410 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 08:15:12.000018 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 08:15:12.021606 systemd[1]: ignition-quench.service: Deactivated successfully. 
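Editor's note: the Ignition files stage above downloads https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz into /sysroot/opt and enables a prepare-helm.service unit (started later in this log as "Unpack helm to /opt/bin"). As a rough, hedged approximation of that flow, not the actual unit's contents, the same fetch-and-unpack can be expressed as follows; the /tmp destination paths are illustrative only:

# Rough approximation of the helm provisioning seen in the Ignition log above:
# fetch the tarball named in the log and unpack the helm binary. Destination
# paths here are illustrative, not taken from the actual prepare-helm.service.
import tarfile
import urllib.request
from pathlib import Path

URL = "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
archive = Path("/tmp/helm-v3.13.2-linux-amd64.tar.gz")
dest_dir = Path("/tmp/helm-bin")  # the unit in the log targets /opt/bin instead

urllib.request.urlretrieve(URL, archive)
dest_dir.mkdir(parents=True, exist_ok=True)
with tarfile.open(archive) as tar:
    member = tar.getmember("linux-amd64/helm")
    member.name = "helm"          # drop the leading directory from the archive
    tar.extract(member, path=dest_dir)
print("helm unpacked to", dest_dir / "helm")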
Aug 19 08:15:12.045136 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:15:12.045136 initrd-setup-root-after-ignition[1025]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:15:12.021784 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 08:15:12.061195 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:15:12.038449 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:15:12.042882 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 08:15:12.049913 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 08:15:12.128911 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 08:15:12.129109 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 08:15:12.134057 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 08:15:12.136366 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 08:15:12.140562 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 08:15:12.143050 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 08:15:12.176516 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:15:12.183646 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 08:15:12.214969 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:15:12.215721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:15:12.219786 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 08:15:12.223628 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 08:15:12.224204 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:15:12.231653 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 08:15:12.234769 systemd[1]: Stopped target basic.target - Basic System. Aug 19 08:15:12.238575 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 08:15:12.242771 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:15:12.246595 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 08:15:12.250564 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:15:12.254735 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 08:15:12.258736 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:15:12.262756 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 08:15:12.267573 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 08:15:12.271544 systemd[1]: Stopped target swap.target - Swaps. Aug 19 08:15:12.275518 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 08:15:12.275992 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:15:12.286213 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:15:12.286754 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Aug 19 08:15:12.290415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 08:15:12.290571 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:15:12.294430 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 08:15:12.294686 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 08:15:12.305217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 08:15:12.305662 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:15:12.309561 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 08:15:12.309782 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 08:15:12.318926 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 08:15:12.341825 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 08:15:12.345394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 08:15:12.345682 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:15:12.349962 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 08:15:12.350719 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:15:12.373094 ignition[1050]: INFO : Ignition 2.21.0 Aug 19 08:15:12.373344 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 08:15:12.373527 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 08:15:12.383921 ignition[1050]: INFO : Stage: umount Aug 19 08:15:12.383921 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:12.383921 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 19 08:15:12.388375 ignition[1050]: INFO : umount: umount passed Aug 19 08:15:12.388375 ignition[1050]: INFO : Ignition finished successfully Aug 19 08:15:12.387652 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 08:15:12.387814 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 08:15:12.394279 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 08:15:12.395720 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 08:15:12.395835 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 08:15:12.402376 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 08:15:12.402462 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 08:15:12.405356 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 19 08:15:12.405528 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 19 08:15:12.409426 systemd[1]: Stopped target network.target - Network. Aug 19 08:15:12.413357 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 08:15:12.413555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:15:12.417354 systemd[1]: Stopped target paths.target - Path Units. Aug 19 08:15:12.421278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 08:15:12.425124 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:15:12.425382 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 08:15:12.429310 systemd[1]: Stopped target sockets.target - Socket Units. 
Aug 19 08:15:12.433457 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 08:15:12.433660 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:15:12.437452 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 08:15:12.437518 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:15:12.441339 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 08:15:12.441537 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 08:15:12.445498 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 08:15:12.445723 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 08:15:12.450226 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 08:15:12.454539 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 08:15:12.460622 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 08:15:12.461042 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 08:15:12.464850 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 08:15:12.465029 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 08:15:12.475040 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 08:15:12.475319 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 08:15:12.475449 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 08:15:12.481263 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 08:15:12.483863 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 08:15:12.488189 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 08:15:12.488264 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:15:12.494218 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 08:15:12.494334 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 08:15:12.499863 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 08:15:12.510149 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 08:15:12.510265 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:15:12.514365 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:15:12.514456 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:15:12.523472 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 08:15:12.523565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 08:15:12.529364 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 08:15:12.529464 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:15:12.532709 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:15:12.541000 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 08:15:12.541373 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:15:12.545904 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 08:15:12.546830 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Aug 19 08:15:12.554929 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 08:15:12.555080 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 08:15:12.561230 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 08:15:12.561299 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:15:12.568176 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 08:15:12.568284 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:15:12.575479 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 08:15:12.575563 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 08:15:12.589148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 08:15:12.589405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:15:12.598944 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 08:15:12.608141 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 08:15:12.608260 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:15:12.611524 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 08:15:12.611619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:15:12.623675 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 19 08:15:12.623786 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:15:12.628496 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 08:15:12.628578 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:15:12.634231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:15:12.634325 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:12.642062 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 19 08:15:12.642137 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 19 08:15:12.714030 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Aug 19 08:15:12.642180 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 19 08:15:12.642230 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:15:12.642727 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 08:15:12.642851 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 08:15:12.647593 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 08:15:12.647796 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 08:15:12.651515 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 08:15:12.656553 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 08:15:12.682187 systemd[1]: Switching root. 
Aug 19 08:15:12.740142 systemd-journald[207]: Journal stopped Aug 19 08:15:14.676355 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 08:15:14.676428 kernel: SELinux: policy capability open_perms=1 Aug 19 08:15:14.676450 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 08:15:14.676480 kernel: SELinux: policy capability always_check_network=0 Aug 19 08:15:14.676498 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 08:15:14.676514 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 08:15:14.676540 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 08:15:14.676561 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 08:15:14.676580 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 08:15:14.676599 kernel: audit: type=1403 audit(1755591313.301:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 08:15:14.676621 systemd[1]: Successfully loaded SELinux policy in 65.034ms. Aug 19 08:15:14.676644 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.280ms. Aug 19 08:15:14.676665 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:15:14.676692 systemd[1]: Detected virtualization google. Aug 19 08:15:14.676712 systemd[1]: Detected architecture x86-64. Aug 19 08:15:14.676731 systemd[1]: Detected first boot. Aug 19 08:15:14.676752 systemd[1]: Initializing machine ID from random generator. Aug 19 08:15:14.676774 zram_generator::config[1094]: No configuration found. Aug 19 08:15:14.676801 kernel: Guest personality initialized and is inactive Aug 19 08:15:14.676822 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 19 08:15:14.676842 kernel: Initialized host personality Aug 19 08:15:14.676861 kernel: NET: Registered PF_VSOCK protocol family Aug 19 08:15:14.676886 systemd[1]: Populated /etc with preset unit settings. Aug 19 08:15:14.676908 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 08:15:14.676928 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 08:15:14.676953 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 08:15:14.676975 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 08:15:14.676997 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 08:15:14.677042 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 08:15:14.677065 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 08:15:14.677083 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 08:15:14.677102 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 08:15:14.677129 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 08:15:14.677149 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 08:15:14.677167 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 08:15:14.677187 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Aug 19 08:15:14.677208 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:15:14.677230 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 19 08:15:14.677252 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 08:15:14.677275 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 08:15:14.677304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:15:14.677329 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 08:15:14.677350 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:15:14.677373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:15:14.677396 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 08:15:14.677419 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 08:15:14.677442 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 08:15:14.677476 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 08:15:14.677505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:15:14.677527 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:15:14.677549 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:15:14.677571 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:15:14.677593 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 08:15:14.677616 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 08:15:14.677639 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 08:15:14.677667 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:15:14.677690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:15:14.677712 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:15:14.677735 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 08:15:14.677757 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 08:15:14.677781 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 08:15:14.677811 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 08:15:14.677835 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:14.677858 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 19 08:15:14.677880 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 08:15:14.677902 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 08:15:14.677925 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 08:15:14.677947 systemd[1]: Reached target machines.target - Containers. Aug 19 08:15:14.677970 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Aug 19 08:15:14.677997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:15:14.679167 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:15:14.679201 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 08:15:14.679224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:15:14.679248 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:15:14.679272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:15:14.679296 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 08:15:14.679320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:15:14.679344 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 08:15:14.679374 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 08:15:14.679396 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 08:15:14.679419 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 08:15:14.679443 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 08:15:14.679475 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:15:14.679494 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:15:14.679520 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:15:14.679542 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:15:14.679569 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 08:15:14.679591 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 08:15:14.679613 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:15:14.679635 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 08:15:14.679655 systemd[1]: Stopped verity-setup.service. Aug 19 08:15:14.679677 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:14.679698 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 08:15:14.679720 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 08:15:14.679749 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 08:15:14.679769 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 08:15:14.679844 systemd-journald[1165]: Collecting audit messages is disabled. Aug 19 08:15:14.679894 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 08:15:14.679923 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 08:15:14.679946 kernel: fuse: init (API version 7.41) Aug 19 08:15:14.679972 systemd-journald[1165]: Journal started Aug 19 08:15:14.680061 systemd-journald[1165]: Runtime Journal (/run/log/journal/fd38da045e2e4f4499331bc00c6363d2) is 8M, max 148.9M, 140.9M free. 
Aug 19 08:15:14.213925 systemd[1]: Queued start job for default target multi-user.target. Aug 19 08:15:14.238246 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 19 08:15:14.238894 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 08:15:14.687997 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:15:14.711501 kernel: loop: module loaded Aug 19 08:15:14.712299 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:15:14.719625 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 08:15:14.720381 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 08:15:14.727141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:15:14.727541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:15:14.730812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:15:14.732367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:15:14.735861 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 08:15:14.737139 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 08:15:14.741318 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:15:14.741613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:15:14.745671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:15:14.750671 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:15:14.755735 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 19 08:15:14.769039 kernel: ACPI: bus type drm_connector registered Aug 19 08:15:14.775245 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:15:14.775640 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:15:14.788205 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 08:15:14.800879 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 08:15:14.807079 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:15:14.815158 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 08:15:14.823291 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 08:15:14.825363 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 08:15:14.825434 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:15:14.831997 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 08:15:14.844307 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 08:15:14.850157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:15:14.854264 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 08:15:14.860295 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Aug 19 08:15:14.861956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:15:14.870219 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 08:15:14.873205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:15:14.886404 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:15:14.894103 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 08:15:14.899314 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 08:15:14.907262 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 08:15:14.910474 systemd-journald[1165]: Time spent on flushing to /var/log/journal/fd38da045e2e4f4499331bc00c6363d2 is 176.998ms for 953 entries. Aug 19 08:15:14.910474 systemd-journald[1165]: System Journal (/var/log/journal/fd38da045e2e4f4499331bc00c6363d2) is 8M, max 584.8M, 576.8M free. Aug 19 08:15:15.109241 systemd-journald[1165]: Received client request to flush runtime journal. Aug 19 08:15:15.109341 kernel: loop0: detected capacity change from 0 to 52056 Aug 19 08:15:15.109386 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 08:15:14.911324 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 08:15:14.936379 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 08:15:14.940636 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 08:15:14.953489 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 08:15:15.044966 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:15:15.064725 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 08:15:15.093493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:15:15.101267 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Aug 19 08:15:15.101294 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Aug 19 08:15:15.127435 kernel: loop1: detected capacity change from 0 to 221472 Aug 19 08:15:15.112621 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 08:15:15.129452 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:15:15.136900 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 08:15:15.231241 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 08:15:15.238820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:15:15.239209 kernel: loop2: detected capacity change from 0 to 111000 Aug 19 08:15:15.245663 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 08:15:15.305745 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Aug 19 08:15:15.305786 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Aug 19 08:15:15.318055 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 19 08:15:15.319334 kernel: loop3: detected capacity change from 0 to 128016 Aug 19 08:15:15.416047 kernel: loop4: detected capacity change from 0 to 52056 Aug 19 08:15:15.445082 kernel: loop5: detected capacity change from 0 to 221472 Aug 19 08:15:15.510067 kernel: loop6: detected capacity change from 0 to 111000 Aug 19 08:15:15.541497 kernel: loop7: detected capacity change from 0 to 128016 Aug 19 08:15:15.591834 (sd-merge)[1242]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Aug 19 08:15:15.594068 (sd-merge)[1242]: Merged extensions into '/usr'. Aug 19 08:15:15.607468 systemd[1]: Reload requested from client PID 1216 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 08:15:15.607685 systemd[1]: Reloading... Aug 19 08:15:15.776071 zram_generator::config[1264]: No configuration found. Aug 19 08:15:16.136203 ldconfig[1211]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 08:15:16.256158 systemd[1]: Reloading finished in 647 ms. Aug 19 08:15:16.276091 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 08:15:16.279889 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 08:15:16.296187 systemd[1]: Starting ensure-sysext.service... Aug 19 08:15:16.301341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:15:16.340346 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 08:15:16.340907 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 08:15:16.341219 systemd[1]: Reload requested from client PID 1308 ('systemctl') (unit ensure-sysext.service)... Aug 19 08:15:16.341250 systemd[1]: Reloading... Aug 19 08:15:16.341798 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 08:15:16.342456 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 08:15:16.344367 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 08:15:16.345038 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Aug 19 08:15:16.345265 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Aug 19 08:15:16.366293 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:15:16.366319 systemd-tmpfiles[1309]: Skipping /boot Aug 19 08:15:16.393298 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:15:16.393530 systemd-tmpfiles[1309]: Skipping /boot Aug 19 08:15:16.509474 zram_generator::config[1342]: No configuration found. Aug 19 08:15:16.752781 systemd[1]: Reloading finished in 410 ms. Aug 19 08:15:16.775931 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 08:15:16.796294 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:15:16.815953 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:15:16.823724 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 08:15:16.830352 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
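Editor's note: the (sd-merge) records above show systemd-sysext overlaying the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-gce' onto /usr; the kubernetes image arrives via the /etc/extensions/kubernetes.raw symlink written during the Ignition files stage earlier in this log. A minimal sketch, assuming only that extension images or symlinks sit under /etc/extensions (systemd-sysext also consults other hierarchy directories not inspected here), of listing what would be picked up:

# Minimal sketch: list extension images visible under /etc/extensions, the
# directory the Ignition files stage above populated. Only that one directory
# shown in this log is inspected.
from pathlib import Path

ext_dir = Path("/etc/extensions")
if ext_dir.is_dir():
    for entry in sorted(ext_dir.iterdir()):
        target = entry.resolve() if entry.is_symlink() else entry
        print(f"{entry.name} -> {target}")
else:
    print("no /etc/extensions directory on this host")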
Aug 19 08:15:16.839822 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:15:16.850149 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:15:16.857098 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 08:15:16.869512 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:16.869878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:15:16.874599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:15:16.885514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:15:16.892280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:15:16.896321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:15:16.896718 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:15:16.905857 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 08:15:16.909132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:16.919118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:16.919494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:15:16.919788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:15:16.919946 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:15:16.920419 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:16.922556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:15:16.923423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:15:16.943551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:16.945090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:15:16.958465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:15:16.972621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:15:16.982203 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 19 08:15:16.985345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 19 08:15:16.985583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:15:16.985921 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 08:15:16.989328 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:16.993204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:15:16.993554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:15:16.997892 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:15:17.002106 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:15:17.008671 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 08:15:17.026945 systemd-udevd[1386]: Using default interface naming scheme 'v255'. Aug 19 08:15:17.031847 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:15:17.041216 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 19 08:15:17.045514 systemd[1]: Finished ensure-sysext.service. Aug 19 08:15:17.049648 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:15:17.050466 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:15:17.067836 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 08:15:17.071959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:15:17.072517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:15:17.096960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:15:17.104637 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 08:15:17.123111 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 19 08:15:17.125352 augenrules[1422]: No rules Aug 19 08:15:17.126548 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 08:15:17.129682 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:15:17.130159 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:15:17.140093 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Aug 19 08:15:17.174695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:15:17.187056 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 08:15:17.207433 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:15:17.216157 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 08:15:17.248111 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Aug 19 08:15:17.343263 systemd-resolved[1381]: Positive Trust Anchors: Aug 19 08:15:17.344330 systemd-resolved[1381]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:15:17.344661 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:15:17.361469 systemd-resolved[1381]: Defaulting to hostname 'linux'. Aug 19 08:15:17.367295 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:15:17.378123 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:15:17.388221 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:15:17.397378 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 08:15:17.407282 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 08:15:17.417192 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 19 08:15:17.427490 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 08:15:17.436492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 08:15:17.447333 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 08:15:17.459220 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 08:15:17.459314 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:15:17.467214 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:15:17.480316 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 08:15:17.493509 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 08:15:17.511933 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 08:15:17.524430 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 08:15:17.535160 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 08:15:17.555648 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 08:15:17.566539 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 08:15:17.579624 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 08:15:17.593282 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Aug 19 08:15:17.598603 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 08:15:17.600685 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:15:17.611117 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:15:17.620230 systemd[1]: Reached target tpm2.target - Trusted Platform Module. 
Aug 19 08:15:17.623099 systemd-networkd[1448]: lo: Link UP Aug 19 08:15:17.623112 systemd-networkd[1448]: lo: Gained carrier Aug 19 08:15:17.629172 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:15:17.629234 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:15:17.630063 systemd-networkd[1448]: Enumeration completed Aug 19 08:15:17.630668 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:15:17.630676 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:15:17.634383 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 19 08:15:17.635597 systemd-networkd[1448]: eth0: Link UP Aug 19 08:15:17.636730 systemd-networkd[1448]: eth0: Gained carrier Aug 19 08:15:17.636771 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:15:17.647391 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 08:15:17.648540 systemd-networkd[1448]: eth0: DHCPv4 address 10.128.0.35/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 19 08:15:17.658921 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 08:15:17.676239 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 08:15:17.692524 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 08:15:17.701175 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 08:15:17.706046 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 19 08:15:17.718903 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 08:15:17.732364 systemd[1]: Started ntpd.service - Network Time Service. Aug 19 08:15:17.746904 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 08:15:17.770322 jq[1490]: false Aug 19 08:15:17.767431 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 08:15:17.782062 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing passwd entry cache Aug 19 08:15:17.778523 oslogin_cache_refresh[1492]: Refreshing passwd entry cache Aug 19 08:15:17.789387 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 08:15:17.795304 extend-filesystems[1491]: Found /dev/sda6 Aug 19 08:15:17.812330 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 08:15:17.819049 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting users, quitting Aug 19 08:15:17.819049 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:15:17.819049 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing group entry cache Aug 19 08:15:17.815872 oslogin_cache_refresh[1492]: Failure getting users, quitting Aug 19 08:15:17.815958 oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 19 08:15:17.816132 oslogin_cache_refresh[1492]: Refreshing group entry cache Aug 19 08:15:17.828751 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Aug 19 08:15:17.831206 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 08:15:17.839036 extend-filesystems[1491]: Found /dev/sda9 Aug 19 08:15:17.837683 oslogin_cache_refresh[1492]: Failure getting groups, quitting Aug 19 08:15:17.839277 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting groups, quitting Aug 19 08:15:17.839277 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:15:17.837730 oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:15:17.840750 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 08:15:17.868233 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 19 08:15:17.868391 extend-filesystems[1491]: Checking size of /dev/sda9 Aug 19 08:15:17.924460 kernel: ACPI: button: Power Button [PWRF] Aug 19 08:15:17.924525 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Aug 19 08:15:17.876217 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 08:15:17.924686 extend-filesystems[1491]: Resized partition /dev/sda9 Aug 19 08:15:17.972212 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Aug 19 08:15:17.978859 kernel: ACPI: button: Sleep Button [SLPF] Aug 19 08:15:17.978919 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Aug 19 08:15:17.979486 extend-filesystems[1519]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 08:15:17.979486 extend-filesystems[1519]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 19 08:15:17.979486 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 2 Aug 19 08:15:17.979486 extend-filesystems[1519]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Aug 19 08:15:17.945324 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Aug 19 08:15:18.072929 coreos-metadata[1487]: Aug 19 08:15:17.981 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Aug 19 08:15:18.072929 coreos-metadata[1487]: Aug 19 08:15:17.991 INFO Fetch successful Aug 19 08:15:18.072929 coreos-metadata[1487]: Aug 19 08:15:17.992 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Aug 19 08:15:18.072929 coreos-metadata[1487]: Aug 19 08:15:17.994 INFO Fetch successful Aug 19 08:15:18.072929 coreos-metadata[1487]: Aug 19 08:15:17.997 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Aug 19 08:15:18.072929 coreos-metadata[1487]: Aug 19 08:15:17.999 INFO Fetch successful Aug 19 08:15:18.072929 coreos-metadata[1487]: Aug 19 08:15:17.999 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Aug 19 08:15:18.072929 coreos-metadata[1487]: Aug 19 08:15:17.999 INFO Fetch successful Aug 19 08:15:18.057990 ntpd[1496]: ntpd 4.2.8p17@1.4004-o Mon Aug 18 21:32:19 UTC 2025 (1): Starting Aug 19 08:15:18.074857 extend-filesystems[1491]: Resized filesystem in /dev/sda9 Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: ntpd 4.2.8p17@1.4004-o Mon Aug 18 21:32:19 UTC 2025 (1): Starting Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: ---------------------------------------------------- Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: ntp-4 is maintained by Network Time Foundation, Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: corporation. Support and training for ntp-4 are Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: available at https://www.nwtime.org/support Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: ---------------------------------------------------- Aug 19 08:15:18.102380 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: proto: precision = 0.111 usec (-23) Aug 19 08:15:17.967064 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 08:15:18.067096 ntpd[1496]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 19 08:15:17.983118 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 08:15:18.128749 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: basedate set to 2025-08-06 Aug 19 08:15:18.128749 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: gps base set to 2025-08-10 (week 2379) Aug 19 08:15:18.067122 ntpd[1496]: ---------------------------------------------------- Aug 19 08:15:17.983463 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 08:15:18.129066 update_engine[1508]: I20250819 08:15:18.125577 1508 main.cc:92] Flatcar Update Engine starting Aug 19 08:15:18.067138 ntpd[1496]: ntp-4 is maintained by Network Time Foundation, Aug 19 08:15:17.983967 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 08:15:18.129695 jq[1514]: true Aug 19 08:15:18.067151 ntpd[1496]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 19 08:15:17.985891 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 08:15:18.067167 ntpd[1496]: corporation. 
Support and training for ntp-4 are Aug 19 08:15:18.149614 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: Listen and drop on 0 v6wildcard [::]:123 Aug 19 08:15:18.149614 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 19 08:15:17.995902 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 19 08:15:18.149773 jq[1526]: true Aug 19 08:15:18.067181 ntpd[1496]: available at https://www.nwtime.org/support Aug 19 08:15:17.997353 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 19 08:15:18.067195 ntpd[1496]: ---------------------------------------------------- Aug 19 08:15:18.006736 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 08:15:18.100616 ntpd[1496]: proto: precision = 0.111 usec (-23) Aug 19 08:15:18.008276 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 08:15:18.115955 ntpd[1496]: basedate set to 2025-08-06 Aug 19 08:15:18.019632 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 08:15:18.115987 ntpd[1496]: gps base set to 2025-08-10 (week 2379) Aug 19 08:15:18.021161 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 08:15:18.148177 ntpd[1496]: Listen and drop on 0 v6wildcard [::]:123 Aug 19 08:15:18.140983 systemd[1]: Reached target network.target - Network. Aug 19 08:15:18.148266 ntpd[1496]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 19 08:15:18.155101 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 08:15:18.160423 ntpd[1496]: Listen normally on 2 lo 127.0.0.1:123 Aug 19 08:15:18.160529 ntpd[1496]: Listen normally on 3 eth0 10.128.0.35:123 Aug 19 08:15:18.160624 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: Listen normally on 2 lo 127.0.0.1:123 Aug 19 08:15:18.160624 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: Listen normally on 3 eth0 10.128.0.35:123 Aug 19 08:15:18.160624 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: Listen normally on 4 lo [::1]:123 Aug 19 08:15:18.160605 ntpd[1496]: Listen normally on 4 lo [::1]:123 Aug 19 08:15:18.160811 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: bind(21) AF_INET6 fe80::4001:aff:fe80:23%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 08:15:18.160811 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:23%2#123 Aug 19 08:15:18.160811 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: failed to init interface for address fe80::4001:aff:fe80:23%2 Aug 19 08:15:18.160811 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: Listening on routing socket on fd #21 for interface updates Aug 19 08:15:18.160692 ntpd[1496]: bind(21) AF_INET6 fe80::4001:aff:fe80:23%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 08:15:18.160725 ntpd[1496]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:23%2#123 Aug 19 08:15:18.160746 ntpd[1496]: failed to init interface for address fe80::4001:aff:fe80:23%2 Aug 19 08:15:18.160797 ntpd[1496]: Listening on routing socket on fd #21 for interface updates Aug 19 08:15:18.170335 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Aug 19 08:15:18.181120 ntpd[1496]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 08:15:18.187204 kernel: mousedev: PS/2 mouse device common for all mice Aug 19 08:15:18.187255 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 08:15:18.187255 ntpd[1496]: 19 Aug 08:15:18 ntpd[1496]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 08:15:18.185811 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 08:15:18.181176 ntpd[1496]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 19 08:15:18.315199 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 19 08:15:18.325613 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 08:15:18.330098 bash[1577]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:15:18.336124 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 08:15:18.364567 systemd[1]: Starting sshkeys.service... Aug 19 08:15:18.402579 tar[1524]: linux-amd64/helm Aug 19 08:15:18.403154 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Aug 19 08:15:18.414779 (ntainerd)[1576]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 08:15:18.462989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:18.487582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 19 08:15:18.505661 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 08:15:18.527312 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 19 08:15:18.539486 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 19 08:15:18.591853 kernel: EDAC MC: Ver: 3.0.0 Aug 19 08:15:18.624424 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 08:15:18.662329 dbus-daemon[1488]: [system] SELinux support is enabled Aug 19 08:15:18.666980 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 08:15:18.712385 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 08:15:18.712671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 08:15:18.713112 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 08:15:18.713337 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Aug 19 08:15:18.725352 dbus-daemon[1488]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1448 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 19 08:15:18.739895 dbus-daemon[1488]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 19 08:15:18.751491 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 19 08:15:18.757911 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 19 08:15:18.763748 systemd-logind[1505]: Watching system buttons on /dev/input/event2 (Power Button) Aug 19 08:15:18.763789 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 19 08:15:18.764744 systemd[1]: Started update-engine.service - Update Engine. Aug 19 08:15:18.766650 update_engine[1508]: I20250819 08:15:18.766103 1508 update_check_scheduler.cc:74] Next update check in 7m45s Aug 19 08:15:18.772175 systemd-logind[1505]: New seat seat0. Aug 19 08:15:18.783943 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 08:15:18.789806 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 08:15:18.863176 systemd-logind[1505]: Watching system buttons on /dev/input/event3 (Sleep Button) Aug 19 08:15:18.880936 coreos-metadata[1584]: Aug 19 08:15:18.880 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Aug 19 08:15:18.897821 coreos-metadata[1584]: Aug 19 08:15:18.895 INFO Fetch failed with 404: resource not found Aug 19 08:15:18.898040 coreos-metadata[1584]: Aug 19 08:15:18.897 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Aug 19 08:15:18.901804 coreos-metadata[1584]: Aug 19 08:15:18.901 INFO Fetch successful Aug 19 08:15:18.901804 coreos-metadata[1584]: Aug 19 08:15:18.901 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Aug 19 08:15:18.912641 coreos-metadata[1584]: Aug 19 08:15:18.905 INFO Fetch failed with 404: resource not found Aug 19 08:15:18.912641 coreos-metadata[1584]: Aug 19 08:15:18.905 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Aug 19 08:15:18.912641 coreos-metadata[1584]: Aug 19 08:15:18.908 INFO Fetch failed with 404: resource not found Aug 19 08:15:18.912641 coreos-metadata[1584]: Aug 19 08:15:18.908 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Aug 19 08:15:18.915883 coreos-metadata[1584]: Aug 19 08:15:18.915 INFO Fetch successful Aug 19 08:15:18.920537 unknown[1584]: wrote ssh authorized keys file for user: core Aug 19 08:15:19.023704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
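The ssh-key lookups above walk the GCE metadata server at 169.254.169.254, trying instance attributes before project attributes and treating a 404 as "attribute not set". A minimal Go sketch of that request pattern, using only the standard library, with the endpoint paths taken from the log (the Metadata-Flavor: Google header is required by the metadata server; real agents also retry and honor block-project-ssh-keys, which this sketch omits):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetch returns the body of a metadata path; a 404 simply means the attribute is not set.
func fetch(path string) (string, error) {
	req, err := http.NewRequest("GET", "http://169.254.169.254/computeMetadata/v1/"+path, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google") // required by the GCE metadata server
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("%s: %s", path, resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Same fallback order the agent logs above: instance attributes first, then project attributes.
	for _, p := range []string{
		"instance/attributes/ssh-keys",
		"project/attributes/ssh-keys",
	} {
		if keys, err := fetch(p); err == nil {
			fmt.Print(keys)
			return
		}
	}
}
```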
Aug 19 08:15:19.069493 ntpd[1496]: bind(24) AF_INET6 fe80::4001:aff:fe80:23%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 08:15:19.072065 ntpd[1496]: 19 Aug 08:15:19 ntpd[1496]: bind(24) AF_INET6 fe80::4001:aff:fe80:23%2#123 flags 0x11 failed: Cannot assign requested address Aug 19 08:15:19.072065 ntpd[1496]: 19 Aug 08:15:19 ntpd[1496]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:23%2#123 Aug 19 08:15:19.072065 ntpd[1496]: 19 Aug 08:15:19 ntpd[1496]: failed to init interface for address fe80::4001:aff:fe80:23%2 Aug 19 08:15:19.071477 ntpd[1496]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:23%2#123 Aug 19 08:15:19.071503 ntpd[1496]: failed to init interface for address fe80::4001:aff:fe80:23%2 Aug 19 08:15:19.106175 update-ssh-keys[1601]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:15:19.107866 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 19 08:15:19.126222 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 08:15:19.135215 systemd[1]: Finished sshkeys.service. Aug 19 08:15:19.239220 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 19 08:15:19.259412 dbus-daemon[1488]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 19 08:15:19.264861 dbus-daemon[1488]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1595 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 19 08:15:19.271512 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 08:15:19.294133 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 08:15:19.306655 systemd[1]: Starting polkit.service - Authorization Manager... Aug 19 08:15:19.350875 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 08:15:19.351281 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 08:15:19.356368 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 08:15:19.367052 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 08:15:19.416859 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 08:15:19.435217 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 08:15:19.447782 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 08:15:19.457528 systemd[1]: Reached target getty.target - Login Prompts. 
Aug 19 08:15:19.524173 containerd[1576]: time="2025-08-19T08:15:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 08:15:19.525364 containerd[1576]: time="2025-08-19T08:15:19.525231710Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 08:15:19.538960 polkitd[1625]: Started polkitd version 126 Aug 19 08:15:19.552289 containerd[1576]: time="2025-08-19T08:15:19.552231957Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="23.233µs" Aug 19 08:15:19.552486 containerd[1576]: time="2025-08-19T08:15:19.552458193Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 08:15:19.552593 containerd[1576]: time="2025-08-19T08:15:19.552573031Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 08:15:19.552907 containerd[1576]: time="2025-08-19T08:15:19.552876129Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 08:15:19.553064 containerd[1576]: time="2025-08-19T08:15:19.553038412Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 08:15:19.553289 polkitd[1625]: Loading rules from directory /etc/polkit-1/rules.d Aug 19 08:15:19.553900 containerd[1576]: time="2025-08-19T08:15:19.553454462Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:15:19.553900 containerd[1576]: time="2025-08-19T08:15:19.553569385Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:15:19.553900 containerd[1576]: time="2025-08-19T08:15:19.553589202Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:15:19.553977 polkitd[1625]: Loading rules from directory /run/polkit-1/rules.d Aug 19 08:15:19.554078 polkitd[1625]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 19 08:15:19.554760 polkitd[1625]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 19 08:15:19.556427 systemd[1]: Started polkit.service - Authorization Manager. 
Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556133913Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556167784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556191209Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556205118Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556347108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556794691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556847602Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556866239Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 08:15:19.557466 containerd[1576]: time="2025-08-19T08:15:19.556918335Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 08:15:19.554869 polkitd[1625]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 19 08:15:19.554930 polkitd[1625]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 19 08:15:19.555879 polkitd[1625]: Finished loading, compiling and executing 2 rules Aug 19 08:15:19.557308 dbus-daemon[1488]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 19 08:15:19.557749 polkitd[1625]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 19 08:15:19.558926 containerd[1576]: time="2025-08-19T08:15:19.558591017Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 08:15:19.558926 containerd[1576]: time="2025-08-19T08:15:19.558726412Z" level=info msg="metadata content store policy set" policy=shared Aug 19 08:15:19.566569 containerd[1576]: time="2025-08-19T08:15:19.566517624Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 08:15:19.566675 containerd[1576]: time="2025-08-19T08:15:19.566603335Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 08:15:19.566675 containerd[1576]: time="2025-08-19T08:15:19.566628482Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 08:15:19.566675 containerd[1576]: time="2025-08-19T08:15:19.566647488Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service 
type=io.containerd.service.v1 Aug 19 08:15:19.566675 containerd[1576]: time="2025-08-19T08:15:19.566666790Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 08:15:19.566831 containerd[1576]: time="2025-08-19T08:15:19.566683813Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 08:15:19.566831 containerd[1576]: time="2025-08-19T08:15:19.566707064Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 08:15:19.566831 containerd[1576]: time="2025-08-19T08:15:19.566726356Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 08:15:19.566831 containerd[1576]: time="2025-08-19T08:15:19.566744209Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 08:15:19.566831 containerd[1576]: time="2025-08-19T08:15:19.566760831Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 08:15:19.566831 containerd[1576]: time="2025-08-19T08:15:19.566776515Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 08:15:19.566831 containerd[1576]: time="2025-08-19T08:15:19.566796680Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 08:15:19.567578 containerd[1576]: time="2025-08-19T08:15:19.567532436Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567588497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567617026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567637985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567656213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567675161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567694140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567712100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567732248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567752826Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567773031Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567872900Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 08:15:19.567890 containerd[1576]: time="2025-08-19T08:15:19.567895663Z" level=info msg="Start snapshots syncer" Aug 19 08:15:19.568417 containerd[1576]: time="2025-08-19T08:15:19.567929382Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 08:15:19.568466 containerd[1576]: time="2025-08-19T08:15:19.568419490Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 08:15:19.568649 containerd[1576]: time="2025-08-19T08:15:19.568511419Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 08:15:19.568712 containerd[1576]: time="2025-08-19T08:15:19.568644857Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.568877703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.568922704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.568941550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.568959078Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.568978358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 08:15:19.569985 containerd[1576]: 
time="2025-08-19T08:15:19.568995676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.569033340Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.569070726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.569089222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.569107892Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.569160300Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.569183284Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:15:19.569985 containerd[1576]: time="2025-08-19T08:15:19.569197177Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569271091Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569286919Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569303612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569321057Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569349741Z" level=info msg="runtime interface created" Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569359743Z" level=info msg="created NRI interface" Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569373792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569392741Z" level=info msg="Connect containerd service" Aug 19 08:15:19.570633 containerd[1576]: time="2025-08-19T08:15:19.569433262Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 08:15:19.570983 containerd[1576]: time="2025-08-19T08:15:19.570703835Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:15:19.603136 systemd-hostnamed[1595]: Hostname set to (transient) Aug 19 08:15:19.605581 systemd-resolved[1381]: System hostname changed to 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal'. 
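The CRI config dump above names /run/containerd/containerd.sock as the containerd endpoint. A minimal sketch of connecting to that socket with the containerd Go client, assuming the v1 client module github.com/containerd/containerd (k8s.io is the namespace Kubernetes-managed containers live in):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the endpoint named in the CRI config dump above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Scope requests to the namespace Kubernetes containers are created in.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ver, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("containerd", ver.Version, "revision", ver.Revision)

	// List any containers already present in the namespace.
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println("container:", c.ID())
	}
}
```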
Aug 19 08:15:19.627824 systemd-networkd[1448]: eth0: Gained IPv6LL Aug 19 08:15:19.641135 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 08:15:19.652084 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 08:15:19.686675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:19.700927 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 08:15:19.714625 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Aug 19 08:15:19.766700 init.sh[1654]: + '[' -e /etc/default/instance_configs.cfg.template ']' Aug 19 08:15:19.771352 init.sh[1654]: + echo -e '[InstanceSetup]\nset_host_keys = false' Aug 19 08:15:19.772478 init.sh[1654]: + /usr/bin/google_instance_setup Aug 19 08:15:19.832683 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 08:15:19.883258 containerd[1576]: time="2025-08-19T08:15:19.882858081Z" level=info msg="Start subscribing containerd event" Aug 19 08:15:19.883471 containerd[1576]: time="2025-08-19T08:15:19.883391291Z" level=info msg="Start recovering state" Aug 19 08:15:19.883594 containerd[1576]: time="2025-08-19T08:15:19.883569939Z" level=info msg="Start event monitor" Aug 19 08:15:19.883649 containerd[1576]: time="2025-08-19T08:15:19.883602315Z" level=info msg="Start cni network conf syncer for default" Aug 19 08:15:19.883649 containerd[1576]: time="2025-08-19T08:15:19.883623598Z" level=info msg="Start streaming server" Aug 19 08:15:19.883649 containerd[1576]: time="2025-08-19T08:15:19.883637199Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 08:15:19.883762 containerd[1576]: time="2025-08-19T08:15:19.883649317Z" level=info msg="runtime interface starting up..." Aug 19 08:15:19.883762 containerd[1576]: time="2025-08-19T08:15:19.883659304Z" level=info msg="starting plugins..." Aug 19 08:15:19.883762 containerd[1576]: time="2025-08-19T08:15:19.883681796Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 08:15:19.885600 containerd[1576]: time="2025-08-19T08:15:19.885430226Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 08:15:19.885600 containerd[1576]: time="2025-08-19T08:15:19.885561392Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 08:15:19.885875 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 08:15:19.886440 containerd[1576]: time="2025-08-19T08:15:19.885711966Z" level=info msg="containerd successfully booted in 0.363518s" Aug 19 08:15:19.914541 tar[1524]: linux-amd64/LICENSE Aug 19 08:15:19.916702 tar[1524]: linux-amd64/README.md Aug 19 08:15:19.939316 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 08:15:20.297148 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 08:15:20.309789 systemd[1]: Started sshd@0-10.128.0.35:22-147.75.109.163:44594.service - OpenSSH per-connection server daemon (147.75.109.163:44594). Aug 19 08:15:20.391956 instance-setup[1658]: INFO Running google_set_multiqueue. Aug 19 08:15:20.418699 instance-setup[1658]: INFO Set channels for eth0 to 2. Aug 19 08:15:20.424627 instance-setup[1658]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Aug 19 08:15:20.427020 instance-setup[1658]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Aug 19 08:15:20.427224 instance-setup[1658]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. 
Aug 19 08:15:20.429267 instance-setup[1658]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Aug 19 08:15:20.429528 instance-setup[1658]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Aug 19 08:15:20.431274 instance-setup[1658]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Aug 19 08:15:20.431492 instance-setup[1658]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Aug 19 08:15:20.433164 instance-setup[1658]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Aug 19 08:15:20.443588 instance-setup[1658]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Aug 19 08:15:20.448574 instance-setup[1658]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Aug 19 08:15:20.450342 instance-setup[1658]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Aug 19 08:15:20.450406 instance-setup[1658]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Aug 19 08:15:20.473809 init.sh[1654]: + /usr/bin/google_metadata_script_runner --script-type startup Aug 19 08:15:20.658160 startup-script[1706]: INFO Starting startup scripts. Aug 19 08:15:20.663490 startup-script[1706]: INFO No startup scripts found in metadata. Aug 19 08:15:20.663567 startup-script[1706]: INFO Finished running startup scripts. Aug 19 08:15:20.686825 init.sh[1654]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Aug 19 08:15:20.686825 init.sh[1654]: + daemon_pids=() Aug 19 08:15:20.686825 init.sh[1654]: + for d in accounts clock_skew network Aug 19 08:15:20.687699 init.sh[1709]: + /usr/bin/google_accounts_daemon Aug 19 08:15:20.688585 init.sh[1654]: + daemon_pids+=($!) Aug 19 08:15:20.688585 init.sh[1654]: + for d in accounts clock_skew network Aug 19 08:15:20.688585 init.sh[1654]: + daemon_pids+=($!) Aug 19 08:15:20.688585 init.sh[1654]: + for d in accounts clock_skew network Aug 19 08:15:20.688914 init.sh[1654]: + daemon_pids+=($!) Aug 19 08:15:20.689152 init.sh[1710]: + /usr/bin/google_clock_skew_daemon Aug 19 08:15:20.689478 init.sh[1711]: + /usr/bin/google_network_daemon Aug 19 08:15:20.691348 init.sh[1654]: + NOTIFY_SOCKET=/run/systemd/notify Aug 19 08:15:20.691348 init.sh[1654]: + /usr/bin/systemd-notify --ready Aug 19 08:15:20.712156 sshd[1675]: Accepted publickey for core from 147.75.109.163 port 44594 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:15:20.723230 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:20.726774 systemd[1]: Started oem-gce.service - GCE Linux Agent. Aug 19 08:15:20.743439 init.sh[1654]: + wait -n 1709 1710 1711 Aug 19 08:15:20.749728 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 08:15:20.763424 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 08:15:20.809801 systemd-logind[1505]: New session 1 of user core. Aug 19 08:15:20.824426 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 08:15:20.844182 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 08:15:20.881319 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 08:15:20.888250 systemd-logind[1505]: New session c1 of user core. Aug 19 08:15:21.289672 systemd[1715]: Queued start job for default target default.target. 
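For reference, the google_set_multiqueue output above pins transmit-queue XPS masks by writing hex CPU masks into sysfs (queue 0 gets mask 1, queue 1 gets mask 2). A hypothetical Go sketch of the same writes, with the paths and values taken from the log (requires root; the interface name eth0 is assumed):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// XPS masks as logged above: tx-0 -> CPU 0 (mask 0x1), tx-1 -> CPU 1 (mask 0x2).
	masks := []struct{ path, mask string }{
		{"/sys/class/net/eth0/queues/tx-0/xps_cpus", "1"},
		{"/sys/class/net/eth0/queues/tx-1/xps_cpus", "2"},
	}
	for _, m := range masks {
		if err := os.WriteFile(m.path, []byte(m.mask+"\n"), 0o644); err != nil {
			log.Fatalf("writing %s: %v", m.path, err)
		}
		fmt.Printf("set %s = %s\n", m.path, m.mask)
	}
}
```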
Aug 19 08:15:21.295693 systemd[1715]: Created slice app.slice - User Application Slice. Aug 19 08:15:21.295742 systemd[1715]: Reached target paths.target - Paths. Aug 19 08:15:21.295808 systemd[1715]: Reached target timers.target - Timers. Aug 19 08:15:21.303237 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 08:15:21.340115 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 08:15:21.340496 systemd[1715]: Reached target sockets.target - Sockets. Aug 19 08:15:21.341149 systemd[1715]: Reached target basic.target - Basic System. Aug 19 08:15:21.341307 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 08:15:21.341587 systemd[1715]: Reached target default.target - Main User Target. Aug 19 08:15:21.341659 systemd[1715]: Startup finished in 433ms. Aug 19 08:15:21.357300 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 08:15:21.357871 google-clock-skew[1710]: INFO Starting Google Clock Skew daemon. Aug 19 08:15:21.367832 google-clock-skew[1710]: INFO Clock drift token has changed: 0. Aug 19 08:15:21.429504 google-networking[1711]: INFO Starting Google Networking daemon. Aug 19 08:15:21.446535 groupadd[1731]: group added to /etc/group: name=google-sudoers, GID=1000 Aug 19 08:15:21.450290 groupadd[1731]: group added to /etc/gshadow: name=google-sudoers Aug 19 08:15:21.000877 systemd-resolved[1381]: Clock change detected. Flushing caches. Aug 19 08:15:21.018451 systemd-journald[1165]: Time jumped backwards, rotating. Aug 19 08:15:21.008882 google-clock-skew[1710]: INFO Synced system time with hardware clock. Aug 19 08:15:21.064967 groupadd[1731]: new group: name=google-sudoers, GID=1000 Aug 19 08:15:21.119713 google-accounts[1709]: INFO Starting Google Accounts daemon. Aug 19 08:15:21.135896 google-accounts[1709]: WARNING OS Login not installed. Aug 19 08:15:21.139295 google-accounts[1709]: INFO Creating a new user account for 0. Aug 19 08:15:21.141484 systemd[1]: Started sshd@1-10.128.0.35:22-147.75.109.163:44598.service - OpenSSH per-connection server daemon (147.75.109.163:44598). Aug 19 08:15:21.149085 init.sh[1743]: useradd: invalid user name '0': use --badname to ignore Aug 19 08:15:21.149522 google-accounts[1709]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Aug 19 08:15:21.297855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:21.308952 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 08:15:21.314380 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:15:21.318848 systemd[1]: Startup finished in 4.273s (kernel) + 7.463s (initrd) + 8.540s (userspace) = 20.278s. Aug 19 08:15:21.454232 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 44598 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:15:21.456247 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:21.465347 systemd-logind[1505]: New session 2 of user core. Aug 19 08:15:21.471065 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 19 08:15:21.606437 ntpd[1496]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:23%2]:123 Aug 19 08:15:21.606956 ntpd[1496]: 19 Aug 08:15:21 ntpd[1496]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:23%2]:123 Aug 19 08:15:21.669038 sshd[1758]: Connection closed by 147.75.109.163 port 44598 Aug 19 08:15:21.670847 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:21.678539 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit. Aug 19 08:15:21.678748 systemd[1]: sshd@1-10.128.0.35:22-147.75.109.163:44598.service: Deactivated successfully. Aug 19 08:15:21.682288 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 08:15:21.686433 systemd-logind[1505]: Removed session 2. Aug 19 08:15:21.728160 systemd[1]: Started sshd@2-10.128.0.35:22-147.75.109.163:44614.service - OpenSSH per-connection server daemon (147.75.109.163:44614). Aug 19 08:15:22.048293 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 44614 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:15:22.050090 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:22.061483 systemd-logind[1505]: New session 3 of user core. Aug 19 08:15:22.068077 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 08:15:22.258232 sshd[1771]: Connection closed by 147.75.109.163 port 44614 Aug 19 08:15:22.259094 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:22.266586 systemd[1]: sshd@2-10.128.0.35:22-147.75.109.163:44614.service: Deactivated successfully. Aug 19 08:15:22.269835 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 08:15:22.271421 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit. Aug 19 08:15:22.274362 systemd-logind[1505]: Removed session 3. Aug 19 08:15:22.297876 kubelet[1753]: E0819 08:15:22.297753 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:15:22.314751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:15:22.315037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:15:22.315572 systemd[1]: kubelet.service: Consumed 1.333s CPU time, 268.7M memory peak. Aug 19 08:15:22.320247 systemd[1]: Started sshd@3-10.128.0.35:22-147.75.109.163:44630.service - OpenSSH per-connection server daemon (147.75.109.163:44630). Aug 19 08:15:22.631469 sshd[1779]: Accepted publickey for core from 147.75.109.163 port 44630 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:15:22.633666 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:22.641818 systemd-logind[1505]: New session 4 of user core. Aug 19 08:15:22.649995 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 08:15:22.848868 sshd[1782]: Connection closed by 147.75.109.163 port 44630 Aug 19 08:15:22.849898 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:22.856605 systemd[1]: sshd@3-10.128.0.35:22-147.75.109.163:44630.service: Deactivated successfully. Aug 19 08:15:22.859507 systemd[1]: session-4.scope: Deactivated successfully. 
Aug 19 08:15:22.860908 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit. Aug 19 08:15:22.862948 systemd-logind[1505]: Removed session 4. Aug 19 08:15:22.901724 systemd[1]: Started sshd@4-10.128.0.35:22-147.75.109.163:44646.service - OpenSSH per-connection server daemon (147.75.109.163:44646). Aug 19 08:15:23.214728 sshd[1788]: Accepted publickey for core from 147.75.109.163 port 44646 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:15:23.216696 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:23.224814 systemd-logind[1505]: New session 5 of user core. Aug 19 08:15:23.232038 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 19 08:15:23.413823 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 08:15:23.414354 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:15:23.430811 sudo[1792]: pam_unix(sudo:session): session closed for user root Aug 19 08:15:23.474807 sshd[1791]: Connection closed by 147.75.109.163 port 44646 Aug 19 08:15:23.476686 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:23.484540 systemd[1]: sshd@4-10.128.0.35:22-147.75.109.163:44646.service: Deactivated successfully. Aug 19 08:15:23.487442 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 08:15:23.488929 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit. Aug 19 08:15:23.491310 systemd-logind[1505]: Removed session 5. Aug 19 08:15:23.532203 systemd[1]: Started sshd@5-10.128.0.35:22-147.75.109.163:44660.service - OpenSSH per-connection server daemon (147.75.109.163:44660). Aug 19 08:15:23.856354 sshd[1798]: Accepted publickey for core from 147.75.109.163 port 44660 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:15:23.858413 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:23.866765 systemd-logind[1505]: New session 6 of user core. Aug 19 08:15:23.873998 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 08:15:24.039757 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 08:15:24.040279 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:15:24.048603 sudo[1803]: pam_unix(sudo:session): session closed for user root Aug 19 08:15:24.064730 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 08:15:24.065262 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:15:24.080074 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:15:24.137212 augenrules[1825]: No rules Aug 19 08:15:24.138880 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:15:24.139277 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:15:24.140877 sudo[1802]: pam_unix(sudo:session): session closed for user root Aug 19 08:15:24.184630 sshd[1801]: Connection closed by 147.75.109.163 port 44660 Aug 19 08:15:24.185663 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:24.193090 systemd[1]: sshd@5-10.128.0.35:22-147.75.109.163:44660.service: Deactivated successfully. Aug 19 08:15:24.195817 systemd[1]: session-6.scope: Deactivated successfully. 
Aug 19 08:15:24.197304 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit. Aug 19 08:15:24.199281 systemd-logind[1505]: Removed session 6. Aug 19 08:15:24.241101 systemd[1]: Started sshd@6-10.128.0.35:22-147.75.109.163:44676.service - OpenSSH per-connection server daemon (147.75.109.163:44676). Aug 19 08:15:24.550053 sshd[1834]: Accepted publickey for core from 147.75.109.163 port 44676 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:15:24.552163 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:24.559814 systemd-logind[1505]: New session 7 of user core. Aug 19 08:15:24.566988 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 19 08:15:24.732148 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 08:15:24.732672 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:15:25.211166 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 19 08:15:25.241685 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 08:15:25.588461 dockerd[1855]: time="2025-08-19T08:15:25.588353109Z" level=info msg="Starting up" Aug 19 08:15:25.594330 dockerd[1855]: time="2025-08-19T08:15:25.594279660Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 08:15:25.613133 dockerd[1855]: time="2025-08-19T08:15:25.613052352Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 08:15:25.745628 dockerd[1855]: time="2025-08-19T08:15:25.745553364Z" level=info msg="Loading containers: start." Aug 19 08:15:25.764784 kernel: Initializing XFRM netlink socket Aug 19 08:15:26.126573 systemd-networkd[1448]: docker0: Link UP Aug 19 08:15:26.132577 dockerd[1855]: time="2025-08-19T08:15:26.132510924Z" level=info msg="Loading containers: done." Aug 19 08:15:26.154131 dockerd[1855]: time="2025-08-19T08:15:26.153499498Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 08:15:26.154131 dockerd[1855]: time="2025-08-19T08:15:26.153648472Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 08:15:26.154131 dockerd[1855]: time="2025-08-19T08:15:26.153835439Z" level=info msg="Initializing buildkit" Aug 19 08:15:26.189351 dockerd[1855]: time="2025-08-19T08:15:26.189274243Z" level=info msg="Completed buildkit initialization" Aug 19 08:15:26.200786 dockerd[1855]: time="2025-08-19T08:15:26.200679363Z" level=info msg="Daemon has completed initialization" Aug 19 08:15:26.200994 dockerd[1855]: time="2025-08-19T08:15:26.200797941Z" level=info msg="API listen on /run/docker.sock" Aug 19 08:15:26.201468 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 08:15:27.139649 containerd[1576]: time="2025-08-19T08:15:27.139571801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Aug 19 08:15:27.832724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514626461.mount: Deactivated successfully. 
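dockerd above reports "API listen on /run/docker.sock". A minimal Go sketch of querying that Engine API over the unix socket with the standard library (GET /version returns the daemon and API version as JSON; the "docker" host in the URL is a placeholder, since the custom transport always dials the socket):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Route every request over the unix socket the daemon listens on.
	transport := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
		},
	}
	client := &http.Client{Transport: transport}

	resp, err := client.Get("http://docker/version")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // JSON describing the daemon version and API version
}
```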
Aug 19 08:15:29.494448 containerd[1576]: time="2025-08-19T08:15:29.494364283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:29.495850 containerd[1576]: time="2025-08-19T08:15:29.495793665Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28086259" Aug 19 08:15:29.497308 containerd[1576]: time="2025-08-19T08:15:29.497236918Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:29.501515 containerd[1576]: time="2025-08-19T08:15:29.501408629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:29.503089 containerd[1576]: time="2025-08-19T08:15:29.502995055Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.363363237s" Aug 19 08:15:29.503089 containerd[1576]: time="2025-08-19T08:15:29.503051802Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Aug 19 08:15:29.504450 containerd[1576]: time="2025-08-19T08:15:29.504407619Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Aug 19 08:15:30.939236 containerd[1576]: time="2025-08-19T08:15:30.939160427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:30.940547 containerd[1576]: time="2025-08-19T08:15:30.940488604Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24716615" Aug 19 08:15:30.942035 containerd[1576]: time="2025-08-19T08:15:30.941969905Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:30.945065 containerd[1576]: time="2025-08-19T08:15:30.944998787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:30.946541 containerd[1576]: time="2025-08-19T08:15:30.946310934Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.441865688s" Aug 19 08:15:30.946541 containerd[1576]: time="2025-08-19T08:15:30.946354796Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Aug 19 
08:15:30.947410 containerd[1576]: time="2025-08-19T08:15:30.947187254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Aug 19 08:15:32.216193 containerd[1576]: time="2025-08-19T08:15:32.216113927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:32.217596 containerd[1576]: time="2025-08-19T08:15:32.217537613Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18784343" Aug 19 08:15:32.219415 containerd[1576]: time="2025-08-19T08:15:32.219347034Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:32.222710 containerd[1576]: time="2025-08-19T08:15:32.222642524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:32.224221 containerd[1576]: time="2025-08-19T08:15:32.223990084Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.276526243s" Aug 19 08:15:32.224221 containerd[1576]: time="2025-08-19T08:15:32.224037891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Aug 19 08:15:32.224953 containerd[1576]: time="2025-08-19T08:15:32.224922205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Aug 19 08:15:32.552999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 08:15:32.555429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:33.006342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:33.021422 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:15:33.101526 kubelet[2134]: E0819 08:15:33.101450 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:15:33.110223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:15:33.110506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:15:33.111247 systemd[1]: kubelet.service: Consumed 244ms CPU time, 110.2M memory peak. Aug 19 08:15:33.627134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704539322.mount: Deactivated successfully. 
Aug 19 08:15:34.291116 containerd[1576]: time="2025-08-19T08:15:34.291023657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:34.292513 containerd[1576]: time="2025-08-19T08:15:34.292442660Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30386150" Aug 19 08:15:34.294198 containerd[1576]: time="2025-08-19T08:15:34.294105651Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:34.296914 containerd[1576]: time="2025-08-19T08:15:34.296844872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:34.297886 containerd[1576]: time="2025-08-19T08:15:34.297839953Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.072754947s" Aug 19 08:15:34.298045 containerd[1576]: time="2025-08-19T08:15:34.297905756Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Aug 19 08:15:34.298930 containerd[1576]: time="2025-08-19T08:15:34.298903739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 19 08:15:34.766433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876042970.mount: Deactivated successfully. 
Aug 19 08:15:36.003780 containerd[1576]: time="2025-08-19T08:15:36.003683517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:36.008300 containerd[1576]: time="2025-08-19T08:15:36.008223222Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Aug 19 08:15:36.012548 containerd[1576]: time="2025-08-19T08:15:36.011930563Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:36.016171 containerd[1576]: time="2025-08-19T08:15:36.016122302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:36.017524 containerd[1576]: time="2025-08-19T08:15:36.017479862Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.718467721s" Aug 19 08:15:36.017703 containerd[1576]: time="2025-08-19T08:15:36.017678362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 19 08:15:36.018486 containerd[1576]: time="2025-08-19T08:15:36.018449754Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 08:15:36.536358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64567839.mount: Deactivated successfully. 
Aug 19 08:15:36.544458 containerd[1576]: time="2025-08-19T08:15:36.544371890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:15:36.545721 containerd[1576]: time="2025-08-19T08:15:36.545634971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Aug 19 08:15:36.547450 containerd[1576]: time="2025-08-19T08:15:36.547371802Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:15:36.550884 containerd[1576]: time="2025-08-19T08:15:36.550824195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:15:36.552432 containerd[1576]: time="2025-08-19T08:15:36.551869454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.370297ms" Aug 19 08:15:36.552432 containerd[1576]: time="2025-08-19T08:15:36.551913291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 19 08:15:36.552596 containerd[1576]: time="2025-08-19T08:15:36.552567302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 19 08:15:36.919627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199666514.mount: Deactivated successfully. 
Aug 19 08:15:39.249797 containerd[1576]: time="2025-08-19T08:15:39.249697259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:39.251529 containerd[1576]: time="2025-08-19T08:15:39.251481962Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56918218" Aug 19 08:15:39.253386 containerd[1576]: time="2025-08-19T08:15:39.253296286Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:39.258426 containerd[1576]: time="2025-08-19T08:15:39.258335114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:39.260088 containerd[1576]: time="2025-08-19T08:15:39.259862352Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.707245766s" Aug 19 08:15:39.260088 containerd[1576]: time="2025-08-19T08:15:39.259921931Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 19 08:15:42.955037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:42.955333 systemd[1]: kubelet.service: Consumed 244ms CPU time, 110.2M memory peak. Aug 19 08:15:42.958809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:43.000862 systemd[1]: Reload requested from client PID 2289 ('systemctl') (unit session-7.scope)... Aug 19 08:15:43.000889 systemd[1]: Reloading... Aug 19 08:15:43.158008 zram_generator::config[2330]: No configuration found. Aug 19 08:15:43.534230 systemd[1]: Reloading finished in 532 ms. Aug 19 08:15:43.620030 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 08:15:43.620181 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 08:15:43.620573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:43.620656 systemd[1]: kubelet.service: Consumed 167ms CPU time, 98.3M memory peak. Aug 19 08:15:43.623851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:44.189848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:44.205533 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:15:44.263061 kubelet[2385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:15:44.263061 kubelet[2385]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
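Each "Pulled image ... size ... in ..." entry above already carries the transferred size and the wall-clock pull time (scheduler: 20385639 bytes in 1.276526243s; etcd: 56909194 bytes in 2.707245766s; pause: 320368 bytes in 533.370297ms). A small Python sketch that turns those figures into effective throughput; the sample strings are shortened copies of the logged messages, and whether "size" reflects compressed transfer or unpacked content is not stated in the log:

import re

PULLED = re.compile(r'size \\?"(\d+)\\?" in ([\d.]+)(ms|s)\b')

samples = [
    'Pulled image "registry.k8s.io/kube-scheduler:v1.31.12" ... size "20385639" in 1.276526243s',
    'Pulled image "registry.k8s.io/etcd:3.5.15-0" ... size "56909194" in 2.707245766s',
    'Pulled image "registry.k8s.io/pause:3.10" ... size "320368" in 533.370297ms',
]

for line in samples:
    m = PULLED.search(line)
    if not m:
        continue
    size, value, unit = int(m.group(1)), float(m.group(2)), m.group(3)
    seconds = value / 1000.0 if unit == "ms" else value
    print(f"{size:>10} bytes in {seconds:.3f}s -> {size / seconds / 1e6:5.1f} MB/s")
# roughly 16.0, 21.0 and 0.6 MB/s for scheduler, etcd and pause respectively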
Aug 19 08:15:44.263061 kubelet[2385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:15:44.266496 kubelet[2385]: I0819 08:15:44.263171 2385 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:15:44.833890 kubelet[2385]: I0819 08:15:44.833804 2385 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 08:15:44.833890 kubelet[2385]: I0819 08:15:44.833843 2385 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:15:44.834310 kubelet[2385]: I0819 08:15:44.834271 2385 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 08:15:44.880980 kubelet[2385]: E0819 08:15:44.880880 2385 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:44.882836 kubelet[2385]: I0819 08:15:44.882617 2385 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:15:44.897099 kubelet[2385]: I0819 08:15:44.897051 2385 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:15:44.904970 kubelet[2385]: I0819 08:15:44.904931 2385 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 08:15:44.905378 kubelet[2385]: I0819 08:15:44.905356 2385 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 08:15:44.905752 kubelet[2385]: I0819 08:15:44.905675 2385 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:15:44.906300 kubelet[2385]: I0819 08:15:44.905864 2385 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:15:44.906593 kubelet[2385]: I0819 08:15:44.906573 2385 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:15:44.906679 kubelet[2385]: I0819 08:15:44.906668 2385 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 08:15:44.906947 kubelet[2385]: I0819 08:15:44.906928 2385 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:15:44.913661 kubelet[2385]: I0819 08:15:44.913600 2385 kubelet.go:408] "Attempting to sync node with API server" Aug 19 08:15:44.913661 kubelet[2385]: I0819 08:15:44.913661 2385 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:15:44.913851 kubelet[2385]: I0819 08:15:44.913728 2385 kubelet.go:314] "Adding apiserver pod source" Aug 19 08:15:44.913851 kubelet[2385]: I0819 08:15:44.913776 2385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:15:44.921773 kubelet[2385]: W0819 08:15:44.919694 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Aug 19 08:15:44.921773 kubelet[2385]: E0819 08:15:44.919840 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:44.922022 kubelet[2385]: I0819 08:15:44.921996 2385 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:15:44.922799 kubelet[2385]: I0819 08:15:44.922767 2385 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:15:44.922887 kubelet[2385]: W0819 08:15:44.922876 2385 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 19 08:15:44.926212 kubelet[2385]: I0819 08:15:44.925944 2385 server.go:1274] "Started kubelet" Aug 19 08:15:44.929691 kubelet[2385]: W0819 08:15:44.928975 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Aug 19 08:15:44.929691 kubelet[2385]: E0819 08:15:44.929064 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:44.929691 kubelet[2385]: I0819 08:15:44.929132 2385 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:15:44.932355 kubelet[2385]: I0819 08:15:44.932306 2385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:15:44.938070 kubelet[2385]: I0819 08:15:44.937997 2385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:15:44.938581 kubelet[2385]: I0819 08:15:44.938545 2385 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:15:44.942890 kubelet[2385]: I0819 08:15:44.942859 2385 server.go:449] "Adding debug handlers to kubelet server" Aug 19 08:15:44.945447 kubelet[2385]: E0819 08:15:44.942547 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal.185d1d06d50ecb74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,UID:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,},FirstTimestamp:2025-08-19 08:15:44.925887348 +0000 UTC m=+0.714274700,LastTimestamp:2025-08-19 08:15:44.925887348 +0000 UTC m=+0.714274700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,}" Aug 19 08:15:44.947311 kubelet[2385]: I0819 08:15:44.945975 2385 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:15:44.947560 kubelet[2385]: I0819 08:15:44.947541 2385 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 08:15:44.948144 kubelet[2385]: E0819 08:15:44.948118 2385 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" not found" Aug 19 08:15:44.949960 kubelet[2385]: E0819 08:15:44.949910 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="200ms" Aug 19 08:15:44.951348 kubelet[2385]: I0819 08:15:44.950080 2385 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 08:15:44.951348 kubelet[2385]: I0819 08:15:44.950270 2385 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:15:44.951348 kubelet[2385]: I0819 08:15:44.951191 2385 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:15:44.951560 kubelet[2385]: I0819 08:15:44.951456 2385 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:15:44.952245 kubelet[2385]: W0819 08:15:44.952020 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Aug 19 08:15:44.952469 kubelet[2385]: E0819 08:15:44.952218 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:44.953722 kubelet[2385]: E0819 08:15:44.953681 2385 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:15:44.956875 kubelet[2385]: I0819 08:15:44.956844 2385 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:15:44.980284 kubelet[2385]: I0819 08:15:44.980209 2385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:15:44.989689 kubelet[2385]: I0819 08:15:44.989628 2385 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 19 08:15:44.989689 kubelet[2385]: I0819 08:15:44.989674 2385 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 08:15:44.989896 kubelet[2385]: I0819 08:15:44.989712 2385 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 08:15:44.989896 kubelet[2385]: E0819 08:15:44.989802 2385 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:15:44.997359 kubelet[2385]: W0819 08:15:44.997049 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Aug 19 08:15:44.997359 kubelet[2385]: E0819 08:15:44.997148 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:44.999931 kubelet[2385]: I0819 08:15:44.999808 2385 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 08:15:44.999931 kubelet[2385]: I0819 08:15:44.999833 2385 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 08:15:44.999931 kubelet[2385]: I0819 08:15:44.999859 2385 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:15:45.003263 kubelet[2385]: I0819 08:15:45.003224 2385 policy_none.go:49] "None policy: Start" Aug 19 08:15:45.004054 kubelet[2385]: I0819 08:15:45.004021 2385 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 08:15:45.004054 kubelet[2385]: I0819 08:15:45.004055 2385 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:15:45.014606 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 08:15:45.028808 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 08:15:45.034830 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 19 08:15:45.046999 kubelet[2385]: I0819 08:15:45.046961 2385 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:15:45.047879 kubelet[2385]: I0819 08:15:45.047857 2385 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:15:45.048158 kubelet[2385]: I0819 08:15:45.048103 2385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:15:45.048720 kubelet[2385]: I0819 08:15:45.048697 2385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:15:45.052526 kubelet[2385]: E0819 08:15:45.052491 2385 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" not found" Aug 19 08:15:45.119396 systemd[1]: Created slice kubepods-burstable-pod1a76709f17d2d40cfeb3a8e9ff1efaef.slice - libcontainer container kubepods-burstable-pod1a76709f17d2d40cfeb3a8e9ff1efaef.slice. Aug 19 08:15:45.136342 systemd[1]: Created slice kubepods-burstable-pod1eb3a0991ed0fb60c17b325e4855e5a3.slice - libcontainer container kubepods-burstable-pod1eb3a0991ed0fb60c17b325e4855e5a3.slice. 
Aug 19 08:15:45.150352 systemd[1]: Created slice kubepods-burstable-pod020e8048087c8c7d3f3fa4ee984daed0.slice - libcontainer container kubepods-burstable-pod020e8048087c8c7d3f3fa4ee984daed0.slice. Aug 19 08:15:45.153111 kubelet[2385]: E0819 08:15:45.153003 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="400ms" Aug 19 08:15:45.157048 kubelet[2385]: I0819 08:15:45.156989 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.157595 kubelet[2385]: E0819 08:15:45.157531 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.252624 kubelet[2385]: I0819 08:15:45.252541 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-k8s-certs\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.252624 kubelet[2385]: I0819 08:15:45.252619 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-kubeconfig\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.252952 kubelet[2385]: I0819 08:15:45.252651 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1eb3a0991ed0fb60c17b325e4855e5a3-kubeconfig\") pod \"kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1eb3a0991ed0fb60c17b325e4855e5a3\") " pod="kube-system/kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.252952 kubelet[2385]: I0819 08:15:45.252678 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/020e8048087c8c7d3f3fa4ee984daed0-k8s-certs\") pod \"kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"020e8048087c8c7d3f3fa4ee984daed0\") " pod="kube-system/kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.252952 kubelet[2385]: I0819 08:15:45.252708 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-ca-certs\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.252952 
kubelet[2385]: I0819 08:15:45.252778 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-flexvolume-dir\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.253127 kubelet[2385]: I0819 08:15:45.252813 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.253127 kubelet[2385]: I0819 08:15:45.252841 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/020e8048087c8c7d3f3fa4ee984daed0-ca-certs\") pod \"kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"020e8048087c8c7d3f3fa4ee984daed0\") " pod="kube-system/kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.253127 kubelet[2385]: I0819 08:15:45.252872 2385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/020e8048087c8c7d3f3fa4ee984daed0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"020e8048087c8c7d3f3fa4ee984daed0\") " pod="kube-system/kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.364271 kubelet[2385]: I0819 08:15:45.364182 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.365023 kubelet[2385]: E0819 08:15:45.364788 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.433024 containerd[1576]: time="2025-08-19T08:15:45.432848318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,Uid:1a76709f17d2d40cfeb3a8e9ff1efaef,Namespace:kube-system,Attempt:0,}" Aug 19 08:15:45.447868 containerd[1576]: time="2025-08-19T08:15:45.447797465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,Uid:1eb3a0991ed0fb60c17b325e4855e5a3,Namespace:kube-system,Attempt:0,}" Aug 19 08:15:45.470828 containerd[1576]: time="2025-08-19T08:15:45.470413126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,Uid:020e8048087c8c7d3f3fa4ee984daed0,Namespace:kube-system,Attempt:0,}" Aug 19 08:15:45.473052 containerd[1576]: time="2025-08-19T08:15:45.472984628Z" level=info msg="connecting to shim 66a4c7d13c0c2515f64c3b77a860f601b9d3e67a450dc37a990dfaea4dc05eea" 
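Each VerifyControllerAttachedVolume entry above names a host-path volume together with the static pod (and its UID) it belongs to, so the per-pod volume layout can be reassembled straight from these lines. A sketch over two shortened entries; the regex and grouping are my own:

import re
from collections import defaultdict

VOLUME = re.compile(
    r'started for volume \\?"(?P<vol>[^"\\]+)\\?" .*? pod \\?"(?P<pod>[^"\\]+)\\?"'
)

entries = [
    'operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" '
    '(UniqueName: "kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-k8s-certs") '
    'pod "kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal"',
    'operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" '
    '(UniqueName: "kubernetes.io/host-path/1eb3a0991ed0fb60c17b325e4855e5a3-kubeconfig") '
    'pod "kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal"',
]

volumes_by_pod = defaultdict(list)
for line in entries:
    m = VOLUME.search(line)
    if m:
        volumes_by_pod[m.group("pod")].append(m.group("vol"))

for pod, vols in volumes_by_pod.items():
    print(pod.split("-ci-")[0], "->", ", ".join(vols))
# kube-controller-manager -> k8s-certs
# kube-scheduler -> kubeconfig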
address="unix:///run/containerd/s/d57a802eeadddf98768ee6b92334f4ec4f8dcc7d8e4d1ba66518fd4dd2843107" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:15:45.528086 systemd[1]: Started cri-containerd-66a4c7d13c0c2515f64c3b77a860f601b9d3e67a450dc37a990dfaea4dc05eea.scope - libcontainer container 66a4c7d13c0c2515f64c3b77a860f601b9d3e67a450dc37a990dfaea4dc05eea. Aug 19 08:15:45.538991 containerd[1576]: time="2025-08-19T08:15:45.538823668Z" level=info msg="connecting to shim 46c9a31560701f4830eb247fa88357dee62092f05109db7b2b25cbf09f3347e8" address="unix:///run/containerd/s/2e41bd1349de3abbd2039aa0e83bab330ff2929bf5c7a5f73d22974d84b18e08" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:15:45.550891 kubelet[2385]: E0819 08:15:45.547057 2385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal.185d1d06d50ecb74 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,UID:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,},FirstTimestamp:2025-08-19 08:15:44.925887348 +0000 UTC m=+0.714274700,LastTimestamp:2025-08-19 08:15:44.925887348 +0000 UTC m=+0.714274700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,}" Aug 19 08:15:45.558819 kubelet[2385]: E0819 08:15:45.556988 2385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.35:6443: connect: connection refused" interval="800ms" Aug 19 08:15:45.589057 containerd[1576]: time="2025-08-19T08:15:45.588684787Z" level=info msg="connecting to shim 6fe2bb2d99b9335ca1a04422694c4303e6e10fce2197e52e1fad273d0da64061" address="unix:///run/containerd/s/723610470ce29a2a21f3be108be0a6144c9dbae392dbb63a82321cbeced27bfc" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:15:45.617246 systemd[1]: Started cri-containerd-46c9a31560701f4830eb247fa88357dee62092f05109db7b2b25cbf09f3347e8.scope - libcontainer container 46c9a31560701f4830eb247fa88357dee62092f05109db7b2b25cbf09f3347e8. Aug 19 08:15:45.669306 systemd[1]: Started cri-containerd-6fe2bb2d99b9335ca1a04422694c4303e6e10fce2197e52e1fad273d0da64061.scope - libcontainer container 6fe2bb2d99b9335ca1a04422694c4303e6e10fce2197e52e1fad273d0da64061. 
Aug 19 08:15:45.683018 containerd[1576]: time="2025-08-19T08:15:45.682952725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,Uid:1a76709f17d2d40cfeb3a8e9ff1efaef,Namespace:kube-system,Attempt:0,} returns sandbox id \"66a4c7d13c0c2515f64c3b77a860f601b9d3e67a450dc37a990dfaea4dc05eea\"" Aug 19 08:15:45.689669 kubelet[2385]: E0819 08:15:45.688394 2385 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flat" Aug 19 08:15:45.696795 containerd[1576]: time="2025-08-19T08:15:45.695661298Z" level=info msg="CreateContainer within sandbox \"66a4c7d13c0c2515f64c3b77a860f601b9d3e67a450dc37a990dfaea4dc05eea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 08:15:45.716574 containerd[1576]: time="2025-08-19T08:15:45.715928473Z" level=info msg="Container 8a232206d03c9defa84fcf9944d99d8d026db1d025aaa0fc5c9a3df715bce06b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:15:45.730221 containerd[1576]: time="2025-08-19T08:15:45.730145224Z" level=info msg="CreateContainer within sandbox \"66a4c7d13c0c2515f64c3b77a860f601b9d3e67a450dc37a990dfaea4dc05eea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a232206d03c9defa84fcf9944d99d8d026db1d025aaa0fc5c9a3df715bce06b\"" Aug 19 08:15:45.733766 containerd[1576]: time="2025-08-19T08:15:45.732632952Z" level=info msg="StartContainer for \"8a232206d03c9defa84fcf9944d99d8d026db1d025aaa0fc5c9a3df715bce06b\"" Aug 19 08:15:45.737229 containerd[1576]: time="2025-08-19T08:15:45.737052730Z" level=info msg="connecting to shim 8a232206d03c9defa84fcf9944d99d8d026db1d025aaa0fc5c9a3df715bce06b" address="unix:///run/containerd/s/d57a802eeadddf98768ee6b92334f4ec4f8dcc7d8e4d1ba66518fd4dd2843107" protocol=ttrpc version=3 Aug 19 08:15:45.774784 kubelet[2385]: I0819 08:15:45.774046 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.775223 kubelet[2385]: E0819 08:15:45.775171 2385 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.35:6443/api/v1/nodes\": dial tcp 10.128.0.35:6443: connect: connection refused" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:45.791271 systemd[1]: Started cri-containerd-8a232206d03c9defa84fcf9944d99d8d026db1d025aaa0fc5c9a3df715bce06b.scope - libcontainer container 8a232206d03c9defa84fcf9944d99d8d026db1d025aaa0fc5c9a3df715bce06b. 
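The "Hostname for pod was too long, truncated it" warnings above come from the 63-character DNS-label limit (hostnameMaxLen=63 in the log): the full static pod name for this node is longer than that, so kubelet keeps only a 63-character prefix. A quick check against the exact values printed in the log:

full = "kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal"
logged_truncated = "kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flat"

HOSTNAME_MAX_LEN = 63  # matches hostnameMaxLen=63 in the kubelet warning

print(len(full))                                     # 82 characters, over the limit
print(full[:HOSTNAME_MAX_LEN] == logged_truncated)   # True: plain prefix truncation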
Aug 19 08:15:45.807681 containerd[1576]: time="2025-08-19T08:15:45.807607095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,Uid:1eb3a0991ed0fb60c17b325e4855e5a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"46c9a31560701f4830eb247fa88357dee62092f05109db7b2b25cbf09f3347e8\"" Aug 19 08:15:45.811550 kubelet[2385]: E0819 08:15:45.811052 2385 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-21291" Aug 19 08:15:45.813351 containerd[1576]: time="2025-08-19T08:15:45.813305957Z" level=info msg="CreateContainer within sandbox \"46c9a31560701f4830eb247fa88357dee62092f05109db7b2b25cbf09f3347e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 08:15:45.817434 containerd[1576]: time="2025-08-19T08:15:45.817336876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal,Uid:020e8048087c8c7d3f3fa4ee984daed0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fe2bb2d99b9335ca1a04422694c4303e6e10fce2197e52e1fad273d0da64061\"" Aug 19 08:15:45.820320 kubelet[2385]: E0819 08:15:45.820239 2385 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-21291" Aug 19 08:15:45.823794 containerd[1576]: time="2025-08-19T08:15:45.823646873Z" level=info msg="CreateContainer within sandbox \"6fe2bb2d99b9335ca1a04422694c4303e6e10fce2197e52e1fad273d0da64061\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 08:15:45.832991 containerd[1576]: time="2025-08-19T08:15:45.832470194Z" level=info msg="Container 9b651f9ca868e4157297ab764d79f4b19e76fe47d6ee628498df5c8e4a35ff81: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:15:45.854775 containerd[1576]: time="2025-08-19T08:15:45.853850969Z" level=info msg="Container ea70e2630859df2e2f7a253c178472ed5629394b5ed6eaeb546105337563fda4: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:15:45.855516 containerd[1576]: time="2025-08-19T08:15:45.855297621Z" level=info msg="CreateContainer within sandbox \"46c9a31560701f4830eb247fa88357dee62092f05109db7b2b25cbf09f3347e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b651f9ca868e4157297ab764d79f4b19e76fe47d6ee628498df5c8e4a35ff81\"" Aug 19 08:15:45.857778 containerd[1576]: time="2025-08-19T08:15:45.857342240Z" level=info msg="StartContainer for \"9b651f9ca868e4157297ab764d79f4b19e76fe47d6ee628498df5c8e4a35ff81\"" Aug 19 08:15:45.859582 containerd[1576]: time="2025-08-19T08:15:45.859475394Z" level=info msg="connecting to shim 9b651f9ca868e4157297ab764d79f4b19e76fe47d6ee628498df5c8e4a35ff81" address="unix:///run/containerd/s/2e41bd1349de3abbd2039aa0e83bab330ff2929bf5c7a5f73d22974d84b18e08" protocol=ttrpc version=3 Aug 19 08:15:45.870607 containerd[1576]: time="2025-08-19T08:15:45.870486705Z" level=info msg="CreateContainer within sandbox \"6fe2bb2d99b9335ca1a04422694c4303e6e10fce2197e52e1fad273d0da64061\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ea70e2630859df2e2f7a253c178472ed5629394b5ed6eaeb546105337563fda4\"" Aug 19 08:15:45.872412 containerd[1576]: 
time="2025-08-19T08:15:45.872363833Z" level=info msg="StartContainer for \"ea70e2630859df2e2f7a253c178472ed5629394b5ed6eaeb546105337563fda4\"" Aug 19 08:15:45.876754 containerd[1576]: time="2025-08-19T08:15:45.876619793Z" level=info msg="connecting to shim ea70e2630859df2e2f7a253c178472ed5629394b5ed6eaeb546105337563fda4" address="unix:///run/containerd/s/723610470ce29a2a21f3be108be0a6144c9dbae392dbb63a82321cbeced27bfc" protocol=ttrpc version=3 Aug 19 08:15:45.912520 systemd[1]: Started cri-containerd-9b651f9ca868e4157297ab764d79f4b19e76fe47d6ee628498df5c8e4a35ff81.scope - libcontainer container 9b651f9ca868e4157297ab764d79f4b19e76fe47d6ee628498df5c8e4a35ff81. Aug 19 08:15:45.933166 systemd[1]: Started cri-containerd-ea70e2630859df2e2f7a253c178472ed5629394b5ed6eaeb546105337563fda4.scope - libcontainer container ea70e2630859df2e2f7a253c178472ed5629394b5ed6eaeb546105337563fda4. Aug 19 08:15:45.954334 containerd[1576]: time="2025-08-19T08:15:45.953286871Z" level=info msg="StartContainer for \"8a232206d03c9defa84fcf9944d99d8d026db1d025aaa0fc5c9a3df715bce06b\" returns successfully" Aug 19 08:15:46.065512 kubelet[2385]: W0819 08:15:46.065399 2385 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.35:6443: connect: connection refused Aug 19 08:15:46.065706 kubelet[2385]: E0819 08:15:46.065527 2385 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.35:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:46.084779 containerd[1576]: time="2025-08-19T08:15:46.084677275Z" level=info msg="StartContainer for \"ea70e2630859df2e2f7a253c178472ed5629394b5ed6eaeb546105337563fda4\" returns successfully" Aug 19 08:15:46.148932 containerd[1576]: time="2025-08-19T08:15:46.148871863Z" level=info msg="StartContainer for \"9b651f9ca868e4157297ab764d79f4b19e76fe47d6ee628498df5c8e4a35ff81\" returns successfully" Aug 19 08:15:46.605485 kubelet[2385]: I0819 08:15:46.605424 2385 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:49.180523 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 19 08:15:49.468900 kubelet[2385]: I0819 08:15:49.467324 2385 kubelet_node_status.go:75] "Successfully registered node" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:49.468900 kubelet[2385]: E0819 08:15:49.467387 2385 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\": node \"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" not found" Aug 19 08:15:49.923780 kubelet[2385]: I0819 08:15:49.923649 2385 apiserver.go:52] "Watching apiserver" Aug 19 08:15:49.952216 kubelet[2385]: I0819 08:15:49.952117 2385 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 08:15:51.184198 systemd[1]: Reload requested from client PID 2663 ('systemctl') (unit session-7.scope)... Aug 19 08:15:51.184223 systemd[1]: Reloading... 
Aug 19 08:15:51.346815 zram_generator::config[2703]: No configuration found. Aug 19 08:15:51.692493 systemd[1]: Reloading finished in 507 ms. Aug 19 08:15:51.735261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:51.751587 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 08:15:51.752038 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:51.752160 systemd[1]: kubelet.service: Consumed 1.272s CPU time, 128.2M memory peak. Aug 19 08:15:51.755676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:52.105808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:52.119472 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:15:52.208873 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:15:52.208873 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 19 08:15:52.208873 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:15:52.211237 kubelet[2755]: I0819 08:15:52.209821 2755 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:15:52.231508 kubelet[2755]: I0819 08:15:52.227700 2755 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 08:15:52.231508 kubelet[2755]: I0819 08:15:52.227770 2755 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:15:52.231508 kubelet[2755]: I0819 08:15:52.228541 2755 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 08:15:52.233053 kubelet[2755]: I0819 08:15:52.233017 2755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 19 08:15:52.237698 kubelet[2755]: I0819 08:15:52.237656 2755 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:15:52.246304 kubelet[2755]: I0819 08:15:52.246256 2755 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:15:52.250731 kubelet[2755]: I0819 08:15:52.250675 2755 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
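After the reload, kubelet starts with client certificate rotation on and loads its pair from /var/lib/kubelet/pki/kubelet-client-current.pem. A hedged sketch for checking when that certificate expires by shelling out to openssl; this assumes openssl is installed on the node and that the certificate is the first PEM block in that combined cert/key file:

import subprocess

CERT = "/var/lib/kubelet/pki/kubelet-client-current.pem"

# `openssl x509 -noout -enddate` prints a line like: notAfter=Aug 19 08:15:44 2026 GMT
result = subprocess.run(
    ["openssl", "x509", "-in", CERT, "-noout", "-enddate", "-subject"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())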
defaulting to /" Aug 19 08:15:52.250939 kubelet[2755]: I0819 08:15:52.250894 2755 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 08:15:52.251205 kubelet[2755]: I0819 08:15:52.251131 2755 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:15:52.252591 kubelet[2755]: I0819 08:15:52.251202 2755 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:15:52.252591 kubelet[2755]: I0819 08:15:52.251524 2755 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:15:52.252591 kubelet[2755]: I0819 08:15:52.251544 2755 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 08:15:52.252591 kubelet[2755]: I0819 08:15:52.251589 2755 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:15:52.252928 kubelet[2755]: I0819 08:15:52.251786 2755 kubelet.go:408] "Attempting to sync node with API server" Aug 19 08:15:52.252928 kubelet[2755]: I0819 08:15:52.251808 2755 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:15:52.252928 kubelet[2755]: I0819 08:15:52.251860 2755 kubelet.go:314] "Adding apiserver pod source" Aug 19 08:15:52.252928 kubelet[2755]: I0819 08:15:52.251877 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:15:52.254181 kubelet[2755]: I0819 08:15:52.254147 2755 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:15:52.255203 kubelet[2755]: I0819 08:15:52.255109 2755 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:15:52.260767 kubelet[2755]: I0819 08:15:52.257350 2755 server.go:1274] "Started kubelet" Aug 19 08:15:52.266767 kubelet[2755]: I0819 08:15:52.265526 2755 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Aug 19 08:15:52.266767 kubelet[2755]: I0819 08:15:52.266015 2755 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:15:52.266767 kubelet[2755]: I0819 08:15:52.266111 2755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:15:52.269147 kubelet[2755]: I0819 08:15:52.269113 2755 server.go:449] "Adding debug handlers to kubelet server" Aug 19 08:15:52.274885 kubelet[2755]: I0819 08:15:52.272598 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:15:52.292707 kubelet[2755]: I0819 08:15:52.292546 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:15:52.300573 kubelet[2755]: I0819 08:15:52.300529 2755 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 08:15:52.301279 kubelet[2755]: E0819 08:15:52.301247 2755 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" not found" Aug 19 08:15:52.315573 kubelet[2755]: I0819 08:15:52.315522 2755 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 08:15:52.329881 kubelet[2755]: I0819 08:15:52.329576 2755 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:15:52.333864 kubelet[2755]: I0819 08:15:52.331785 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:15:52.338907 kubelet[2755]: I0819 08:15:52.338842 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 08:15:52.340096 kubelet[2755]: I0819 08:15:52.340057 2755 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 08:15:52.340352 kubelet[2755]: I0819 08:15:52.340264 2755 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 08:15:52.340563 kubelet[2755]: E0819 08:15:52.340525 2755 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:15:52.346383 kubelet[2755]: I0819 08:15:52.346349 2755 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:15:52.346718 kubelet[2755]: I0819 08:15:52.346689 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:15:52.351241 kubelet[2755]: I0819 08:15:52.350180 2755 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:15:52.351490 kubelet[2755]: E0819 08:15:52.351398 2755 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:15:52.428971 kubelet[2755]: I0819 08:15:52.428398 2755 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 08:15:52.428971 kubelet[2755]: I0819 08:15:52.428510 2755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 08:15:52.428971 kubelet[2755]: I0819 08:15:52.428555 2755 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:15:52.428971 kubelet[2755]: I0819 08:15:52.428824 2755 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 08:15:52.428971 kubelet[2755]: I0819 08:15:52.428843 2755 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 08:15:52.428971 kubelet[2755]: I0819 08:15:52.428876 2755 policy_none.go:49] "None policy: Start" Aug 19 08:15:52.431422 kubelet[2755]: I0819 08:15:52.431389 2755 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 08:15:52.431520 kubelet[2755]: I0819 08:15:52.431429 2755 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:15:52.431881 kubelet[2755]: I0819 08:15:52.431674 2755 state_mem.go:75] "Updated machine memory state" Aug 19 08:15:52.441042 kubelet[2755]: E0819 08:15:52.441007 2755 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:15:52.447362 kubelet[2755]: I0819 08:15:52.446866 2755 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:15:52.447362 kubelet[2755]: I0819 08:15:52.447125 2755 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:15:52.447362 kubelet[2755]: I0819 08:15:52.447143 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:15:52.447776 kubelet[2755]: I0819 08:15:52.447703 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:15:52.568080 kubelet[2755]: I0819 08:15:52.567834 2755 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.580123 kubelet[2755]: I0819 08:15:52.578905 2755 kubelet_node_status.go:111] "Node was previously registered" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.580123 kubelet[2755]: I0819 08:15:52.579127 2755 kubelet_node_status.go:75] "Successfully registered node" node="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.656216 kubelet[2755]: W0819 08:15:52.656056 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 19 08:15:52.658341 kubelet[2755]: W0819 08:15:52.656940 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 19 08:15:52.658992 kubelet[2755]: W0819 08:15:52.658963 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 19 08:15:52.733201 kubelet[2755]: I0819 08:15:52.732857 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/020e8048087c8c7d3f3fa4ee984daed0-k8s-certs\") pod \"kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"020e8048087c8c7d3f3fa4ee984daed0\") " pod="kube-system/kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.733201 kubelet[2755]: I0819 08:15:52.732939 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-flexvolume-dir\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.733201 kubelet[2755]: I0819 08:15:52.732991 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/020e8048087c8c7d3f3fa4ee984daed0-ca-certs\") pod \"kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"020e8048087c8c7d3f3fa4ee984daed0\") " pod="kube-system/kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.733201 kubelet[2755]: I0819 08:15:52.733071 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-ca-certs\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.733562 kubelet[2755]: I0819 08:15:52.733124 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-k8s-certs\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.733562 kubelet[2755]: I0819 08:15:52.733162 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-kubeconfig\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.733942 kubelet[2755]: I0819 08:15:52.733196 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a76709f17d2d40cfeb3a8e9ff1efaef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1a76709f17d2d40cfeb3a8e9ff1efaef\") " pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.733942 kubelet[2755]: I0819 08:15:52.733818 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/1eb3a0991ed0fb60c17b325e4855e5a3-kubeconfig\") pod \"kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"1eb3a0991ed0fb60c17b325e4855e5a3\") " pod="kube-system/kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:52.733942 kubelet[2755]: I0819 08:15:52.733855 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/020e8048087c8c7d3f3fa4ee984daed0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" (UID: \"020e8048087c8c7d3f3fa4ee984daed0\") " pod="kube-system/kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:53.253704 kubelet[2755]: I0819 08:15:53.253574 2755 apiserver.go:52] "Watching apiserver" Aug 19 08:15:53.316837 kubelet[2755]: I0819 08:15:53.316719 2755 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 08:15:53.401318 kubelet[2755]: W0819 08:15:53.401268 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 19 08:15:53.401522 kubelet[2755]: E0819 08:15:53.401376 2755 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:15:53.435391 kubelet[2755]: I0819 08:15:53.434857 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" podStartSLOduration=1.434829019 podStartE2EDuration="1.434829019s" podCreationTimestamp="2025-08-19 08:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:15:53.433797298 +0000 UTC m=+1.305645264" watchObservedRunningTime="2025-08-19 08:15:53.434829019 +0000 UTC m=+1.306676986" Aug 19 08:15:53.450453 kubelet[2755]: I0819 08:15:53.450345 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" podStartSLOduration=1.44995388 podStartE2EDuration="1.44995388s" podCreationTimestamp="2025-08-19 08:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:15:53.449185759 +0000 UTC m=+1.321033723" watchObservedRunningTime="2025-08-19 08:15:53.44995388 +0000 UTC m=+1.321801845" Aug 19 08:15:53.466686 kubelet[2755]: I0819 08:15:53.466439 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" podStartSLOduration=1.466410314 podStartE2EDuration="1.466410314s" podCreationTimestamp="2025-08-19 08:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:15:53.466138834 +0000 UTC m=+1.337986803" watchObservedRunningTime="2025-08-19 08:15:53.466410314 +0000 UTC m=+1.338258281" Aug 19 08:15:58.575489 kubelet[2755]: I0819 08:15:58.575434 2755 
kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 08:15:58.577198 containerd[1576]: time="2025-08-19T08:15:58.577137022Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 08:15:58.578937 kubelet[2755]: I0819 08:15:58.578214 2755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 08:15:59.459884 systemd[1]: Created slice kubepods-besteffort-pod531c05dc_f081_4edf_943c_8bbd7866aa3d.slice - libcontainer container kubepods-besteffort-pod531c05dc_f081_4edf_943c_8bbd7866aa3d.slice. Aug 19 08:15:59.477615 kubelet[2755]: I0819 08:15:59.477212 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/531c05dc-f081-4edf-943c-8bbd7866aa3d-kube-proxy\") pod \"kube-proxy-kkh8w\" (UID: \"531c05dc-f081-4edf-943c-8bbd7866aa3d\") " pod="kube-system/kube-proxy-kkh8w" Aug 19 08:15:59.477615 kubelet[2755]: I0819 08:15:59.477271 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsgvr\" (UniqueName: \"kubernetes.io/projected/531c05dc-f081-4edf-943c-8bbd7866aa3d-kube-api-access-rsgvr\") pod \"kube-proxy-kkh8w\" (UID: \"531c05dc-f081-4edf-943c-8bbd7866aa3d\") " pod="kube-system/kube-proxy-kkh8w" Aug 19 08:15:59.477615 kubelet[2755]: I0819 08:15:59.477305 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/531c05dc-f081-4edf-943c-8bbd7866aa3d-lib-modules\") pod \"kube-proxy-kkh8w\" (UID: \"531c05dc-f081-4edf-943c-8bbd7866aa3d\") " pod="kube-system/kube-proxy-kkh8w" Aug 19 08:15:59.477615 kubelet[2755]: I0819 08:15:59.477336 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/531c05dc-f081-4edf-943c-8bbd7866aa3d-xtables-lock\") pod \"kube-proxy-kkh8w\" (UID: \"531c05dc-f081-4edf-943c-8bbd7866aa3d\") " pod="kube-system/kube-proxy-kkh8w" Aug 19 08:15:59.590095 systemd[1]: Created slice kubepods-besteffort-pod89758a6e_68c1_45be_8224_cc32e8bbabdd.slice - libcontainer container kubepods-besteffort-pod89758a6e_68c1_45be_8224_cc32e8bbabdd.slice. 
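The kubelet entry above ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24") is a CRI UpdateRuntimeConfig call against containerd, which then notes that no CNI config template is present and waits for another component (here, Calico) to drop one. Below is a minimal sketch of the same call using the cri-api Go client, assuming containerd's default socket path; it is an illustration, not the kubelet's actual code.

```go
// Sketch only: push a pod CIDR to the container runtime over CRI, mirroring
// the kubelet's "Updating runtime config through cri with podcidr" step.
// Socket path and CIDR come from the log above; error handling is minimal.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Equivalent of the kubelet's UpdateRuntimeConfig with the node's pod CIDR.
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("runtime config updated with pod CIDR 192.168.0.0/24")
}
```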
Aug 19 08:15:59.678638 kubelet[2755]: I0819 08:15:59.678548 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2ks\" (UniqueName: \"kubernetes.io/projected/89758a6e-68c1-45be-8224-cc32e8bbabdd-kube-api-access-kc2ks\") pod \"tigera-operator-5bf8dfcb4-jq7zv\" (UID: \"89758a6e-68c1-45be-8224-cc32e8bbabdd\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-jq7zv" Aug 19 08:15:59.678638 kubelet[2755]: I0819 08:15:59.678640 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89758a6e-68c1-45be-8224-cc32e8bbabdd-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-jq7zv\" (UID: \"89758a6e-68c1-45be-8224-cc32e8bbabdd\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-jq7zv" Aug 19 08:15:59.770558 containerd[1576]: time="2025-08-19T08:15:59.770387925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkh8w,Uid:531c05dc-f081-4edf-943c-8bbd7866aa3d,Namespace:kube-system,Attempt:0,}" Aug 19 08:15:59.817646 containerd[1576]: time="2025-08-19T08:15:59.817499170Z" level=info msg="connecting to shim 7c5aa6cc19a6d7d3324366b9291667669861d97c25e65b59ea364f7cb5cc6350" address="unix:///run/containerd/s/ecc7e2de0e60d0158ef7bfe51c656d31481917a0b3082aaa0e108f5c81f6f728" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:15:59.862039 systemd[1]: Started cri-containerd-7c5aa6cc19a6d7d3324366b9291667669861d97c25e65b59ea364f7cb5cc6350.scope - libcontainer container 7c5aa6cc19a6d7d3324366b9291667669861d97c25e65b59ea364f7cb5cc6350. Aug 19 08:15:59.898635 containerd[1576]: time="2025-08-19T08:15:59.898477023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkh8w,Uid:531c05dc-f081-4edf-943c-8bbd7866aa3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c5aa6cc19a6d7d3324366b9291667669861d97c25e65b59ea364f7cb5cc6350\"" Aug 19 08:15:59.899653 containerd[1576]: time="2025-08-19T08:15:59.899555940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-jq7zv,Uid:89758a6e-68c1-45be-8224-cc32e8bbabdd,Namespace:tigera-operator,Attempt:0,}" Aug 19 08:15:59.903940 containerd[1576]: time="2025-08-19T08:15:59.903882140Z" level=info msg="CreateContainer within sandbox \"7c5aa6cc19a6d7d3324366b9291667669861d97c25e65b59ea364f7cb5cc6350\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 08:15:59.935768 containerd[1576]: time="2025-08-19T08:15:59.934776938Z" level=info msg="Container 436aa38e7889c1ed502a8a1c88f38b26328de4b9543b83a22e675780aa57dc1a: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:15:59.939777 containerd[1576]: time="2025-08-19T08:15:59.939700646Z" level=info msg="connecting to shim 8c417c5798aa07eb72884137fa12ba6db5a4a8df69333fc276129ac6c7808ea0" address="unix:///run/containerd/s/a0b7b7df3e804475c0edc656124c28db5a8ef4c221b95fd9d819b8bdb085c64c" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:15:59.947056 containerd[1576]: time="2025-08-19T08:15:59.947005253Z" level=info msg="CreateContainer within sandbox \"7c5aa6cc19a6d7d3324366b9291667669861d97c25e65b59ea364f7cb5cc6350\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"436aa38e7889c1ed502a8a1c88f38b26328de4b9543b83a22e675780aa57dc1a\"" Aug 19 08:15:59.949025 containerd[1576]: time="2025-08-19T08:15:59.948970850Z" level=info msg="StartContainer for \"436aa38e7889c1ed502a8a1c88f38b26328de4b9543b83a22e675780aa57dc1a\"" Aug 19 08:15:59.951657 containerd[1576]: 
time="2025-08-19T08:15:59.951619036Z" level=info msg="connecting to shim 436aa38e7889c1ed502a8a1c88f38b26328de4b9543b83a22e675780aa57dc1a" address="unix:///run/containerd/s/ecc7e2de0e60d0158ef7bfe51c656d31481917a0b3082aaa0e108f5c81f6f728" protocol=ttrpc version=3 Aug 19 08:15:59.987981 systemd[1]: Started cri-containerd-8c417c5798aa07eb72884137fa12ba6db5a4a8df69333fc276129ac6c7808ea0.scope - libcontainer container 8c417c5798aa07eb72884137fa12ba6db5a4a8df69333fc276129ac6c7808ea0. Aug 19 08:15:59.994397 systemd[1]: Started cri-containerd-436aa38e7889c1ed502a8a1c88f38b26328de4b9543b83a22e675780aa57dc1a.scope - libcontainer container 436aa38e7889c1ed502a8a1c88f38b26328de4b9543b83a22e675780aa57dc1a. Aug 19 08:16:00.088071 containerd[1576]: time="2025-08-19T08:16:00.087885551Z" level=info msg="StartContainer for \"436aa38e7889c1ed502a8a1c88f38b26328de4b9543b83a22e675780aa57dc1a\" returns successfully" Aug 19 08:16:00.101869 containerd[1576]: time="2025-08-19T08:16:00.101808344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-jq7zv,Uid:89758a6e-68c1-45be-8224-cc32e8bbabdd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8c417c5798aa07eb72884137fa12ba6db5a4a8df69333fc276129ac6c7808ea0\"" Aug 19 08:16:00.105772 containerd[1576]: time="2025-08-19T08:16:00.105708137Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 19 08:16:01.450230 kubelet[2755]: I0819 08:16:01.450138 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kkh8w" podStartSLOduration=2.450107339 podStartE2EDuration="2.450107339s" podCreationTimestamp="2025-08-19 08:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:00.429966994 +0000 UTC m=+8.301814970" watchObservedRunningTime="2025-08-19 08:16:01.450107339 +0000 UTC m=+9.321955303" Aug 19 08:16:02.035509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898211854.mount: Deactivated successfully. 
Aug 19 08:16:03.168717 containerd[1576]: time="2025-08-19T08:16:03.168631234Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:03.170313 containerd[1576]: time="2025-08-19T08:16:03.170262296Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 19 08:16:03.172030 containerd[1576]: time="2025-08-19T08:16:03.171920173Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:03.177140 containerd[1576]: time="2025-08-19T08:16:03.177056962Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:03.178683 containerd[1576]: time="2025-08-19T08:16:03.178015638Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.072032061s" Aug 19 08:16:03.178683 containerd[1576]: time="2025-08-19T08:16:03.178075008Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 19 08:16:03.183686 containerd[1576]: time="2025-08-19T08:16:03.183639118Z" level=info msg="CreateContainer within sandbox \"8c417c5798aa07eb72884137fa12ba6db5a4a8df69333fc276129ac6c7808ea0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 19 08:16:03.194806 containerd[1576]: time="2025-08-19T08:16:03.194723717Z" level=info msg="Container 58e346d9f74e830cefe6704b64dd17c3027df0ac679cf200595b8cdf7ea47b34: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:03.210579 containerd[1576]: time="2025-08-19T08:16:03.210506770Z" level=info msg="CreateContainer within sandbox \"8c417c5798aa07eb72884137fa12ba6db5a4a8df69333fc276129ac6c7808ea0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"58e346d9f74e830cefe6704b64dd17c3027df0ac679cf200595b8cdf7ea47b34\"" Aug 19 08:16:03.211388 containerd[1576]: time="2025-08-19T08:16:03.211344162Z" level=info msg="StartContainer for \"58e346d9f74e830cefe6704b64dd17c3027df0ac679cf200595b8cdf7ea47b34\"" Aug 19 08:16:03.213208 containerd[1576]: time="2025-08-19T08:16:03.213161142Z" level=info msg="connecting to shim 58e346d9f74e830cefe6704b64dd17c3027df0ac679cf200595b8cdf7ea47b34" address="unix:///run/containerd/s/a0b7b7df3e804475c0edc656124c28db5a8ef4c221b95fd9d819b8bdb085c64c" protocol=ttrpc version=3 Aug 19 08:16:03.249082 systemd[1]: Started cri-containerd-58e346d9f74e830cefe6704b64dd17c3027df0ac679cf200595b8cdf7ea47b34.scope - libcontainer container 58e346d9f74e830cefe6704b64dd17c3027df0ac679cf200595b8cdf7ea47b34. 
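The pull of quay.io/tigera/operator:v1.38.3 above goes through the CRI ImageService: containerd emits ImageCreate events for the tag, the image config blob, and the repo digest, and kubelet logs the resolved sha256 reference together with the elapsed time (about 3.07 s here). A minimal sketch of that call, again assuming containerd's default socket path:

```go
// Sketch only: CRI ImageService pull corresponding to the PullImage /
// "Pulled image ... in 3.072032061s" lines above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)

	start := time.Now()
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.3"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd returns the digest-resolved reference, which kubelet logs as
	// "returns image reference sha256:...".
	log.Printf("pulled %s in %s", resp.ImageRef, time.Since(start))
}
```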
Aug 19 08:16:03.295879 containerd[1576]: time="2025-08-19T08:16:03.295799600Z" level=info msg="StartContainer for \"58e346d9f74e830cefe6704b64dd17c3027df0ac679cf200595b8cdf7ea47b34\" returns successfully" Aug 19 08:16:03.654715 kubelet[2755]: I0819 08:16:03.654624 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-jq7zv" podStartSLOduration=1.580208232 podStartE2EDuration="4.654592984s" podCreationTimestamp="2025-08-19 08:15:59 +0000 UTC" firstStartedPulling="2025-08-19 08:16:00.105157095 +0000 UTC m=+7.977005051" lastFinishedPulling="2025-08-19 08:16:03.179541844 +0000 UTC m=+11.051389803" observedRunningTime="2025-08-19 08:16:03.44068819 +0000 UTC m=+11.312536157" watchObservedRunningTime="2025-08-19 08:16:03.654592984 +0000 UTC m=+11.526440948" Aug 19 08:16:04.041870 update_engine[1508]: I20250819 08:16:04.041615 1508 update_attempter.cc:509] Updating boot flags... Aug 19 08:16:11.004528 sudo[1838]: pam_unix(sudo:session): session closed for user root Aug 19 08:16:11.055762 sshd[1837]: Connection closed by 147.75.109.163 port 44676 Aug 19 08:16:11.056642 sshd-session[1834]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:11.074563 systemd[1]: sshd@6-10.128.0.35:22-147.75.109.163:44676.service: Deactivated successfully. Aug 19 08:16:11.080413 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 08:16:11.080828 systemd[1]: session-7.scope: Consumed 6.830s CPU time, 225.7M memory peak. Aug 19 08:16:11.084349 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. Aug 19 08:16:11.089561 systemd-logind[1505]: Removed session 7. Aug 19 08:16:17.457802 systemd[1]: Created slice kubepods-besteffort-poddb977c24_95e9_4d32_b0f3_9767110d86d0.slice - libcontainer container kubepods-besteffort-poddb977c24_95e9_4d32_b0f3_9767110d86d0.slice. Aug 19 08:16:17.495222 kubelet[2755]: I0819 08:16:17.495145 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db977c24-95e9-4d32-b0f3-9767110d86d0-tigera-ca-bundle\") pod \"calico-typha-548dd68f66-k78t9\" (UID: \"db977c24-95e9-4d32-b0f3-9767110d86d0\") " pod="calico-system/calico-typha-548dd68f66-k78t9" Aug 19 08:16:17.498828 kubelet[2755]: I0819 08:16:17.496156 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/db977c24-95e9-4d32-b0f3-9767110d86d0-typha-certs\") pod \"calico-typha-548dd68f66-k78t9\" (UID: \"db977c24-95e9-4d32-b0f3-9767110d86d0\") " pod="calico-system/calico-typha-548dd68f66-k78t9" Aug 19 08:16:17.498828 kubelet[2755]: I0819 08:16:17.496353 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9l7n\" (UniqueName: \"kubernetes.io/projected/db977c24-95e9-4d32-b0f3-9767110d86d0-kube-api-access-b9l7n\") pod \"calico-typha-548dd68f66-k78t9\" (UID: \"db977c24-95e9-4d32-b0f3-9767110d86d0\") " pod="calico-system/calico-typha-548dd68f66-k78t9" Aug 19 08:16:17.756503 systemd[1]: Created slice kubepods-besteffort-pod4977773d_1181_470b_8d93_343d1f3d9a59.slice - libcontainer container kubepods-besteffort-pod4977773d_1181_470b_8d93_343d1f3d9a59.slice. 
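The pod_startup_latency_tracker line for tigera-operator-5bf8dfcb4-jq7zv above reports two figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). That is why the earlier kube-proxy and control-plane pods, whose pull timestamps are zero, show identical values for both. A small sketch reproducing the arithmetic from the timestamps in that log line; the tracker itself uses monotonic clock readings, so the last few nanoseconds differ.

```go
// Sketch only: reproduce the podStartE2EDuration / podStartSLOduration
// arithmetic from the tigera-operator log line above. Timestamps are copied
// from the log; the real bookkeeping lives in kubelet's pod_startup_latency_tracker.go.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-19 08:15:59 +0000 UTC")
	firstPull := mustParse("2025-08-19 08:16:00.105157095 +0000 UTC")
	lastPull := mustParse("2025-08-19 08:16:03.179541844 +0000 UTC")
	observedRunning := mustParse("2025-08-19 08:16:03.654592984 +0000 UTC")

	e2e := observedRunning.Sub(created)  // podStartE2EDuration: 4.654592984s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull time excluded

	// Prints 4.654592984s and 1.580208235s; the log shows 1.580208232s because
	// the tracker subtracts monotonic (m=+...) readings rather than wall clocks.
	fmt.Printf("E2E=%v SLO=%v\n", e2e, slo)
}
```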
Aug 19 08:16:17.770145 containerd[1576]: time="2025-08-19T08:16:17.770073004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-548dd68f66-k78t9,Uid:db977c24-95e9-4d32-b0f3-9767110d86d0,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:17.799079 kubelet[2755]: I0819 08:16:17.799027 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-cni-log-dir\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.802764 kubelet[2755]: I0819 08:16:17.802374 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-flexvol-driver-host\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.802764 kubelet[2755]: I0819 08:16:17.802433 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-policysync\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.802764 kubelet[2755]: I0819 08:16:17.802462 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh8l2\" (UniqueName: \"kubernetes.io/projected/4977773d-1181-470b-8d93-343d1f3d9a59-kube-api-access-vh8l2\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.802764 kubelet[2755]: I0819 08:16:17.802489 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-cni-bin-dir\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.802764 kubelet[2755]: I0819 08:16:17.802516 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-lib-modules\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.803179 kubelet[2755]: I0819 08:16:17.802544 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4977773d-1181-470b-8d93-343d1f3d9a59-tigera-ca-bundle\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.803179 kubelet[2755]: I0819 08:16:17.802576 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-cni-net-dir\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.803179 kubelet[2755]: I0819 08:16:17.802602 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-var-run-calico\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.803179 kubelet[2755]: I0819 08:16:17.802629 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-xtables-lock\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.803179 kubelet[2755]: I0819 08:16:17.802666 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4977773d-1181-470b-8d93-343d1f3d9a59-node-certs\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.803418 kubelet[2755]: I0819 08:16:17.802695 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4977773d-1181-470b-8d93-343d1f3d9a59-var-lib-calico\") pod \"calico-node-8jtxh\" (UID: \"4977773d-1181-470b-8d93-343d1f3d9a59\") " pod="calico-system/calico-node-8jtxh" Aug 19 08:16:17.828235 containerd[1576]: time="2025-08-19T08:16:17.828142701Z" level=info msg="connecting to shim a23577c94aa00261c515d06b86c36ebab57623569b2b32878ba1bff3256a8e4b" address="unix:///run/containerd/s/12046e1629bcce5924ffe92ff2635e87f10189ac7eb79ec98088cef5cf8bab3b" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:17.909219 kubelet[2755]: E0819 08:16:17.909130 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:17.909219 kubelet[2755]: W0819 08:16:17.909169 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:17.910771 kubelet[2755]: E0819 08:16:17.909355 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:17.911042 systemd[1]: Started cri-containerd-a23577c94aa00261c515d06b86c36ebab57623569b2b32878ba1bff3256a8e4b.scope - libcontainer container a23577c94aa00261c515d06b86c36ebab57623569b2b32878ba1bff3256a8e4b. Aug 19 08:16:17.928255 kubelet[2755]: E0819 08:16:17.927605 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:17.928255 kubelet[2755]: W0819 08:16:17.928124 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:17.928255 kubelet[2755]: E0819 08:16:17.928182 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:17.964148 kubelet[2755]: E0819 08:16:17.963993 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:17.964148 kubelet[2755]: W0819 08:16:17.964035 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:17.964148 kubelet[2755]: E0819 08:16:17.964074 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.065846 containerd[1576]: time="2025-08-19T08:16:18.065597326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8jtxh,Uid:4977773d-1181-470b-8d93-343d1f3d9a59,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:18.117372 containerd[1576]: time="2025-08-19T08:16:18.117196679Z" level=info msg="connecting to shim 76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682" address="unix:///run/containerd/s/553ba68f8907c16b92d5edf979f05d2e7a24350de45ae8ee107d114b6318d404" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:18.175586 systemd[1]: Started cri-containerd-76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682.scope - libcontainer container 76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682. Aug 19 08:16:18.272805 kubelet[2755]: E0819 08:16:18.272046 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fl6vv" podUID="a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0" Aug 19 08:16:18.288767 kubelet[2755]: E0819 08:16:18.287812 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.288767 kubelet[2755]: W0819 08:16:18.287850 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.288767 kubelet[2755]: E0819 08:16:18.287888 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.289353 kubelet[2755]: E0819 08:16:18.289327 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.290846 kubelet[2755]: W0819 08:16:18.289771 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.290846 kubelet[2755]: E0819 08:16:18.289813 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.291193 kubelet[2755]: E0819 08:16:18.291055 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.291193 kubelet[2755]: W0819 08:16:18.291090 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.291193 kubelet[2755]: E0819 08:16:18.291112 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.292172 kubelet[2755]: E0819 08:16:18.291966 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.292172 kubelet[2755]: W0819 08:16:18.291986 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.292172 kubelet[2755]: E0819 08:16:18.292006 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.292578 kubelet[2755]: E0819 08:16:18.292560 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.293815 kubelet[2755]: W0819 08:16:18.292674 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.293815 kubelet[2755]: E0819 08:16:18.292701 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.294898 kubelet[2755]: E0819 08:16:18.294388 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.294898 kubelet[2755]: W0819 08:16:18.294406 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.294898 kubelet[2755]: E0819 08:16:18.294426 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.295650 kubelet[2755]: E0819 08:16:18.295417 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.295650 kubelet[2755]: W0819 08:16:18.295549 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.295650 kubelet[2755]: E0819 08:16:18.295574 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.296387 kubelet[2755]: E0819 08:16:18.296210 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.296387 kubelet[2755]: W0819 08:16:18.296276 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.297081 kubelet[2755]: E0819 08:16:18.296299 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.298981 kubelet[2755]: E0819 08:16:18.298905 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.299300 kubelet[2755]: W0819 08:16:18.299167 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.299748 kubelet[2755]: E0819 08:16:18.299390 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.302124 kubelet[2755]: E0819 08:16:18.301868 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.302124 kubelet[2755]: W0819 08:16:18.301890 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.302124 kubelet[2755]: E0819 08:16:18.301909 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.304141 kubelet[2755]: E0819 08:16:18.303949 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.304141 kubelet[2755]: W0819 08:16:18.303969 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.304141 kubelet[2755]: E0819 08:16:18.303989 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.308000 kubelet[2755]: E0819 08:16:18.307961 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.308479 kubelet[2755]: W0819 08:16:18.308139 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.308479 kubelet[2755]: E0819 08:16:18.308318 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.309904 kubelet[2755]: E0819 08:16:18.309882 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.310347 kubelet[2755]: W0819 08:16:18.310000 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.310347 kubelet[2755]: E0819 08:16:18.310027 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.312955 kubelet[2755]: E0819 08:16:18.312489 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.314287 kubelet[2755]: W0819 08:16:18.314042 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.314796 kubelet[2755]: E0819 08:16:18.314247 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.318880 kubelet[2755]: E0819 08:16:18.317126 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.318880 kubelet[2755]: W0819 08:16:18.317148 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.318880 kubelet[2755]: E0819 08:16:18.318802 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.320481 kubelet[2755]: E0819 08:16:18.319889 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.320481 kubelet[2755]: W0819 08:16:18.319938 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.320481 kubelet[2755]: E0819 08:16:18.319963 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.322329 kubelet[2755]: E0819 08:16:18.322137 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.322329 kubelet[2755]: W0819 08:16:18.322156 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.322329 kubelet[2755]: E0819 08:16:18.322177 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.323024 kubelet[2755]: E0819 08:16:18.323004 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.323165 kubelet[2755]: W0819 08:16:18.323146 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.323405 kubelet[2755]: E0819 08:16:18.323251 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.323788 kubelet[2755]: E0819 08:16:18.323769 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.324062 kubelet[2755]: W0819 08:16:18.323932 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.324062 kubelet[2755]: E0819 08:16:18.323961 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.325418 kubelet[2755]: E0819 08:16:18.325351 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.325418 kubelet[2755]: W0819 08:16:18.325368 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.325418 kubelet[2755]: E0819 08:16:18.325385 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.326247 kubelet[2755]: E0819 08:16:18.326181 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.326247 kubelet[2755]: W0819 08:16:18.326201 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.326247 kubelet[2755]: E0819 08:16:18.326219 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.326724 kubelet[2755]: I0819 08:16:18.326473 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxv6\" (UniqueName: \"kubernetes.io/projected/a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0-kube-api-access-txxv6\") pod \"csi-node-driver-fl6vv\" (UID: \"a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0\") " pod="calico-system/csi-node-driver-fl6vv" Aug 19 08:16:18.327477 kubelet[2755]: E0819 08:16:18.327458 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.327849 kubelet[2755]: W0819 08:16:18.327571 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.327849 kubelet[2755]: E0819 08:16:18.327625 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.330614 kubelet[2755]: E0819 08:16:18.330587 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.330614 kubelet[2755]: W0819 08:16:18.330613 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.330872 kubelet[2755]: E0819 08:16:18.330705 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.333191 kubelet[2755]: E0819 08:16:18.333166 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.333191 kubelet[2755]: W0819 08:16:18.333191 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.333439 kubelet[2755]: E0819 08:16:18.333210 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.333439 kubelet[2755]: I0819 08:16:18.333263 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0-varrun\") pod \"csi-node-driver-fl6vv\" (UID: \"a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0\") " pod="calico-system/csi-node-driver-fl6vv" Aug 19 08:16:18.334412 kubelet[2755]: E0819 08:16:18.334198 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.334412 kubelet[2755]: W0819 08:16:18.334224 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.334412 kubelet[2755]: E0819 08:16:18.334255 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.334412 kubelet[2755]: I0819 08:16:18.334296 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0-registration-dir\") pod \"csi-node-driver-fl6vv\" (UID: \"a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0\") " pod="calico-system/csi-node-driver-fl6vv" Aug 19 08:16:18.336174 kubelet[2755]: E0819 08:16:18.336098 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.336324 kubelet[2755]: W0819 08:16:18.336307 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.336430 kubelet[2755]: E0819 08:16:18.336413 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.337798 kubelet[2755]: E0819 08:16:18.336912 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.337798 kubelet[2755]: W0819 08:16:18.336935 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.337798 kubelet[2755]: E0819 08:16:18.336952 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.337798 kubelet[2755]: I0819 08:16:18.336985 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0-kubelet-dir\") pod \"csi-node-driver-fl6vv\" (UID: \"a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0\") " pod="calico-system/csi-node-driver-fl6vv" Aug 19 08:16:18.338383 kubelet[2755]: E0819 08:16:18.338345 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.338929 kubelet[2755]: W0819 08:16:18.338894 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.339030 kubelet[2755]: E0819 08:16:18.338937 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.339690 kubelet[2755]: E0819 08:16:18.339665 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.339690 kubelet[2755]: W0819 08:16:18.339688 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.339860 kubelet[2755]: E0819 08:16:18.339706 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.340962 kubelet[2755]: E0819 08:16:18.340867 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.340962 kubelet[2755]: W0819 08:16:18.340890 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.340962 kubelet[2755]: E0819 08:16:18.340909 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.340962 kubelet[2755]: I0819 08:16:18.340940 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0-socket-dir\") pod \"csi-node-driver-fl6vv\" (UID: \"a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0\") " pod="calico-system/csi-node-driver-fl6vv" Aug 19 08:16:18.342660 kubelet[2755]: E0819 08:16:18.342533 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.342660 kubelet[2755]: W0819 08:16:18.342556 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.342660 kubelet[2755]: E0819 08:16:18.342575 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.343882 kubelet[2755]: E0819 08:16:18.342924 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.343882 kubelet[2755]: W0819 08:16:18.342938 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.343882 kubelet[2755]: E0819 08:16:18.342954 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.343882 kubelet[2755]: E0819 08:16:18.343230 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.343882 kubelet[2755]: W0819 08:16:18.343243 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.343882 kubelet[2755]: E0819 08:16:18.343257 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.344175 kubelet[2755]: E0819 08:16:18.344103 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.344175 kubelet[2755]: W0819 08:16:18.344119 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.344175 kubelet[2755]: E0819 08:16:18.344136 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.346851 kubelet[2755]: E0819 08:16:18.344896 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.346851 kubelet[2755]: W0819 08:16:18.344916 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.346851 kubelet[2755]: E0819 08:16:18.344933 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.384642 containerd[1576]: time="2025-08-19T08:16:18.384566731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8jtxh,Uid:4977773d-1181-470b-8d93-343d1f3d9a59,Namespace:calico-system,Attempt:0,} returns sandbox id \"76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682\"" Aug 19 08:16:18.390811 containerd[1576]: time="2025-08-19T08:16:18.390684626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 19 08:16:18.392434 containerd[1576]: time="2025-08-19T08:16:18.392323272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-548dd68f66-k78t9,Uid:db977c24-95e9-4d32-b0f3-9767110d86d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"a23577c94aa00261c515d06b86c36ebab57623569b2b32878ba1bff3256a8e4b\"" Aug 19 08:16:18.446403 kubelet[2755]: E0819 08:16:18.446271 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.446403 kubelet[2755]: W0819 08:16:18.446314 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.446403 kubelet[2755]: E0819 08:16:18.446354 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.447332 kubelet[2755]: E0819 08:16:18.447294 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.447975 kubelet[2755]: W0819 08:16:18.447862 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.447975 kubelet[2755]: E0819 08:16:18.447932 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.448528 kubelet[2755]: E0819 08:16:18.448481 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.448528 kubelet[2755]: W0819 08:16:18.448501 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.448974 kubelet[2755]: E0819 08:16:18.448713 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.449397 kubelet[2755]: E0819 08:16:18.449376 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.449719 kubelet[2755]: W0819 08:16:18.449505 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.450167 kubelet[2755]: E0819 08:16:18.450042 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.451260 kubelet[2755]: E0819 08:16:18.451152 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.451260 kubelet[2755]: W0819 08:16:18.451230 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.451694 kubelet[2755]: E0819 08:16:18.451548 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.452349 kubelet[2755]: E0819 08:16:18.452333 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.452937 kubelet[2755]: W0819 08:16:18.452445 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.453452 kubelet[2755]: E0819 08:16:18.453335 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.453452 kubelet[2755]: W0819 08:16:18.453379 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.454076 kubelet[2755]: E0819 08:16:18.453716 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.454451 kubelet[2755]: E0819 08:16:18.454384 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.454941 kubelet[2755]: E0819 08:16:18.454799 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.454941 kubelet[2755]: W0819 08:16:18.454911 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.455293 kubelet[2755]: E0819 08:16:18.455086 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.456260 kubelet[2755]: E0819 08:16:18.456238 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.456753 kubelet[2755]: W0819 08:16:18.456675 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.456955 kubelet[2755]: E0819 08:16:18.456710 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.458056 kubelet[2755]: E0819 08:16:18.457883 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.458056 kubelet[2755]: W0819 08:16:18.457903 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.458056 kubelet[2755]: E0819 08:16:18.457920 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.458580 kubelet[2755]: E0819 08:16:18.458539 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.458580 kubelet[2755]: W0819 08:16:18.458557 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.459919 kubelet[2755]: E0819 08:16:18.459901 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.460162 kubelet[2755]: W0819 08:16:18.460065 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.460162 kubelet[2755]: E0819 08:16:18.460092 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.460697 kubelet[2755]: E0819 08:16:18.460033 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.461291 kubelet[2755]: E0819 08:16:18.461273 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.461818 kubelet[2755]: W0819 08:16:18.461382 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.462029 kubelet[2755]: E0819 08:16:18.462011 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.462391 kubelet[2755]: E0819 08:16:18.462373 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.463780 kubelet[2755]: W0819 08:16:18.462495 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.464037 kubelet[2755]: E0819 08:16:18.463915 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.464259 kubelet[2755]: E0819 08:16:18.464217 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.464259 kubelet[2755]: W0819 08:16:18.464236 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.464482 kubelet[2755]: E0819 08:16:18.464463 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.466022 kubelet[2755]: E0819 08:16:18.465979 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.466022 kubelet[2755]: W0819 08:16:18.465999 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.466293 kubelet[2755]: E0819 08:16:18.466258 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.466689 kubelet[2755]: E0819 08:16:18.466647 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.466689 kubelet[2755]: W0819 08:16:18.466666 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.467002 kubelet[2755]: E0819 08:16:18.466929 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.467427 kubelet[2755]: E0819 08:16:18.467408 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.467889 kubelet[2755]: W0819 08:16:18.467716 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.467889 kubelet[2755]: E0819 08:16:18.467843 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.468476 kubelet[2755]: E0819 08:16:18.468446 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.468667 kubelet[2755]: W0819 08:16:18.468543 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.469023 kubelet[2755]: E0819 08:16:18.468883 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.469730 kubelet[2755]: E0819 08:16:18.469677 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.469730 kubelet[2755]: W0819 08:16:18.469697 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.470505 kubelet[2755]: E0819 08:16:18.470212 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.471938 kubelet[2755]: E0819 08:16:18.471875 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.472178 kubelet[2755]: W0819 08:16:18.472154 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.472968 kubelet[2755]: E0819 08:16:18.472792 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.473399 kubelet[2755]: E0819 08:16:18.473382 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.474172 kubelet[2755]: W0819 08:16:18.474100 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.474366 kubelet[2755]: E0819 08:16:18.474268 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 19 08:16:18.475014 kubelet[2755]: E0819 08:16:18.474971 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.475014 kubelet[2755]: W0819 08:16:18.474990 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.475343 kubelet[2755]: E0819 08:16:18.475263 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.476196 kubelet[2755]: E0819 08:16:18.476145 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.476469 kubelet[2755]: W0819 08:16:18.476450 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.476720 kubelet[2755]: E0819 08:16:18.476635 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.477755 kubelet[2755]: E0819 08:16:18.477640 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.477755 kubelet[2755]: W0819 08:16:18.477663 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.477755 kubelet[2755]: E0819 08:16:18.477682 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:18.496808 kubelet[2755]: E0819 08:16:18.496363 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 19 08:16:18.499550 kubelet[2755]: W0819 08:16:18.497782 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 19 08:16:18.499550 kubelet[2755]: E0819 08:16:18.497843 2755 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 19 08:16:19.442685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount912271678.mount: Deactivated successfully. 
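Annotation: the burst of driver-call errors above comes from the kubelet's FlexVolume prober, which executes each driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument init and expects a JSON status object on stdout. Because nodeagent~uds/uds is not present, the call produces no output at all, and unmarshalling the empty string fails with exactly the "unexpected end of JSON input" seen here. A minimal sketch of that failure mode (the struct fields are illustrative, not kubelet's actual types):

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates the reply a FlexVolume driver prints for "init";
// the exact field set here is an assumption for illustration.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus

	// The missing /opt/libexec/.../nodeagent~uds/uds produces no stdout, so the
	// prober tries to unmarshal "" and gets the error shown in the log above.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input

	// What a healthy driver would answer to "init".
	ok := []byte(`{"status": "Success", "capabilities": {"attach": false}}`)
	if err := json.Unmarshal(ok, &st); err == nil {
		fmt.Println(st.Status, st.Capabilities["attach"]) // Success false
	}
}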
Aug 19 08:16:19.599723 containerd[1576]: time="2025-08-19T08:16:19.599628222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:19.601392 containerd[1576]: time="2025-08-19T08:16:19.601328737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Aug 19 08:16:19.603413 containerd[1576]: time="2025-08-19T08:16:19.603185612Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:19.607381 containerd[1576]: time="2025-08-19T08:16:19.607308250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:19.608386 containerd[1576]: time="2025-08-19T08:16:19.608166116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.216855922s" Aug 19 08:16:19.608386 containerd[1576]: time="2025-08-19T08:16:19.608215811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 19 08:16:19.610374 containerd[1576]: time="2025-08-19T08:16:19.610008578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 19 08:16:19.613816 containerd[1576]: time="2025-08-19T08:16:19.613720532Z" level=info msg="CreateContainer within sandbox \"76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 19 08:16:19.627984 containerd[1576]: time="2025-08-19T08:16:19.627927921Z" level=info msg="Container ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:19.646905 containerd[1576]: time="2025-08-19T08:16:19.646770343Z" level=info msg="CreateContainer within sandbox \"76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec\"" Aug 19 08:16:19.648374 containerd[1576]: time="2025-08-19T08:16:19.648331630Z" level=info msg="StartContainer for \"ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec\"" Aug 19 08:16:19.653995 containerd[1576]: time="2025-08-19T08:16:19.653867891Z" level=info msg="connecting to shim ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec" address="unix:///run/containerd/s/553ba68f8907c16b92d5edf979f05d2e7a24350de45ae8ee107d114b6318d404" protocol=ttrpc version=3 Aug 19 08:16:19.691060 systemd[1]: Started cri-containerd-ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec.scope - libcontainer container ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec. 
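Annotation: the containerd entries above record how long each Calico image pull took ("... in 1.216855922s" for pod2daemon-flexvol). A hypothetical helper, not part of containerd, for extracting those figures from a saved journal; the regular expression assumes the escaped quoting shown in these lines:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the escaped quoting used in the journal lines above, e.g.
//   msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" ... in 2.612031226s"
var pulled = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".* in ([^ "\\]+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		m := pulled.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		if d, err := time.ParseDuration(m[2]); err == nil {
			fmt.Printf("%-55s %v\n", m[1], d)
		}
	}
}

Fed something like `journalctl -o cat -u containerd`, this boot would list pod2daemon-flexvol at ~1.22s, typha at ~2.61s and cni at ~3.24s.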
Aug 19 08:16:19.755130 containerd[1576]: time="2025-08-19T08:16:19.754980078Z" level=info msg="StartContainer for \"ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec\" returns successfully" Aug 19 08:16:19.777265 systemd[1]: cri-containerd-ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec.scope: Deactivated successfully. Aug 19 08:16:19.782664 containerd[1576]: time="2025-08-19T08:16:19.782558571Z" level=info msg="received exit event container_id:\"ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec\" id:\"ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec\" pid:3366 exited_at:{seconds:1755591379 nanos:781136968}" Aug 19 08:16:19.783077 containerd[1576]: time="2025-08-19T08:16:19.782635297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec\" id:\"ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec\" pid:3366 exited_at:{seconds:1755591379 nanos:781136968}" Aug 19 08:16:19.836730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad7061b0b3bae632fe8df275bd55b6672ef1e5ffe2bac864364533e49628b6ec-rootfs.mount: Deactivated successfully. Aug 19 08:16:20.342215 kubelet[2755]: E0819 08:16:20.342142 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fl6vv" podUID="a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0" Aug 19 08:16:22.215499 containerd[1576]: time="2025-08-19T08:16:22.215410680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:22.217080 containerd[1576]: time="2025-08-19T08:16:22.217009730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33740523" Aug 19 08:16:22.218748 containerd[1576]: time="2025-08-19T08:16:22.218585504Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:22.221711 containerd[1576]: time="2025-08-19T08:16:22.221638567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:22.223070 containerd[1576]: time="2025-08-19T08:16:22.222578962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.612031226s" Aug 19 08:16:22.223070 containerd[1576]: time="2025-08-19T08:16:22.222649616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 19 08:16:22.224969 containerd[1576]: time="2025-08-19T08:16:22.224932322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 19 08:16:22.249234 containerd[1576]: time="2025-08-19T08:16:22.249169056Z" level=info msg="CreateContainer within sandbox 
\"a23577c94aa00261c515d06b86c36ebab57623569b2b32878ba1bff3256a8e4b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 19 08:16:22.265372 containerd[1576]: time="2025-08-19T08:16:22.263248449Z" level=info msg="Container e57238f5b8a3bff029f737f9bd6fbd7cbda0aa0a6f1d26306f778b044915771e: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:22.268495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656277560.mount: Deactivated successfully. Aug 19 08:16:22.282403 containerd[1576]: time="2025-08-19T08:16:22.282321507Z" level=info msg="CreateContainer within sandbox \"a23577c94aa00261c515d06b86c36ebab57623569b2b32878ba1bff3256a8e4b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e57238f5b8a3bff029f737f9bd6fbd7cbda0aa0a6f1d26306f778b044915771e\"" Aug 19 08:16:22.283720 containerd[1576]: time="2025-08-19T08:16:22.283457979Z" level=info msg="StartContainer for \"e57238f5b8a3bff029f737f9bd6fbd7cbda0aa0a6f1d26306f778b044915771e\"" Aug 19 08:16:22.286867 containerd[1576]: time="2025-08-19T08:16:22.285774065Z" level=info msg="connecting to shim e57238f5b8a3bff029f737f9bd6fbd7cbda0aa0a6f1d26306f778b044915771e" address="unix:///run/containerd/s/12046e1629bcce5924ffe92ff2635e87f10189ac7eb79ec98088cef5cf8bab3b" protocol=ttrpc version=3 Aug 19 08:16:22.327237 systemd[1]: Started cri-containerd-e57238f5b8a3bff029f737f9bd6fbd7cbda0aa0a6f1d26306f778b044915771e.scope - libcontainer container e57238f5b8a3bff029f737f9bd6fbd7cbda0aa0a6f1d26306f778b044915771e. Aug 19 08:16:22.342652 kubelet[2755]: E0819 08:16:22.342580 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fl6vv" podUID="a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0" Aug 19 08:16:22.421198 containerd[1576]: time="2025-08-19T08:16:22.421006251Z" level=info msg="StartContainer for \"e57238f5b8a3bff029f737f9bd6fbd7cbda0aa0a6f1d26306f778b044915771e\" returns successfully" Aug 19 08:16:23.534430 kubelet[2755]: I0819 08:16:23.534372 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:16:24.341979 kubelet[2755]: E0819 08:16:24.341899 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fl6vv" podUID="a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0" Aug 19 08:16:25.451619 containerd[1576]: time="2025-08-19T08:16:25.451537866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:25.453434 containerd[1576]: time="2025-08-19T08:16:25.453380150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 19 08:16:25.455258 containerd[1576]: time="2025-08-19T08:16:25.455217919Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:25.460537 containerd[1576]: time="2025-08-19T08:16:25.460432843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Aug 19 08:16:25.461961 containerd[1576]: time="2025-08-19T08:16:25.461761987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.236762497s" Aug 19 08:16:25.461961 containerd[1576]: time="2025-08-19T08:16:25.461809352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 19 08:16:25.465967 containerd[1576]: time="2025-08-19T08:16:25.465928122Z" level=info msg="CreateContainer within sandbox \"76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 19 08:16:25.480986 containerd[1576]: time="2025-08-19T08:16:25.480928575Z" level=info msg="Container 657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:25.495663 containerd[1576]: time="2025-08-19T08:16:25.495577329Z" level=info msg="CreateContainer within sandbox \"76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1\"" Aug 19 08:16:25.497168 containerd[1576]: time="2025-08-19T08:16:25.496801778Z" level=info msg="StartContainer for \"657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1\"" Aug 19 08:16:25.500763 containerd[1576]: time="2025-08-19T08:16:25.500622580Z" level=info msg="connecting to shim 657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1" address="unix:///run/containerd/s/553ba68f8907c16b92d5edf979f05d2e7a24350de45ae8ee107d114b6318d404" protocol=ttrpc version=3 Aug 19 08:16:25.541450 systemd[1]: Started cri-containerd-657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1.scope - libcontainer container 657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1. 
Aug 19 08:16:25.615761 containerd[1576]: time="2025-08-19T08:16:25.615651313Z" level=info msg="StartContainer for \"657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1\" returns successfully" Aug 19 08:16:26.341410 kubelet[2755]: E0819 08:16:26.340953 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fl6vv" podUID="a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0" Aug 19 08:16:26.583425 kubelet[2755]: I0819 08:16:26.583321 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-548dd68f66-k78t9" podStartSLOduration=5.754998547 podStartE2EDuration="9.583217125s" podCreationTimestamp="2025-08-19 08:16:17 +0000 UTC" firstStartedPulling="2025-08-19 08:16:18.396458931 +0000 UTC m=+26.268306878" lastFinishedPulling="2025-08-19 08:16:22.224677495 +0000 UTC m=+30.096525456" observedRunningTime="2025-08-19 08:16:22.569522569 +0000 UTC m=+30.441370531" watchObservedRunningTime="2025-08-19 08:16:26.583217125 +0000 UTC m=+34.455065090" Aug 19 08:16:26.640364 systemd[1]: cri-containerd-657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1.scope: Deactivated successfully. Aug 19 08:16:26.642484 containerd[1576]: time="2025-08-19T08:16:26.641621335Z" level=info msg="TaskExit event in podsandbox handler container_id:\"657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1\" id:\"657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1\" pid:3465 exited_at:{seconds:1755591386 nanos:641167899}" Aug 19 08:16:26.642484 containerd[1576]: time="2025-08-19T08:16:26.641897979Z" level=info msg="received exit event container_id:\"657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1\" id:\"657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1\" pid:3465 exited_at:{seconds:1755591386 nanos:641167899}" Aug 19 08:16:26.641890 systemd[1]: cri-containerd-657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1.scope: Consumed 673ms CPU time, 190.8M memory peak, 171.2M written to disk. Aug 19 08:16:26.682457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-657021d114b6cbfebe22c8726335dd1db03c5fdf5daa53e8624ab6b57756abc1-rootfs.mount: Deactivated successfully. Aug 19 08:16:26.701804 kubelet[2755]: I0819 08:16:26.701765 2755 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 19 08:16:26.775535 systemd[1]: Created slice kubepods-burstable-pod2664659c_350b_4ef1_b3a4_6af0b6794bcf.slice - libcontainer container kubepods-burstable-pod2664659c_350b_4ef1_b3a4_6af0b6794bcf.slice. 
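Annotation: the pod_startup_latency_tracker entry above for calico-typha-548dd68f66-k78t9 is internally consistent. Assuming the E2E figure is watchObservedRunningTime minus podCreationTimestamp, and the SLO figure additionally subtracts the image-pull window (taken from the monotonic m=+ offsets), the logged 9.583217125s and 5.754998547s fall out exactly. A quick check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps as printed in the kubelet entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	creation, _ := time.Parse(layout, "2025-08-19 08:16:17 +0000 UTC")
	watched, _ := time.Parse(layout, "2025-08-19 08:16:26.583217125 +0000 UTC")

	e2e := watched.Sub(creation)
	fmt.Println(e2e) // 9.583217125s == podStartE2EDuration

	// Pull window from the monotonic offsets: m=+30.096525456 - m=+26.268306878.
	pull := 30096525456*time.Nanosecond - 26268306878*time.Nanosecond
	fmt.Println(e2e - pull) // 5.754998547s == podStartSLOduration
}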
Aug 19 08:16:26.794065 kubelet[2755]: W0819 08:16:26.794018 2755 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' and this object Aug 19 08:16:26.794372 kubelet[2755]: E0819 08:16:26.794334 2755 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 19 08:16:26.794697 kubelet[2755]: W0819 08:16:26.794671 2755 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' and this object Aug 19 08:16:26.794896 kubelet[2755]: E0819 08:16:26.794867 2755 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 19 08:16:26.795155 kubelet[2755]: W0819 08:16:26.795130 2755 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' and this object Aug 19 08:16:26.795340 kubelet[2755]: E0819 08:16:26.795293 2755 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' and this object" logger="UnhandledError" Aug 19 08:16:26.805287 systemd[1]: Created slice kubepods-burstable-pod3005f44f_eabf_4645_adf8_5e4625b7b0eb.slice - libcontainer container kubepods-burstable-pod3005f44f_eabf_4645_adf8_5e4625b7b0eb.slice. 
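Annotation: the three reflector failures above are the node authorizer at work rather than a broken RBAC setup: a kubelet may only read a secret or configmap once a pod bound to its node actually references it, and these warnings typically appear transiently right after pods are scheduled, before the authorizer's pod-to-node index catches up. A toy model of the rule (nothing like the real graph-based authorizer, just the shape of the check):

package main

import "fmt"

type object struct{ namespace, name string }

type pod struct {
	node       string
	configMaps []object
}

// nodeCanGet is a toy stand-in: access is granted only if some pod bound to
// the node references the object.
func nodeCanGet(node string, obj object, pods []pod) bool {
	for _, p := range pods {
		if p.node != node {
			continue
		}
		for _, r := range p.configMaps {
			if r == obj {
				return true
			}
		}
	}
	return false
}

func main() {
	node := "ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal"
	crt := object{"calico-apiserver", "kube-root-ca.crt"}

	// No calico-apiserver pod is indexed against the node yet: denied.
	fmt.Println(nodeCanGet(node, crt, nil)) // false

	// Once such a pod is bound here, the same request is allowed.
	bound := []pod{{node: node, configMaps: []object{crt}}}
	fmt.Println(nodeCanGet(node, crt, bound)) // true
}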
Aug 19 08:16:26.828102 systemd[1]: Created slice kubepods-besteffort-pod291128e7_663d_4ca0_8c29_b03f9e74e30e.slice - libcontainer container kubepods-besteffort-pod291128e7_663d_4ca0_8c29_b03f9e74e30e.slice. Aug 19 08:16:26.829722 kubelet[2755]: I0819 08:16:26.829642 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7478dd9c-2565-4ce7-bce4-6692081159c8-goldmane-key-pair\") pod \"goldmane-58fd7646b9-r4ttv\" (UID: \"7478dd9c-2565-4ce7-bce4-6692081159c8\") " pod="calico-system/goldmane-58fd7646b9-r4ttv" Aug 19 08:16:26.831389 kubelet[2755]: I0819 08:16:26.831283 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291128e7-663d-4ca0-8c29-b03f9e74e30e-tigera-ca-bundle\") pod \"calico-kube-controllers-b956ccb5d-k9v2m\" (UID: \"291128e7-663d-4ca0-8c29-b03f9e74e30e\") " pod="calico-system/calico-kube-controllers-b956ccb5d-k9v2m" Aug 19 08:16:26.832850 kubelet[2755]: I0819 08:16:26.831595 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhkx\" (UniqueName: \"kubernetes.io/projected/2664659c-350b-4ef1-b3a4-6af0b6794bcf-kube-api-access-tkhkx\") pod \"coredns-7c65d6cfc9-rz9fq\" (UID: \"2664659c-350b-4ef1-b3a4-6af0b6794bcf\") " pod="kube-system/coredns-7c65d6cfc9-rz9fq" Aug 19 08:16:26.833206 kubelet[2755]: I0819 08:16:26.833180 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc5b4\" (UniqueName: \"kubernetes.io/projected/291128e7-663d-4ca0-8c29-b03f9e74e30e-kube-api-access-vc5b4\") pod \"calico-kube-controllers-b956ccb5d-k9v2m\" (UID: \"291128e7-663d-4ca0-8c29-b03f9e74e30e\") " pod="calico-system/calico-kube-controllers-b956ccb5d-k9v2m" Aug 19 08:16:26.833535 kubelet[2755]: I0819 08:16:26.833310 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdmp\" (UniqueName: \"kubernetes.io/projected/3005f44f-eabf-4645-adf8-5e4625b7b0eb-kube-api-access-zzdmp\") pod \"coredns-7c65d6cfc9-99mdx\" (UID: \"3005f44f-eabf-4645-adf8-5e4625b7b0eb\") " pod="kube-system/coredns-7c65d6cfc9-99mdx" Aug 19 08:16:26.834133 kubelet[2755]: I0819 08:16:26.833925 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7478dd9c-2565-4ce7-bce4-6692081159c8-config\") pod \"goldmane-58fd7646b9-r4ttv\" (UID: \"7478dd9c-2565-4ce7-bce4-6692081159c8\") " pod="calico-system/goldmane-58fd7646b9-r4ttv" Aug 19 08:16:26.834133 kubelet[2755]: I0819 08:16:26.834093 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efbfd7b7-201e-404e-a212-64e1a3740554-whisker-backend-key-pair\") pod \"whisker-86c4b6bbd6-qslwz\" (UID: \"efbfd7b7-201e-404e-a212-64e1a3740554\") " pod="calico-system/whisker-86c4b6bbd6-qslwz" Aug 19 08:16:26.835541 kubelet[2755]: I0819 08:16:26.834619 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a-calico-apiserver-certs\") pod \"calico-apiserver-84b5f88759-fhjwh\" (UID: \"c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a\") " 
pod="calico-apiserver/calico-apiserver-84b5f88759-fhjwh" Aug 19 08:16:26.835541 kubelet[2755]: I0819 08:16:26.835285 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhr4r\" (UniqueName: \"kubernetes.io/projected/c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a-kube-api-access-bhr4r\") pod \"calico-apiserver-84b5f88759-fhjwh\" (UID: \"c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a\") " pod="calico-apiserver/calico-apiserver-84b5f88759-fhjwh" Aug 19 08:16:26.835541 kubelet[2755]: I0819 08:16:26.835315 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b54fw\" (UniqueName: \"kubernetes.io/projected/efbfd7b7-201e-404e-a212-64e1a3740554-kube-api-access-b54fw\") pod \"whisker-86c4b6bbd6-qslwz\" (UID: \"efbfd7b7-201e-404e-a212-64e1a3740554\") " pod="calico-system/whisker-86c4b6bbd6-qslwz" Aug 19 08:16:26.835541 kubelet[2755]: I0819 08:16:26.835354 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7478dd9c-2565-4ce7-bce4-6692081159c8-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-r4ttv\" (UID: \"7478dd9c-2565-4ce7-bce4-6692081159c8\") " pod="calico-system/goldmane-58fd7646b9-r4ttv" Aug 19 08:16:26.835541 kubelet[2755]: I0819 08:16:26.835392 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29fv8\" (UniqueName: \"kubernetes.io/projected/6f94de95-4727-4949-9336-2ec5fea08c40-kube-api-access-29fv8\") pod \"calico-apiserver-84b5f88759-wqrqx\" (UID: \"6f94de95-4727-4949-9336-2ec5fea08c40\") " pod="calico-apiserver/calico-apiserver-84b5f88759-wqrqx" Aug 19 08:16:26.835994 kubelet[2755]: I0819 08:16:26.835421 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2664659c-350b-4ef1-b3a4-6af0b6794bcf-config-volume\") pod \"coredns-7c65d6cfc9-rz9fq\" (UID: \"2664659c-350b-4ef1-b3a4-6af0b6794bcf\") " pod="kube-system/coredns-7c65d6cfc9-rz9fq" Aug 19 08:16:26.835994 kubelet[2755]: I0819 08:16:26.835451 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3005f44f-eabf-4645-adf8-5e4625b7b0eb-config-volume\") pod \"coredns-7c65d6cfc9-99mdx\" (UID: \"3005f44f-eabf-4645-adf8-5e4625b7b0eb\") " pod="kube-system/coredns-7c65d6cfc9-99mdx" Aug 19 08:16:26.835994 kubelet[2755]: I0819 08:16:26.835479 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f94de95-4727-4949-9336-2ec5fea08c40-calico-apiserver-certs\") pod \"calico-apiserver-84b5f88759-wqrqx\" (UID: \"6f94de95-4727-4949-9336-2ec5fea08c40\") " pod="calico-apiserver/calico-apiserver-84b5f88759-wqrqx" Aug 19 08:16:26.835994 kubelet[2755]: I0819 08:16:26.835522 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485v8\" (UniqueName: \"kubernetes.io/projected/7478dd9c-2565-4ce7-bce4-6692081159c8-kube-api-access-485v8\") pod \"goldmane-58fd7646b9-r4ttv\" (UID: \"7478dd9c-2565-4ce7-bce4-6692081159c8\") " pod="calico-system/goldmane-58fd7646b9-r4ttv" Aug 19 08:16:26.835994 kubelet[2755]: I0819 08:16:26.835556 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efbfd7b7-201e-404e-a212-64e1a3740554-whisker-ca-bundle\") pod \"whisker-86c4b6bbd6-qslwz\" (UID: \"efbfd7b7-201e-404e-a212-64e1a3740554\") " pod="calico-system/whisker-86c4b6bbd6-qslwz" Aug 19 08:16:26.847448 systemd[1]: Created slice kubepods-besteffort-pod6f94de95_4727_4949_9336_2ec5fea08c40.slice - libcontainer container kubepods-besteffort-pod6f94de95_4727_4949_9336_2ec5fea08c40.slice. Aug 19 08:16:26.861857 systemd[1]: Created slice kubepods-besteffort-podc5bc8e1a_e7cd_498c_8d99_f5b7268e3e8a.slice - libcontainer container kubepods-besteffort-podc5bc8e1a_e7cd_498c_8d99_f5b7268e3e8a.slice. Aug 19 08:16:26.878345 systemd[1]: Created slice kubepods-besteffort-podefbfd7b7_201e_404e_a212_64e1a3740554.slice - libcontainer container kubepods-besteffort-podefbfd7b7_201e_404e_a212_64e1a3740554.slice. Aug 19 08:16:26.889956 systemd[1]: Created slice kubepods-besteffort-pod7478dd9c_2565_4ce7_bce4_6692081159c8.slice - libcontainer container kubepods-besteffort-pod7478dd9c_2565_4ce7_bce4_6692081159c8.slice. Aug 19 08:16:27.094497 containerd[1576]: time="2025-08-19T08:16:27.094416982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rz9fq,Uid:2664659c-350b-4ef1-b3a4-6af0b6794bcf,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:27.120259 containerd[1576]: time="2025-08-19T08:16:27.120185934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-99mdx,Uid:3005f44f-eabf-4645-adf8-5e4625b7b0eb,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:27.139173 containerd[1576]: time="2025-08-19T08:16:27.139019183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b956ccb5d-k9v2m,Uid:291128e7-663d-4ca0-8c29-b03f9e74e30e,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:27.196362 containerd[1576]: time="2025-08-19T08:16:27.196163632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-r4ttv,Uid:7478dd9c-2565-4ce7-bce4-6692081159c8,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:27.847895 containerd[1576]: time="2025-08-19T08:16:27.847288419Z" level=error msg="Failed to destroy network for sandbox \"c2a7b6c79b3f1b0221b56ade39f88edb53617d36ea7570e84fdc9d73b5e56caf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.860355 systemd[1]: run-netns-cni\x2df053a72b\x2da7d0\x2d74d5\x2de3e5\x2d781e06577c5b.mount: Deactivated successfully. 
Aug 19 08:16:27.866687 containerd[1576]: time="2025-08-19T08:16:27.866521415Z" level=error msg="Failed to destroy network for sandbox \"d156fac85ea5c05c31d14be898e450cf1e37ab640ddb72886c6df9bcc2800459\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.869543 containerd[1576]: time="2025-08-19T08:16:27.868125406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-99mdx,Uid:3005f44f-eabf-4645-adf8-5e4625b7b0eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a7b6c79b3f1b0221b56ade39f88edb53617d36ea7570e84fdc9d73b5e56caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.869748 kubelet[2755]: E0819 08:16:27.869619 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a7b6c79b3f1b0221b56ade39f88edb53617d36ea7570e84fdc9d73b5e56caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.869748 kubelet[2755]: E0819 08:16:27.869709 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a7b6c79b3f1b0221b56ade39f88edb53617d36ea7570e84fdc9d73b5e56caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-99mdx" Aug 19 08:16:27.872425 kubelet[2755]: E0819 08:16:27.869758 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a7b6c79b3f1b0221b56ade39f88edb53617d36ea7570e84fdc9d73b5e56caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-99mdx" Aug 19 08:16:27.872425 kubelet[2755]: E0819 08:16:27.869833 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-99mdx_kube-system(3005f44f-eabf-4645-adf8-5e4625b7b0eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-99mdx_kube-system(3005f44f-eabf-4645-adf8-5e4625b7b0eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2a7b6c79b3f1b0221b56ade39f88edb53617d36ea7570e84fdc9d73b5e56caf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-99mdx" podUID="3005f44f-eabf-4645-adf8-5e4625b7b0eb" Aug 19 08:16:27.871691 systemd[1]: run-netns-cni\x2dcad9e1ac\x2d3a33\x2d2275\x2d660a\x2da460b07bd3d8.mount: Deactivated successfully. 
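Annotation: every sandbox failure in this stretch bottoms out in the same stat: the Calico CNI plugin refuses both ADD and DEL until /var/lib/calico/nodename exists, and per its own error message that file appears only once the calico/node container (whose install-cni init step just completed above) is running with /var/lib/calico/ mounted. These errors are therefore expected to clear on their own. A sketch of the check one would run by hand while waiting, with the path taken verbatim from the error:

package main

import (
	"fmt"
	"os"
)

func main() {
	const nodename = "/var/lib/calico/nodename"

	b, err := os.ReadFile(nodename)
	if os.IsNotExist(err) {
		// Same condition the plugin reports: calico/node has not yet written
		// its node name, so pod networking cannot be set up or torn down.
		fmt.Println("not ready:", err)
		return
	}
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("calico node name: %q\n", string(b))
}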
Aug 19 08:16:27.875028 containerd[1576]: time="2025-08-19T08:16:27.874984567Z" level=error msg="Failed to destroy network for sandbox \"faa5c435278500e336b120733c2d9b32456d06606e898ccb96706b6eea1f5e26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.875767 containerd[1576]: time="2025-08-19T08:16:27.875283645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rz9fq,Uid:2664659c-350b-4ef1-b3a4-6af0b6794bcf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d156fac85ea5c05c31d14be898e450cf1e37ab640ddb72886c6df9bcc2800459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.876610 kubelet[2755]: E0819 08:16:27.876526 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d156fac85ea5c05c31d14be898e450cf1e37ab640ddb72886c6df9bcc2800459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.876725 kubelet[2755]: E0819 08:16:27.876605 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d156fac85ea5c05c31d14be898e450cf1e37ab640ddb72886c6df9bcc2800459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rz9fq" Aug 19 08:16:27.876725 kubelet[2755]: E0819 08:16:27.876637 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d156fac85ea5c05c31d14be898e450cf1e37ab640ddb72886c6df9bcc2800459\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rz9fq" Aug 19 08:16:27.877908 kubelet[2755]: E0819 08:16:27.876989 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rz9fq_kube-system(2664659c-350b-4ef1-b3a4-6af0b6794bcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rz9fq_kube-system(2664659c-350b-4ef1-b3a4-6af0b6794bcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d156fac85ea5c05c31d14be898e450cf1e37ab640ddb72886c6df9bcc2800459\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rz9fq" podUID="2664659c-350b-4ef1-b3a4-6af0b6794bcf" Aug 19 08:16:27.878620 containerd[1576]: time="2025-08-19T08:16:27.878470830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b956ccb5d-k9v2m,Uid:291128e7-663d-4ca0-8c29-b03f9e74e30e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"faa5c435278500e336b120733c2d9b32456d06606e898ccb96706b6eea1f5e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.879866 kubelet[2755]: E0819 08:16:27.879021 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa5c435278500e336b120733c2d9b32456d06606e898ccb96706b6eea1f5e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.879866 kubelet[2755]: E0819 08:16:27.879083 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa5c435278500e336b120733c2d9b32456d06606e898ccb96706b6eea1f5e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b956ccb5d-k9v2m" Aug 19 08:16:27.879866 kubelet[2755]: E0819 08:16:27.879115 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa5c435278500e336b120733c2d9b32456d06606e898ccb96706b6eea1f5e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b956ccb5d-k9v2m" Aug 19 08:16:27.880044 kubelet[2755]: E0819 08:16:27.879164 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b956ccb5d-k9v2m_calico-system(291128e7-663d-4ca0-8c29-b03f9e74e30e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b956ccb5d-k9v2m_calico-system(291128e7-663d-4ca0-8c29-b03f9e74e30e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faa5c435278500e336b120733c2d9b32456d06606e898ccb96706b6eea1f5e26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b956ccb5d-k9v2m" podUID="291128e7-663d-4ca0-8c29-b03f9e74e30e" Aug 19 08:16:27.892636 containerd[1576]: time="2025-08-19T08:16:27.892568946Z" level=error msg="Failed to destroy network for sandbox \"b425e66c8070550f627501cd8b9a79ef91d5515883968e9b0e698f70b31b7feb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.894373 containerd[1576]: time="2025-08-19T08:16:27.894318209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-r4ttv,Uid:7478dd9c-2565-4ce7-bce4-6692081159c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b425e66c8070550f627501cd8b9a79ef91d5515883968e9b0e698f70b31b7feb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.894956 
kubelet[2755]: E0819 08:16:27.894601 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b425e66c8070550f627501cd8b9a79ef91d5515883968e9b0e698f70b31b7feb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:27.894956 kubelet[2755]: E0819 08:16:27.894675 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b425e66c8070550f627501cd8b9a79ef91d5515883968e9b0e698f70b31b7feb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-r4ttv" Aug 19 08:16:27.894956 kubelet[2755]: E0819 08:16:27.894699 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b425e66c8070550f627501cd8b9a79ef91d5515883968e9b0e698f70b31b7feb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-r4ttv" Aug 19 08:16:27.895199 kubelet[2755]: E0819 08:16:27.894774 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-r4ttv_calico-system(7478dd9c-2565-4ce7-bce4-6692081159c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-r4ttv_calico-system(7478dd9c-2565-4ce7-bce4-6692081159c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b425e66c8070550f627501cd8b9a79ef91d5515883968e9b0e698f70b31b7feb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-r4ttv" podUID="7478dd9c-2565-4ce7-bce4-6692081159c8" Aug 19 08:16:28.055274 containerd[1576]: time="2025-08-19T08:16:28.055206114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b5f88759-wqrqx,Uid:6f94de95-4727-4949-9336-2ec5fea08c40,Namespace:calico-apiserver,Attempt:0,}" Aug 19 08:16:28.070775 containerd[1576]: time="2025-08-19T08:16:28.069898251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b5f88759-fhjwh,Uid:c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a,Namespace:calico-apiserver,Attempt:0,}" Aug 19 08:16:28.090628 containerd[1576]: time="2025-08-19T08:16:28.090573308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c4b6bbd6-qslwz,Uid:efbfd7b7-201e-404e-a212-64e1a3740554,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:28.203193 containerd[1576]: time="2025-08-19T08:16:28.203013901Z" level=error msg="Failed to destroy network for sandbox \"47e3b09d04460e973d4b53854d485b58cfbb8eb7a12c42fbd4870265dadeb00b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.206661 containerd[1576]: time="2025-08-19T08:16:28.206593522Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-84b5f88759-wqrqx,Uid:6f94de95-4727-4949-9336-2ec5fea08c40,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e3b09d04460e973d4b53854d485b58cfbb8eb7a12c42fbd4870265dadeb00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.209427 kubelet[2755]: E0819 08:16:28.209342 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e3b09d04460e973d4b53854d485b58cfbb8eb7a12c42fbd4870265dadeb00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.209567 kubelet[2755]: E0819 08:16:28.209478 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e3b09d04460e973d4b53854d485b58cfbb8eb7a12c42fbd4870265dadeb00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b5f88759-wqrqx" Aug 19 08:16:28.209567 kubelet[2755]: E0819 08:16:28.209543 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e3b09d04460e973d4b53854d485b58cfbb8eb7a12c42fbd4870265dadeb00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b5f88759-wqrqx" Aug 19 08:16:28.209821 kubelet[2755]: E0819 08:16:28.209778 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b5f88759-wqrqx_calico-apiserver(6f94de95-4727-4949-9336-2ec5fea08c40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b5f88759-wqrqx_calico-apiserver(6f94de95-4727-4949-9336-2ec5fea08c40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47e3b09d04460e973d4b53854d485b58cfbb8eb7a12c42fbd4870265dadeb00b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b5f88759-wqrqx" podUID="6f94de95-4727-4949-9336-2ec5fea08c40" Aug 19 08:16:28.224753 containerd[1576]: time="2025-08-19T08:16:28.224653656Z" level=error msg="Failed to destroy network for sandbox \"681799a600bbc36468740de6261b8d2ac274a4a653c5fd05d3928d66fec49e2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.226227 containerd[1576]: time="2025-08-19T08:16:28.226180506Z" level=error msg="Failed to destroy network for sandbox \"2b54bd4ea2592a9cbb883228bcc2e4cab51047f440d6ea1bf428c9eeb7285684\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 
19 08:16:28.226592 containerd[1576]: time="2025-08-19T08:16:28.226537801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c4b6bbd6-qslwz,Uid:efbfd7b7-201e-404e-a212-64e1a3740554,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"681799a600bbc36468740de6261b8d2ac274a4a653c5fd05d3928d66fec49e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.227094 kubelet[2755]: E0819 08:16:28.227031 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681799a600bbc36468740de6261b8d2ac274a4a653c5fd05d3928d66fec49e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.227322 kubelet[2755]: E0819 08:16:28.227115 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681799a600bbc36468740de6261b8d2ac274a4a653c5fd05d3928d66fec49e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86c4b6bbd6-qslwz" Aug 19 08:16:28.227322 kubelet[2755]: E0819 08:16:28.227149 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681799a600bbc36468740de6261b8d2ac274a4a653c5fd05d3928d66fec49e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86c4b6bbd6-qslwz" Aug 19 08:16:28.227322 kubelet[2755]: E0819 08:16:28.227212 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86c4b6bbd6-qslwz_calico-system(efbfd7b7-201e-404e-a212-64e1a3740554)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86c4b6bbd6-qslwz_calico-system(efbfd7b7-201e-404e-a212-64e1a3740554)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"681799a600bbc36468740de6261b8d2ac274a4a653c5fd05d3928d66fec49e2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86c4b6bbd6-qslwz" podUID="efbfd7b7-201e-404e-a212-64e1a3740554" Aug 19 08:16:28.228383 kubelet[2755]: E0819 08:16:28.228260 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54bd4ea2592a9cbb883228bcc2e4cab51047f440d6ea1bf428c9eeb7285684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.229032 containerd[1576]: time="2025-08-19T08:16:28.227728210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b5f88759-fhjwh,Uid:c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2b54bd4ea2592a9cbb883228bcc2e4cab51047f440d6ea1bf428c9eeb7285684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.229195 kubelet[2755]: E0819 08:16:28.228452 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54bd4ea2592a9cbb883228bcc2e4cab51047f440d6ea1bf428c9eeb7285684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b5f88759-fhjwh" Aug 19 08:16:28.229195 kubelet[2755]: E0819 08:16:28.228625 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b54bd4ea2592a9cbb883228bcc2e4cab51047f440d6ea1bf428c9eeb7285684\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b5f88759-fhjwh" Aug 19 08:16:28.229620 kubelet[2755]: E0819 08:16:28.228839 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b5f88759-fhjwh_calico-apiserver(c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b5f88759-fhjwh_calico-apiserver(c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b54bd4ea2592a9cbb883228bcc2e4cab51047f440d6ea1bf428c9eeb7285684\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b5f88759-fhjwh" podUID="c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a" Aug 19 08:16:28.351656 systemd[1]: Created slice kubepods-besteffort-poda4a2ee0c_c5e9_4e55_a62c_f137f85a33a0.slice - libcontainer container kubepods-besteffort-poda4a2ee0c_c5e9_4e55_a62c_f137f85a33a0.slice. 
Aug 19 08:16:28.355349 containerd[1576]: time="2025-08-19T08:16:28.355302793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fl6vv,Uid:a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:28.426835 containerd[1576]: time="2025-08-19T08:16:28.426755032Z" level=error msg="Failed to destroy network for sandbox \"c8a4016ec2999e50bbe813805485ce154c73b337886380064c749814f2a777f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.428916 containerd[1576]: time="2025-08-19T08:16:28.428732097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fl6vv,Uid:a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a4016ec2999e50bbe813805485ce154c73b337886380064c749814f2a777f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.429311 kubelet[2755]: E0819 08:16:28.429239 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a4016ec2999e50bbe813805485ce154c73b337886380064c749814f2a777f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 19 08:16:28.429429 kubelet[2755]: E0819 08:16:28.429342 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a4016ec2999e50bbe813805485ce154c73b337886380064c749814f2a777f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fl6vv" Aug 19 08:16:28.429429 kubelet[2755]: E0819 08:16:28.429391 2755 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8a4016ec2999e50bbe813805485ce154c73b337886380064c749814f2a777f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fl6vv" Aug 19 08:16:28.429577 kubelet[2755]: E0819 08:16:28.429514 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fl6vv_calico-system(a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fl6vv_calico-system(a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8a4016ec2999e50bbe813805485ce154c73b337886380064c749814f2a777f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fl6vv" podUID="a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0" Aug 19 08:16:28.479325 kubelet[2755]: I0819 08:16:28.478997 2755 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Aug 19 08:16:28.572974 containerd[1576]: time="2025-08-19T08:16:28.572913264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 19 08:16:28.684477 systemd[1]: run-netns-cni\x2df5b7ec1f\x2d6dfa\x2d2de5\x2d32ba\x2de557d4d8ae47.mount: Deactivated successfully. Aug 19 08:16:28.684656 systemd[1]: run-netns-cni\x2df039f934\x2ddd6f\x2d0102\x2df19a\x2d12859491881a.mount: Deactivated successfully. Aug 19 08:16:35.383863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149421326.mount: Deactivated successfully. Aug 19 08:16:35.427877 containerd[1576]: time="2025-08-19T08:16:35.427805930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:35.429194 containerd[1576]: time="2025-08-19T08:16:35.429118412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 19 08:16:35.431016 containerd[1576]: time="2025-08-19T08:16:35.430919664Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:35.433913 containerd[1576]: time="2025-08-19T08:16:35.433863447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:35.434878 containerd[1576]: time="2025-08-19T08:16:35.434669789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.861692739s" Aug 19 08:16:35.434878 containerd[1576]: time="2025-08-19T08:16:35.434720249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 19 08:16:35.461433 containerd[1576]: time="2025-08-19T08:16:35.461379857Z" level=info msg="CreateContainer within sandbox \"76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 19 08:16:35.478007 containerd[1576]: time="2025-08-19T08:16:35.477908049Z" level=info msg="Container f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:35.493423 containerd[1576]: time="2025-08-19T08:16:35.493347856Z" level=info msg="CreateContainer within sandbox \"76d0bca6e8410dda2efc18a0af31132bb4d751f1aa86903fb2d16a911aaf4682\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c\"" Aug 19 08:16:35.494914 containerd[1576]: time="2025-08-19T08:16:35.494054482Z" level=info msg="StartContainer for \"f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c\"" Aug 19 08:16:35.497284 containerd[1576]: time="2025-08-19T08:16:35.497221877Z" level=info msg="connecting to shim f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c" address="unix:///run/containerd/s/553ba68f8907c16b92d5edf979f05d2e7a24350de45ae8ee107d114b6318d404" protocol=ttrpc version=3 Aug 19 08:16:35.530060 systemd[1]: Started 
cri-containerd-f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c.scope - libcontainer container f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c. Aug 19 08:16:35.602143 containerd[1576]: time="2025-08-19T08:16:35.602027950Z" level=info msg="StartContainer for \"f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c\" returns successfully" Aug 19 08:16:35.725536 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 19 08:16:35.725728 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 19 08:16:35.908792 kubelet[2755]: I0819 08:16:35.907794 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efbfd7b7-201e-404e-a212-64e1a3740554-whisker-backend-key-pair\") pod \"efbfd7b7-201e-404e-a212-64e1a3740554\" (UID: \"efbfd7b7-201e-404e-a212-64e1a3740554\") " Aug 19 08:16:35.908792 kubelet[2755]: I0819 08:16:35.907867 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efbfd7b7-201e-404e-a212-64e1a3740554-whisker-ca-bundle\") pod \"efbfd7b7-201e-404e-a212-64e1a3740554\" (UID: \"efbfd7b7-201e-404e-a212-64e1a3740554\") " Aug 19 08:16:35.908792 kubelet[2755]: I0819 08:16:35.907925 2755 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b54fw\" (UniqueName: \"kubernetes.io/projected/efbfd7b7-201e-404e-a212-64e1a3740554-kube-api-access-b54fw\") pod \"efbfd7b7-201e-404e-a212-64e1a3740554\" (UID: \"efbfd7b7-201e-404e-a212-64e1a3740554\") " Aug 19 08:16:35.910528 kubelet[2755]: I0819 08:16:35.910462 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd7b7-201e-404e-a212-64e1a3740554-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "efbfd7b7-201e-404e-a212-64e1a3740554" (UID: "efbfd7b7-201e-404e-a212-64e1a3740554"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 19 08:16:35.917690 kubelet[2755]: I0819 08:16:35.917584 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbfd7b7-201e-404e-a212-64e1a3740554-kube-api-access-b54fw" (OuterVolumeSpecName: "kube-api-access-b54fw") pod "efbfd7b7-201e-404e-a212-64e1a3740554" (UID: "efbfd7b7-201e-404e-a212-64e1a3740554"). InnerVolumeSpecName "kube-api-access-b54fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 08:16:35.921005 kubelet[2755]: I0819 08:16:35.920949 2755 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efbfd7b7-201e-404e-a212-64e1a3740554-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "efbfd7b7-201e-404e-a212-64e1a3740554" (UID: "efbfd7b7-201e-404e-a212-64e1a3740554"). InnerVolumeSpecName "whisker-backend-key-pair".
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 19 08:16:36.009179 kubelet[2755]: I0819 08:16:36.009113 2755 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b54fw\" (UniqueName: \"kubernetes.io/projected/efbfd7b7-201e-404e-a212-64e1a3740554-kube-api-access-b54fw\") on node \"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" DevicePath \"\"" Aug 19 08:16:36.009375 kubelet[2755]: I0819 08:16:36.009196 2755 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efbfd7b7-201e-404e-a212-64e1a3740554-whisker-backend-key-pair\") on node \"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" DevicePath \"\"" Aug 19 08:16:36.009375 kubelet[2755]: I0819 08:16:36.009216 2755 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efbfd7b7-201e-404e-a212-64e1a3740554-whisker-ca-bundle\") on node \"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal\" DevicePath \"\"" Aug 19 08:16:36.351174 systemd[1]: Removed slice kubepods-besteffort-podefbfd7b7_201e_404e_a212_64e1a3740554.slice - libcontainer container kubepods-besteffort-podefbfd7b7_201e_404e_a212_64e1a3740554.slice. Aug 19 08:16:36.382185 systemd[1]: var-lib-kubelet-pods-efbfd7b7\x2d201e\x2d404e\x2da212\x2d64e1a3740554-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db54fw.mount: Deactivated successfully. Aug 19 08:16:36.382361 systemd[1]: var-lib-kubelet-pods-efbfd7b7\x2d201e\x2d404e\x2da212\x2d64e1a3740554-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 19 08:16:36.631026 kubelet[2755]: I0819 08:16:36.630574 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8jtxh" podStartSLOduration=2.583707447 podStartE2EDuration="19.630530535s" podCreationTimestamp="2025-08-19 08:16:17 +0000 UTC" firstStartedPulling="2025-08-19 08:16:18.389172603 +0000 UTC m=+26.261020544" lastFinishedPulling="2025-08-19 08:16:35.435995678 +0000 UTC m=+43.307843632" observedRunningTime="2025-08-19 08:16:36.627374941 +0000 UTC m=+44.499222942" watchObservedRunningTime="2025-08-19 08:16:36.630530535 +0000 UTC m=+44.502378502" Aug 19 08:16:36.702798 systemd[1]: Created slice kubepods-besteffort-pod6c14e16f_66e9_4cca_8b3f_c6f793d138df.slice - libcontainer container kubepods-besteffort-pod6c14e16f_66e9_4cca_8b3f_c6f793d138df.slice. 
Aug 19 08:16:36.715309 kubelet[2755]: I0819 08:16:36.715242 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzqcx\" (UniqueName: \"kubernetes.io/projected/6c14e16f-66e9-4cca-8b3f-c6f793d138df-kube-api-access-gzqcx\") pod \"whisker-6c4b8cc7bb-thjn9\" (UID: \"6c14e16f-66e9-4cca-8b3f-c6f793d138df\") " pod="calico-system/whisker-6c4b8cc7bb-thjn9" Aug 19 08:16:36.715483 kubelet[2755]: I0819 08:16:36.715325 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c14e16f-66e9-4cca-8b3f-c6f793d138df-whisker-backend-key-pair\") pod \"whisker-6c4b8cc7bb-thjn9\" (UID: \"6c14e16f-66e9-4cca-8b3f-c6f793d138df\") " pod="calico-system/whisker-6c4b8cc7bb-thjn9" Aug 19 08:16:36.715483 kubelet[2755]: I0819 08:16:36.715356 2755 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c14e16f-66e9-4cca-8b3f-c6f793d138df-whisker-ca-bundle\") pod \"whisker-6c4b8cc7bb-thjn9\" (UID: \"6c14e16f-66e9-4cca-8b3f-c6f793d138df\") " pod="calico-system/whisker-6c4b8cc7bb-thjn9" Aug 19 08:16:37.010304 containerd[1576]: time="2025-08-19T08:16:37.010082979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4b8cc7bb-thjn9,Uid:6c14e16f-66e9-4cca-8b3f-c6f793d138df,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:37.162009 systemd-networkd[1448]: cali73411d92cb6: Link UP Aug 19 08:16:37.163660 systemd-networkd[1448]: cali73411d92cb6: Gained carrier Aug 19 08:16:37.185998 containerd[1576]: 2025-08-19 08:16:37.049 [INFO][3791] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 19 08:16:37.185998 containerd[1576]: 2025-08-19 08:16:37.063 [INFO][3791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0 whisker-6c4b8cc7bb- calico-system 6c14e16f-66e9-4cca-8b3f-c6f793d138df 877 0 2025-08-19 08:16:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c4b8cc7bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal whisker-6c4b8cc7bb-thjn9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali73411d92cb6 [] [] }} ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Namespace="calico-system" Pod="whisker-6c4b8cc7bb-thjn9" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-" Aug 19 08:16:37.185998 containerd[1576]: 2025-08-19 08:16:37.063 [INFO][3791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Namespace="calico-system" Pod="whisker-6c4b8cc7bb-thjn9" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" Aug 19 08:16:37.185998 containerd[1576]: 2025-08-19 08:16:37.098 [INFO][3802] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" HandleID="k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" 
Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" Aug 19 08:16:37.186370 containerd[1576]: 2025-08-19 08:16:37.098 [INFO][3802] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" HandleID="k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5780), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", "pod":"whisker-6c4b8cc7bb-thjn9", "timestamp":"2025-08-19 08:16:37.098403484 +0000 UTC"}, Hostname:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:16:37.186370 containerd[1576]: 2025-08-19 08:16:37.098 [INFO][3802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:16:37.186370 containerd[1576]: 2025-08-19 08:16:37.098 [INFO][3802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 19 08:16:37.186370 containerd[1576]: 2025-08-19 08:16:37.099 [INFO][3802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' Aug 19 08:16:37.186370 containerd[1576]: 2025-08-19 08:16:37.110 [INFO][3802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.186370 containerd[1576]: 2025-08-19 08:16:37.116 [INFO][3802] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.186370 containerd[1576]: 2025-08-19 08:16:37.122 [INFO][3802] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.186370 containerd[1576]: 2025-08-19 08:16:37.125 [INFO][3802] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.187269 containerd[1576]: 2025-08-19 08:16:37.128 [INFO][3802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.187269 containerd[1576]: 2025-08-19 08:16:37.128 [INFO][3802] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.187269 containerd[1576]: 2025-08-19 08:16:37.130 [INFO][3802] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221 Aug 19 08:16:37.187269 containerd[1576]: 2025-08-19 08:16:37.137 [INFO][3802] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.187269 
containerd[1576]: 2025-08-19 08:16:37.146 [INFO][3802] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.43.65/26] block=192.168.43.64/26 handle="k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.187269 containerd[1576]: 2025-08-19 08:16:37.146 [INFO][3802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.65/26] handle="k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:37.187269 containerd[1576]: 2025-08-19 08:16:37.146 [INFO][3802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 19 08:16:37.187269 containerd[1576]: 2025-08-19 08:16:37.146 [INFO][3802] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.65/26] IPv6=[] ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" HandleID="k8s-pod-network.0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" Aug 19 08:16:37.187641 containerd[1576]: 2025-08-19 08:16:37.150 [INFO][3791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Namespace="calico-system" Pod="whisker-6c4b8cc7bb-thjn9" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0", GenerateName:"whisker-6c4b8cc7bb-", Namespace:"calico-system", SelfLink:"", UID:"6c14e16f-66e9-4cca-8b3f-c6f793d138df", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c4b8cc7bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-6c4b8cc7bb-thjn9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali73411d92cb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:37.187829 containerd[1576]: 2025-08-19 08:16:37.150 [INFO][3791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.65/32] ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Namespace="calico-system" Pod="whisker-6c4b8cc7bb-thjn9" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" Aug 19 08:16:37.187829 containerd[1576]: 2025-08-19 08:16:37.151 [INFO][3791] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to cali73411d92cb6 ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Namespace="calico-system" Pod="whisker-6c4b8cc7bb-thjn9" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" Aug 19 08:16:37.187829 containerd[1576]: 2025-08-19 08:16:37.164 [INFO][3791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Namespace="calico-system" Pod="whisker-6c4b8cc7bb-thjn9" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" Aug 19 08:16:37.188143 containerd[1576]: 2025-08-19 08:16:37.165 [INFO][3791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Namespace="calico-system" Pod="whisker-6c4b8cc7bb-thjn9" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0", GenerateName:"whisker-6c4b8cc7bb-", Namespace:"calico-system", SelfLink:"", UID:"6c14e16f-66e9-4cca-8b3f-c6f793d138df", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c4b8cc7bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221", Pod:"whisker-6c4b8cc7bb-thjn9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali73411d92cb6", MAC:"d2:f4:32:ff:9a:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:37.188272 containerd[1576]: 2025-08-19 08:16:37.182 [INFO][3791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" Namespace="calico-system" Pod="whisker-6c4b8cc7bb-thjn9" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-whisker--6c4b8cc7bb--thjn9-eth0" Aug 19 08:16:37.232781 containerd[1576]: time="2025-08-19T08:16:37.232372858Z" level=info msg="connecting to shim 0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221" address="unix:///run/containerd/s/47f10966ae7521088a8451805eb60cba22fc1af9c0e817ee9cc42537adbe8239" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:37.280903 systemd[1]: Started cri-containerd-0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221.scope - libcontainer 
container 0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221. Aug 19 08:16:37.432214 containerd[1576]: time="2025-08-19T08:16:37.432159436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4b8cc7bb-thjn9,Uid:6c14e16f-66e9-4cca-8b3f-c6f793d138df,Namespace:calico-system,Attempt:0,} returns sandbox id \"0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221\"" Aug 19 08:16:37.437037 containerd[1576]: time="2025-08-19T08:16:37.436986821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 19 08:16:38.298626 systemd-networkd[1448]: vxlan.calico: Link UP Aug 19 08:16:38.299010 systemd-networkd[1448]: vxlan.calico: Gained carrier Aug 19 08:16:38.355231 kubelet[2755]: I0819 08:16:38.355149 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efbfd7b7-201e-404e-a212-64e1a3740554" path="/var/lib/kubelet/pods/efbfd7b7-201e-404e-a212-64e1a3740554/volumes" Aug 19 08:16:38.580777 containerd[1576]: time="2025-08-19T08:16:38.580177086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:38.582956 containerd[1576]: time="2025-08-19T08:16:38.582903070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 19 08:16:38.584562 containerd[1576]: time="2025-08-19T08:16:38.584512146Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:38.589594 containerd[1576]: time="2025-08-19T08:16:38.589535218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:38.592063 containerd[1576]: time="2025-08-19T08:16:38.592008914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.154964183s" Aug 19 08:16:38.592168 containerd[1576]: time="2025-08-19T08:16:38.592064049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 19 08:16:38.596989 containerd[1576]: time="2025-08-19T08:16:38.596911685Z" level=info msg="CreateContainer within sandbox \"0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 19 08:16:38.610833 containerd[1576]: time="2025-08-19T08:16:38.610780734Z" level=info msg="Container b233f944f66398ee7fc276ea5b6ea95f748caa88c20e98c281c83ad3bfe033c0: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:38.636675 containerd[1576]: time="2025-08-19T08:16:38.636604469Z" level=info msg="CreateContainer within sandbox \"0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b233f944f66398ee7fc276ea5b6ea95f748caa88c20e98c281c83ad3bfe033c0\"" Aug 19 08:16:38.639527 containerd[1576]: time="2025-08-19T08:16:38.639387526Z" level=info msg="StartContainer for 
\"b233f944f66398ee7fc276ea5b6ea95f748caa88c20e98c281c83ad3bfe033c0\"" Aug 19 08:16:38.641761 containerd[1576]: time="2025-08-19T08:16:38.641702695Z" level=info msg="connecting to shim b233f944f66398ee7fc276ea5b6ea95f748caa88c20e98c281c83ad3bfe033c0" address="unix:///run/containerd/s/47f10966ae7521088a8451805eb60cba22fc1af9c0e817ee9cc42537adbe8239" protocol=ttrpc version=3 Aug 19 08:16:38.692013 systemd[1]: Started cri-containerd-b233f944f66398ee7fc276ea5b6ea95f748caa88c20e98c281c83ad3bfe033c0.scope - libcontainer container b233f944f66398ee7fc276ea5b6ea95f748caa88c20e98c281c83ad3bfe033c0. Aug 19 08:16:38.790010 containerd[1576]: time="2025-08-19T08:16:38.789927052Z" level=info msg="StartContainer for \"b233f944f66398ee7fc276ea5b6ea95f748caa88c20e98c281c83ad3bfe033c0\" returns successfully" Aug 19 08:16:38.793794 containerd[1576]: time="2025-08-19T08:16:38.792568818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 19 08:16:38.923555 kubelet[2755]: I0819 08:16:38.922373 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:16:39.065720 containerd[1576]: time="2025-08-19T08:16:39.065635031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c\" id:\"f1f3b6de7740d19d70257e46207aedb70df71938e0c0524467a9ea1a1dec3e62\" pid:4101 exit_status:1 exited_at:{seconds:1755591399 nanos:63862713}" Aug 19 08:16:39.164559 systemd-networkd[1448]: cali73411d92cb6: Gained IPv6LL Aug 19 08:16:39.184931 containerd[1576]: time="2025-08-19T08:16:39.184300683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c\" id:\"afa748b10d09746ecaf28cc6fdc9291765917393e030a85190891c7b07c59242\" pid:4127 exit_status:1 exited_at:{seconds:1755591399 nanos:183928946}" Aug 19 08:16:39.612016 systemd-networkd[1448]: vxlan.calico: Gained IPv6LL Aug 19 08:16:40.342146 containerd[1576]: time="2025-08-19T08:16:40.342078811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b5f88759-fhjwh,Uid:c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a,Namespace:calico-apiserver,Attempt:0,}" Aug 19 08:16:40.343569 containerd[1576]: time="2025-08-19T08:16:40.343532086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-99mdx,Uid:3005f44f-eabf-4645-adf8-5e4625b7b0eb,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:40.345113 containerd[1576]: time="2025-08-19T08:16:40.345076102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b5f88759-wqrqx,Uid:6f94de95-4727-4949-9336-2ec5fea08c40,Namespace:calico-apiserver,Attempt:0,}" Aug 19 08:16:40.790985 systemd-networkd[1448]: cali48d23270930: Link UP Aug 19 08:16:40.792318 systemd-networkd[1448]: cali48d23270930: Gained carrier Aug 19 08:16:40.838261 containerd[1576]: 2025-08-19 08:16:40.542 [INFO][4150] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0 calico-apiserver-84b5f88759- calico-apiserver c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a 804 0 2025-08-19 08:16:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b5f88759 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal calico-apiserver-84b5f88759-fhjwh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali48d23270930 [] [] }} ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-fhjwh" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-" Aug 19 08:16:40.838261 containerd[1576]: 2025-08-19 08:16:40.542 [INFO][4150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-fhjwh" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" Aug 19 08:16:40.838261 containerd[1576]: 2025-08-19 08:16:40.683 [INFO][4188] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" HandleID="k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" Aug 19 08:16:40.839126 containerd[1576]: 2025-08-19 08:16:40.683 [INFO][4188] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" HandleID="k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ef600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", "pod":"calico-apiserver-84b5f88759-fhjwh", "timestamp":"2025-08-19 08:16:40.683003821 +0000 UTC"}, Hostname:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:16:40.839126 containerd[1576]: 2025-08-19 08:16:40.683 [INFO][4188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:16:40.839126 containerd[1576]: 2025-08-19 08:16:40.683 [INFO][4188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:16:40.839126 containerd[1576]: 2025-08-19 08:16:40.683 [INFO][4188] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' Aug 19 08:16:40.839126 containerd[1576]: 2025-08-19 08:16:40.706 [INFO][4188] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839126 containerd[1576]: 2025-08-19 08:16:40.714 [INFO][4188] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839126 containerd[1576]: 2025-08-19 08:16:40.724 [INFO][4188] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839126 containerd[1576]: 2025-08-19 08:16:40.729 [INFO][4188] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839675 containerd[1576]: 2025-08-19 08:16:40.735 [INFO][4188] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839675 containerd[1576]: 2025-08-19 08:16:40.735 [INFO][4188] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839675 containerd[1576]: 2025-08-19 08:16:40.739 [INFO][4188] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d Aug 19 08:16:40.839675 containerd[1576]: 2025-08-19 08:16:40.749 [INFO][4188] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839675 containerd[1576]: 2025-08-19 08:16:40.766 [INFO][4188] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.43.66/26] block=192.168.43.64/26 handle="k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839675 containerd[1576]: 2025-08-19 08:16:40.767 [INFO][4188] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.66/26] handle="k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:40.839675 containerd[1576]: 2025-08-19 08:16:40.767 [INFO][4188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:16:40.839675 containerd[1576]: 2025-08-19 08:16:40.768 [INFO][4188] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.66/26] IPv6=[] ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" HandleID="k8s-pod-network.bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" Aug 19 08:16:40.840685 containerd[1576]: 2025-08-19 08:16:40.774 [INFO][4150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-fhjwh" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0", GenerateName:"calico-apiserver-84b5f88759-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b5f88759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-84b5f88759-fhjwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48d23270930", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:40.841129 containerd[1576]: 2025-08-19 08:16:40.774 [INFO][4150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.66/32] ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-fhjwh" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" Aug 19 08:16:40.841129 containerd[1576]: 2025-08-19 08:16:40.774 [INFO][4150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48d23270930 ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-fhjwh" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" Aug 19 08:16:40.841129 containerd[1576]: 2025-08-19 08:16:40.791 [INFO][4150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" 
Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-fhjwh" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" Aug 19 08:16:40.841520 containerd[1576]: 2025-08-19 08:16:40.791 [INFO][4150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-fhjwh" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0", GenerateName:"calico-apiserver-84b5f88759-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b5f88759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d", Pod:"calico-apiserver-84b5f88759-fhjwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48d23270930", MAC:"6a:a0:69:ca:1e:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:40.841520 containerd[1576]: 2025-08-19 08:16:40.829 [INFO][4150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-fhjwh" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--fhjwh-eth0" Aug 19 08:16:40.916487 containerd[1576]: time="2025-08-19T08:16:40.916404342Z" level=info msg="connecting to shim bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d" address="unix:///run/containerd/s/06bc87f09d178f50b97bbb49282e59b95536f8474d203beda7e851667701f05d" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:40.959053 systemd-networkd[1448]: calid06a4ebcabd: Link UP Aug 19 08:16:40.961461 systemd-networkd[1448]: calid06a4ebcabd: Gained carrier Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.534 [INFO][4167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0 coredns-7c65d6cfc9- kube-system 3005f44f-eabf-4645-adf8-5e4625b7b0eb 
802 0 2025-08-19 08:15:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal coredns-7c65d6cfc9-99mdx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid06a4ebcabd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-99mdx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.536 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-99mdx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.684 [INFO][4186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" HandleID="k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.684 [INFO][4186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" HandleID="k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000646890), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", "pod":"coredns-7c65d6cfc9-99mdx", "timestamp":"2025-08-19 08:16:40.684566239 +0000 UTC"}, Hostname:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.685 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.767 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.767 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.813 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.832 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.860 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.866 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.873 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.875 [INFO][4186] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.880 [INFO][4186] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.911 [INFO][4186] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.932 [INFO][4186] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.43.67/26] block=192.168.43.64/26 handle="k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.932 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.67/26] handle="k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.932 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:16:41.021697 containerd[1576]: 2025-08-19 08:16:40.932 [INFO][4186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.67/26] IPv6=[] ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" HandleID="k8s-pod-network.cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" Aug 19 08:16:41.024272 containerd[1576]: 2025-08-19 08:16:40.951 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-99mdx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3005f44f-eabf-4645-adf8-5e4625b7b0eb", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7c65d6cfc9-99mdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid06a4ebcabd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:41.024272 containerd[1576]: 2025-08-19 08:16:40.953 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.67/32] ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-99mdx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" Aug 19 08:16:41.024272 containerd[1576]: 2025-08-19 08:16:40.953 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid06a4ebcabd ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-99mdx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" Aug 19 08:16:41.024272 containerd[1576]: 2025-08-19 
08:16:40.960 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-99mdx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" Aug 19 08:16:41.024272 containerd[1576]: 2025-08-19 08:16:40.965 [INFO][4167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-99mdx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3005f44f-eabf-4645-adf8-5e4625b7b0eb", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd", Pod:"coredns-7c65d6cfc9-99mdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid06a4ebcabd", MAC:"5a:17:13:25:fc:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:41.024272 containerd[1576]: 2025-08-19 08:16:41.003 [INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" Namespace="kube-system" Pod="coredns-7c65d6cfc9-99mdx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--99mdx-eth0" Aug 19 08:16:41.064491 systemd[1]: Started cri-containerd-bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d.scope - libcontainer container bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d. 
Aug 19 08:16:41.143304 systemd-networkd[1448]: califb5ada3b747: Link UP Aug 19 08:16:41.150603 systemd-networkd[1448]: califb5ada3b747: Gained carrier Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:40.552 [INFO][4156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0 calico-apiserver-84b5f88759- calico-apiserver 6f94de95-4727-4949-9336-2ec5fea08c40 803 0 2025-08-19 08:16:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b5f88759 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal calico-apiserver-84b5f88759-wqrqx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb5ada3b747 [] [] }} ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-wqrqx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:40.554 [INFO][4156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-wqrqx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:40.691 [INFO][4196] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" HandleID="k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:40.691 [INFO][4196] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" HandleID="k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ec0f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", "pod":"calico-apiserver-84b5f88759-wqrqx", "timestamp":"2025-08-19 08:16:40.691026198 +0000 UTC"}, Hostname:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:40.691 [INFO][4196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:40.933 [INFO][4196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:40.933 [INFO][4196] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:40.987 [INFO][4196] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.026 [INFO][4196] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.049 [INFO][4196] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.058 [INFO][4196] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.066 [INFO][4196] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.069 [INFO][4196] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.074 [INFO][4196] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1 Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.086 [INFO][4196] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.108 [INFO][4196] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.43.68/26] block=192.168.43.64/26 handle="k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.108 [INFO][4196] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.68/26] handle="k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.108 [INFO][4196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:16:41.210908 containerd[1576]: 2025-08-19 08:16:41.109 [INFO][4196] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.68/26] IPv6=[] ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" HandleID="k8s-pod-network.689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" Aug 19 08:16:41.213628 containerd[1576]: 2025-08-19 08:16:41.116 [INFO][4156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-wqrqx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0", GenerateName:"calico-apiserver-84b5f88759-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f94de95-4727-4949-9336-2ec5fea08c40", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b5f88759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-84b5f88759-wqrqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb5ada3b747", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:41.213628 containerd[1576]: 2025-08-19 08:16:41.116 [INFO][4156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.68/32] ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-wqrqx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" Aug 19 08:16:41.213628 containerd[1576]: 2025-08-19 08:16:41.116 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb5ada3b747 ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-wqrqx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" Aug 19 08:16:41.213628 containerd[1576]: 2025-08-19 08:16:41.150 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" 
Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-wqrqx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" Aug 19 08:16:41.213628 containerd[1576]: 2025-08-19 08:16:41.169 [INFO][4156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-wqrqx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0", GenerateName:"calico-apiserver-84b5f88759-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f94de95-4727-4949-9336-2ec5fea08c40", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b5f88759", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1", Pod:"calico-apiserver-84b5f88759-wqrqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb5ada3b747", MAC:"2e:47:9c:11:02:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:41.213628 containerd[1576]: 2025-08-19 08:16:41.198 [INFO][4156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" Namespace="calico-apiserver" Pod="calico-apiserver-84b5f88759-wqrqx" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--apiserver--84b5f88759--wqrqx-eth0" Aug 19 08:16:41.261097 containerd[1576]: time="2025-08-19T08:16:41.260938478Z" level=info msg="connecting to shim cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd" address="unix:///run/containerd/s/5d49fa46e8ca6a67661b32c145cfad530921d223d010909bdca46868fc33251d" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:41.348672 containerd[1576]: time="2025-08-19T08:16:41.348142538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-r4ttv,Uid:7478dd9c-2565-4ce7-bce4-6692081159c8,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:41.353101 containerd[1576]: time="2025-08-19T08:16:41.353056584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fl6vv,Uid:a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:41.394249 
containerd[1576]: time="2025-08-19T08:16:41.394168187Z" level=info msg="connecting to shim 689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1" address="unix:///run/containerd/s/22b3ff237f215093f681355bdd1cfc6ea2abbea6b64b22a4600024466d84c79e" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:41.413075 containerd[1576]: time="2025-08-19T08:16:41.413015224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b5f88759-fhjwh,Uid:c5bc8e1a-e7cd-498c-8d99-f5b7268e3e8a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d\"" Aug 19 08:16:41.446058 systemd[1]: Started cri-containerd-cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd.scope - libcontainer container cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd. Aug 19 08:16:41.547007 systemd[1]: Started cri-containerd-689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1.scope - libcontainer container 689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1. Aug 19 08:16:41.607494 containerd[1576]: time="2025-08-19T08:16:41.606857273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-99mdx,Uid:3005f44f-eabf-4645-adf8-5e4625b7b0eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd\"" Aug 19 08:16:41.620533 containerd[1576]: time="2025-08-19T08:16:41.620267297Z" level=info msg="CreateContainer within sandbox \"cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:16:41.641989 containerd[1576]: time="2025-08-19T08:16:41.641858021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:41.651562 containerd[1576]: time="2025-08-19T08:16:41.651469455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 19 08:16:41.670676 containerd[1576]: time="2025-08-19T08:16:41.670590795Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:41.685102 containerd[1576]: time="2025-08-19T08:16:41.685035949Z" level=info msg="Container f2f29adf3e6363644486b7b39c4f67ddf7d0f0ce6d7396144afe1a03e72f57a2: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:41.686468 containerd[1576]: time="2025-08-19T08:16:41.686305122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:41.701186 containerd[1576]: time="2025-08-19T08:16:41.701043829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.906997583s" Aug 19 08:16:41.701558 containerd[1576]: time="2025-08-19T08:16:41.701225010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference 
\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 19 08:16:41.707429 containerd[1576]: time="2025-08-19T08:16:41.707373798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 08:16:41.712480 containerd[1576]: time="2025-08-19T08:16:41.712381903Z" level=info msg="CreateContainer within sandbox \"0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 19 08:16:41.728835 containerd[1576]: time="2025-08-19T08:16:41.728765944Z" level=info msg="CreateContainer within sandbox \"cbf552797b4ad57327f73f722b7145b57c79dc579a6b68cd0b17e5e5e16535cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2f29adf3e6363644486b7b39c4f67ddf7d0f0ce6d7396144afe1a03e72f57a2\"" Aug 19 08:16:41.733978 containerd[1576]: time="2025-08-19T08:16:41.733092106Z" level=info msg="StartContainer for \"f2f29adf3e6363644486b7b39c4f67ddf7d0f0ce6d7396144afe1a03e72f57a2\"" Aug 19 08:16:41.741572 containerd[1576]: time="2025-08-19T08:16:41.741508245Z" level=info msg="connecting to shim f2f29adf3e6363644486b7b39c4f67ddf7d0f0ce6d7396144afe1a03e72f57a2" address="unix:///run/containerd/s/5d49fa46e8ca6a67661b32c145cfad530921d223d010909bdca46868fc33251d" protocol=ttrpc version=3 Aug 19 08:16:41.746364 containerd[1576]: time="2025-08-19T08:16:41.746309935Z" level=info msg="Container fd6f121a30b97c0dc35199e85b5daffdf8d671cad32e3e46871aeb8e0257e00b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:41.779443 containerd[1576]: time="2025-08-19T08:16:41.779370200Z" level=info msg="CreateContainer within sandbox \"0327bb66f42b4fd23e8172a27e91235c10b07f1abd189f9707ed0ccbd51ef221\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fd6f121a30b97c0dc35199e85b5daffdf8d671cad32e3e46871aeb8e0257e00b\"" Aug 19 08:16:41.784047 containerd[1576]: time="2025-08-19T08:16:41.784001870Z" level=info msg="StartContainer for \"fd6f121a30b97c0dc35199e85b5daffdf8d671cad32e3e46871aeb8e0257e00b\"" Aug 19 08:16:41.793830 containerd[1576]: time="2025-08-19T08:16:41.793703307Z" level=info msg="connecting to shim fd6f121a30b97c0dc35199e85b5daffdf8d671cad32e3e46871aeb8e0257e00b" address="unix:///run/containerd/s/47f10966ae7521088a8451805eb60cba22fc1af9c0e817ee9cc42537adbe8239" protocol=ttrpc version=3 Aug 19 08:16:41.797616 containerd[1576]: time="2025-08-19T08:16:41.797563077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b5f88759-wqrqx,Uid:6f94de95-4727-4949-9336-2ec5fea08c40,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1\"" Aug 19 08:16:41.817995 systemd[1]: Started cri-containerd-f2f29adf3e6363644486b7b39c4f67ddf7d0f0ce6d7396144afe1a03e72f57a2.scope - libcontainer container f2f29adf3e6363644486b7b39c4f67ddf7d0f0ce6d7396144afe1a03e72f57a2. Aug 19 08:16:41.884376 systemd[1]: Started cri-containerd-fd6f121a30b97c0dc35199e85b5daffdf8d671cad32e3e46871aeb8e0257e00b.scope - libcontainer container fd6f121a30b97c0dc35199e85b5daffdf8d671cad32e3e46871aeb8e0257e00b. Aug 19 08:16:41.912625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410382890.mount: Deactivated successfully. 
Aug 19 08:16:42.006055 containerd[1576]: time="2025-08-19T08:16:42.005868454Z" level=info msg="StartContainer for \"f2f29adf3e6363644486b7b39c4f67ddf7d0f0ce6d7396144afe1a03e72f57a2\" returns successfully" Aug 19 08:16:42.015139 systemd-networkd[1448]: califb6217bf422: Link UP Aug 19 08:16:42.017791 systemd-networkd[1448]: califb6217bf422: Gained carrier Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.612 [INFO][4311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0 csi-node-driver- calico-system a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0 697 0 2025-08-19 08:16:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal csi-node-driver-fl6vv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califb6217bf422 [] [] }} ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Namespace="calico-system" Pod="csi-node-driver-fl6vv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.612 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Namespace="calico-system" Pod="csi-node-driver-fl6vv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.810 [INFO][4393] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" HandleID="k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.817 [INFO][4393] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" HandleID="k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332360), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", "pod":"csi-node-driver-fl6vv", "timestamp":"2025-08-19 08:16:41.809697636 +0000 UTC"}, Hostname:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.821 [INFO][4393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.821 [INFO][4393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.821 [INFO][4393] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.855 [INFO][4393] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.883 [INFO][4393] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.912 [INFO][4393] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.932 [INFO][4393] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.958 [INFO][4393] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.959 [INFO][4393] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.963 [INFO][4393] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849 Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.972 [INFO][4393] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.992 [INFO][4393] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.43.69/26] block=192.168.43.64/26 handle="k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.993 [INFO][4393] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.69/26] handle="k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.993 [INFO][4393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:16:42.053980 containerd[1576]: 2025-08-19 08:16:41.994 [INFO][4393] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.69/26] IPv6=[] ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" HandleID="k8s-pod-network.2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" Aug 19 08:16:42.057557 containerd[1576]: 2025-08-19 08:16:42.007 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Namespace="calico-system" Pod="csi-node-driver-fl6vv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-fl6vv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb6217bf422", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:42.057557 containerd[1576]: 2025-08-19 08:16:42.007 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.69/32] ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Namespace="calico-system" Pod="csi-node-driver-fl6vv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" Aug 19 08:16:42.057557 containerd[1576]: 2025-08-19 08:16:42.008 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb6217bf422 ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Namespace="calico-system" Pod="csi-node-driver-fl6vv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" Aug 19 08:16:42.057557 containerd[1576]: 2025-08-19 08:16:42.017 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Namespace="calico-system" Pod="csi-node-driver-fl6vv" 
WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" Aug 19 08:16:42.057557 containerd[1576]: 2025-08-19 08:16:42.018 [INFO][4311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Namespace="calico-system" Pod="csi-node-driver-fl6vv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849", Pod:"csi-node-driver-fl6vv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb6217bf422", MAC:"a2:84:bd:db:75:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:42.057557 containerd[1576]: 2025-08-19 08:16:42.043 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" Namespace="calico-system" Pod="csi-node-driver-fl6vv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-csi--node--driver--fl6vv-eth0" Aug 19 08:16:42.114256 containerd[1576]: time="2025-08-19T08:16:42.114094837Z" level=info msg="connecting to shim 2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849" address="unix:///run/containerd/s/4959a4147ec6aefeff83bfd0f87e0e2bfea274b3565e9842687b59f1e2bc6b84" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:42.197513 systemd-networkd[1448]: cali19ca1c6439c: Link UP Aug 19 08:16:42.206264 systemd[1]: Started cri-containerd-2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849.scope - libcontainer container 2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849. 
Aug 19 08:16:42.207707 systemd-networkd[1448]: cali19ca1c6439c: Gained carrier Aug 19 08:16:42.236940 systemd-networkd[1448]: califb5ada3b747: Gained IPv6LL Aug 19 08:16:42.237345 systemd-networkd[1448]: cali48d23270930: Gained IPv6LL Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:41.672 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0 goldmane-58fd7646b9- calico-system 7478dd9c-2565-4ce7-bce4-6692081159c8 801 0 2025-08-19 08:16:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal goldmane-58fd7646b9-r4ttv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali19ca1c6439c [] [] }} ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Namespace="calico-system" Pod="goldmane-58fd7646b9-r4ttv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:41.675 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Namespace="calico-system" Pod="goldmane-58fd7646b9-r4ttv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:41.962 [INFO][4400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" HandleID="k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:41.962 [INFO][4400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" HandleID="k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375430), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", "pod":"goldmane-58fd7646b9-r4ttv", "timestamp":"2025-08-19 08:16:41.962208477 +0000 UTC"}, Hostname:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:41.962 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:41.993 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:41.995 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.030 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.063 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.076 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.081 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.094 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.097 [INFO][4400] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.103 [INFO][4400] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.130 [INFO][4400] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.176 [INFO][4400] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.43.70/26] block=192.168.43.64/26 handle="k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.176 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.70/26] handle="k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.176 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 19 08:16:42.246964 containerd[1576]: 2025-08-19 08:16:42.176 [INFO][4400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.70/26] IPv6=[] ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" HandleID="k8s-pod-network.6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" Aug 19 08:16:42.252247 containerd[1576]: 2025-08-19 08:16:42.188 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Namespace="calico-system" Pod="goldmane-58fd7646b9-r4ttv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7478dd9c-2565-4ce7-bce4-6692081159c8", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-58fd7646b9-r4ttv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali19ca1c6439c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:42.252247 containerd[1576]: 2025-08-19 08:16:42.189 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.70/32] ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Namespace="calico-system" Pod="goldmane-58fd7646b9-r4ttv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" Aug 19 08:16:42.252247 containerd[1576]: 2025-08-19 08:16:42.190 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19ca1c6439c ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Namespace="calico-system" Pod="goldmane-58fd7646b9-r4ttv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" Aug 19 08:16:42.252247 containerd[1576]: 2025-08-19 08:16:42.198 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Namespace="calico-system" Pod="goldmane-58fd7646b9-r4ttv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" Aug 19 
08:16:42.252247 containerd[1576]: 2025-08-19 08:16:42.207 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Namespace="calico-system" Pod="goldmane-58fd7646b9-r4ttv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"7478dd9c-2565-4ce7-bce4-6692081159c8", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c", Pod:"goldmane-58fd7646b9-r4ttv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali19ca1c6439c", MAC:"f2:59:b3:47:fc:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:42.252247 containerd[1576]: 2025-08-19 08:16:42.239 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" Namespace="calico-system" Pod="goldmane-58fd7646b9-r4ttv" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-goldmane--58fd7646b9--r4ttv-eth0" Aug 19 08:16:42.340323 containerd[1576]: time="2025-08-19T08:16:42.340138583Z" level=info msg="connecting to shim 6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c" address="unix:///run/containerd/s/0311103a4ab164ac5539c9c9b2e56c2be35d26a4d728d0745732484bf2703719" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:42.364476 systemd-networkd[1448]: calid06a4ebcabd: Gained IPv6LL Aug 19 08:16:42.382018 containerd[1576]: time="2025-08-19T08:16:42.381962576Z" level=info msg="StartContainer for \"fd6f121a30b97c0dc35199e85b5daffdf8d671cad32e3e46871aeb8e0257e00b\" returns successfully" Aug 19 08:16:42.468356 systemd[1]: Started cri-containerd-6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c.scope - libcontainer container 6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c. 
Aug 19 08:16:42.504792 containerd[1576]: time="2025-08-19T08:16:42.501832144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fl6vv,Uid:a4a2ee0c-c5e9-4e55-a62c-f137f85a33a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849\"" Aug 19 08:16:42.753903 kubelet[2755]: I0819 08:16:42.753669 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c4b8cc7bb-thjn9" podStartSLOduration=2.484403346 podStartE2EDuration="6.753639883s" podCreationTimestamp="2025-08-19 08:16:36 +0000 UTC" firstStartedPulling="2025-08-19 08:16:37.43566037 +0000 UTC m=+45.307508312" lastFinishedPulling="2025-08-19 08:16:41.704896884 +0000 UTC m=+49.576744849" observedRunningTime="2025-08-19 08:16:42.753211394 +0000 UTC m=+50.625059376" watchObservedRunningTime="2025-08-19 08:16:42.753639883 +0000 UTC m=+50.625487891" Aug 19 08:16:42.793234 kubelet[2755]: I0819 08:16:42.793137 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-99mdx" podStartSLOduration=43.793103935 podStartE2EDuration="43.793103935s" podCreationTimestamp="2025-08-19 08:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:42.78893948 +0000 UTC m=+50.660787451" watchObservedRunningTime="2025-08-19 08:16:42.793103935 +0000 UTC m=+50.664951902" Aug 19 08:16:42.987369 containerd[1576]: time="2025-08-19T08:16:42.987260905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-r4ttv,Uid:7478dd9c-2565-4ce7-bce4-6692081159c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c\"" Aug 19 08:16:43.324033 systemd-networkd[1448]: califb6217bf422: Gained IPv6LL Aug 19 08:16:43.343815 containerd[1576]: time="2025-08-19T08:16:43.343493033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rz9fq,Uid:2664659c-350b-4ef1-b3a4-6af0b6794bcf,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:43.345006 containerd[1576]: time="2025-08-19T08:16:43.343676999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b956ccb5d-k9v2m,Uid:291128e7-663d-4ca0-8c29-b03f9e74e30e,Namespace:calico-system,Attempt:0,}" Aug 19 08:16:43.644636 systemd-networkd[1448]: cali19ca1c6439c: Gained IPv6LL Aug 19 08:16:43.799580 systemd-networkd[1448]: calie030d687278: Link UP Aug 19 08:16:43.803981 systemd-networkd[1448]: calie030d687278: Gained carrier Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.595 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0 calico-kube-controllers-b956ccb5d- calico-system 291128e7-663d-4ca0-8c29-b03f9e74e30e 800 0 2025-08-19 08:16:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b956ccb5d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal calico-kube-controllers-b956ccb5d-k9v2m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie030d687278 [] [] }} 
ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Namespace="calico-system" Pod="calico-kube-controllers-b956ccb5d-k9v2m" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.595 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Namespace="calico-system" Pod="calico-kube-controllers-b956ccb5d-k9v2m" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.682 [INFO][4627] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" HandleID="k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.682 [INFO][4627] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" HandleID="k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", "pod":"calico-kube-controllers-b956ccb5d-k9v2m", "timestamp":"2025-08-19 08:16:43.682243092 +0000 UTC"}, Hostname:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.683 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.683 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.683 [INFO][4627] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.697 [INFO][4627] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.709 [INFO][4627] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.724 [INFO][4627] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.729 [INFO][4627] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.735 [INFO][4627] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.736 [INFO][4627] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.739 [INFO][4627] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637 Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.754 [INFO][4627] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.771 [INFO][4627] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.43.71/26] block=192.168.43.64/26 handle="k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.772 [INFO][4627] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.71/26] handle="k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.772 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
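The IPAM trace just above walks one fixed sequence per sandbox: acquire the host-wide lock, look up the host's block affinities, load the affined block 192.168.43.64/26, claim the next free address, write the block back, release the lock. The sketch below is a minimal illustration of that flow; the types and helper names are invented for the example and are not Calico's actual API.

```go
// Illustrative sketch of the block-affinity IPAM flow visible in the trace
// above. Types and helpers here are hypothetical, not Calico's real code.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr      netip.Prefix          // e.g. 192.168.43.64/26, affined to this node
	allocated map[netip.Addr]string // address -> IPAM handle
}

var hostIPAMLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// assignOne mirrors the logged steps: with the lock held, walk the affined
// block, pick the next free address, and record the handle against it
// ("Writing block in order to claim IPs").
func assignOne(b *block, handle string) (netip.Addr, error) {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.43.64/26"),
		allocated: map[netip.Addr]string{},
	}
	// Pretend .64 through .70 were already claimed by earlier pods on this node.
	for a, i := netip.MustParseAddr("192.168.43.64"), 0; i < 7; a, i = a.Next(), i+1 {
		b.allocated[a] = "earlier-handle"
	}
	ip, _ := assignOne(b, "k8s-pod-network.65fa0a58e7f1…")
	fmt.Println("assigned:", ip) // assigned: 192.168.43.71, matching the trace
}
```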
Aug 19 08:16:43.852093 containerd[1576]: 2025-08-19 08:16:43.773 [INFO][4627] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.71/26] IPv6=[] ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" HandleID="k8s-pod-network.65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" Aug 19 08:16:43.856995 containerd[1576]: 2025-08-19 08:16:43.782 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Namespace="calico-system" Pod="calico-kube-controllers-b956ccb5d-k9v2m" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0", GenerateName:"calico-kube-controllers-b956ccb5d-", Namespace:"calico-system", SelfLink:"", UID:"291128e7-663d-4ca0-8c29-b03f9e74e30e", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b956ccb5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-b956ccb5d-k9v2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie030d687278", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:43.856995 containerd[1576]: 2025-08-19 08:16:43.784 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.71/32] ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Namespace="calico-system" Pod="calico-kube-controllers-b956ccb5d-k9v2m" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" Aug 19 08:16:43.856995 containerd[1576]: 2025-08-19 08:16:43.784 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie030d687278 ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Namespace="calico-system" Pod="calico-kube-controllers-b956ccb5d-k9v2m" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" Aug 19 08:16:43.856995 containerd[1576]: 2025-08-19 08:16:43.808 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Namespace="calico-system" Pod="calico-kube-controllers-b956ccb5d-k9v2m" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" Aug 19 08:16:43.856995 containerd[1576]: 2025-08-19 08:16:43.821 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Namespace="calico-system" Pod="calico-kube-controllers-b956ccb5d-k9v2m" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0", GenerateName:"calico-kube-controllers-b956ccb5d-", Namespace:"calico-system", SelfLink:"", UID:"291128e7-663d-4ca0-8c29-b03f9e74e30e", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 16, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b956ccb5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637", Pod:"calico-kube-controllers-b956ccb5d-k9v2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie030d687278", MAC:"0e:11:4e:83:51:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:43.856995 containerd[1576]: 2025-08-19 08:16:43.844 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" Namespace="calico-system" Pod="calico-kube-controllers-b956ccb5d-k9v2m" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-calico--kube--controllers--b956ccb5d--k9v2m-eth0" Aug 19 08:16:43.956803 systemd-networkd[1448]: caliaa89f834f40: Link UP Aug 19 08:16:43.957680 containerd[1576]: time="2025-08-19T08:16:43.956918050Z" level=info msg="connecting to shim 65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637" address="unix:///run/containerd/s/52cf9594313fae28e5c2978248276b2c0cbeb548cc59a35828022af89b3bab2c" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:43.961144 systemd-networkd[1448]: caliaa89f834f40: Gained carrier Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.584 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0 coredns-7c65d6cfc9- kube-system 2664659c-350b-4ef1-b3a4-6af0b6794bcf 799 0 2025-08-19 08:15:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal coredns-7c65d6cfc9-rz9fq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa89f834f40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rz9fq" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.587 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rz9fq" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.766 [INFO][4625] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" HandleID="k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.767 [INFO][4625] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" HandleID="k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000279520), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", "pod":"coredns-7c65d6cfc9-rz9fq", "timestamp":"2025-08-19 08:16:43.766044734 +0000 UTC"}, Hostname:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.768 [INFO][4625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.772 [INFO][4625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.773 [INFO][4625] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal' Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.798 [INFO][4625] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.814 [INFO][4625] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.831 [INFO][4625] ipam/ipam.go 511: Trying affinity for 192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.837 [INFO][4625] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.853 [INFO][4625] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.853 [INFO][4625] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.868 [INFO][4625] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.883 [INFO][4625] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.905 [INFO][4625] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.43.72/26] block=192.168.43.64/26 handle="k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.905 [INFO][4625] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.72/26] handle="k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" host="ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal" Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.905 [INFO][4625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
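The coredns sandbox goes through the identical sequence and receives the next address from the same affined block (.72 right after .71), since both pods are scheduled on this node. As a quick sanity check on the trace, the snippet below confirms both logged assignments sit inside 192.168.43.64/26, which spans 64 addresses.

```go
// Consistency check on the trace above: both assigned addresses fall inside
// the block affined to this node, which covers 2^(32-26) = 64 addresses.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.43.64/26")
	size := 1 << (32 - block.Bits()) // 64 addresses, .64 through .127

	for _, s := range []string{"192.168.43.71", "192.168.43.72"} {
		a := netip.MustParseAddr(s)
		fmt.Printf("%s in %s (%d addrs): %v\n", a, block, size, block.Contains(a))
	}
}
```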
Aug 19 08:16:44.006063 containerd[1576]: 2025-08-19 08:16:43.909 [INFO][4625] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.72/26] IPv6=[] ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" HandleID="k8s-pod-network.25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Workload="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" Aug 19 08:16:44.009042 containerd[1576]: 2025-08-19 08:16:43.941 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rz9fq" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2664659c-350b-4ef1-b3a4-6af0b6794bcf", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7c65d6cfc9-rz9fq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa89f834f40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:44.009042 containerd[1576]: 2025-08-19 08:16:43.943 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.72/32] ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rz9fq" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" Aug 19 08:16:44.009042 containerd[1576]: 2025-08-19 08:16:43.944 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa89f834f40 ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rz9fq" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" Aug 19 08:16:44.009042 containerd[1576]: 2025-08-19 
08:16:43.963 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rz9fq" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" Aug 19 08:16:44.009042 containerd[1576]: 2025-08-19 08:16:43.964 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rz9fq" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2664659c-350b-4ef1-b3a4-6af0b6794bcf", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.August, 19, 8, 15, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426-0-0-c4b8163e0eecf4b62079.c.flatcar-212911.internal", ContainerID:"25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a", Pod:"coredns-7c65d6cfc9-rz9fq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa89f834f40", MAC:"ca:08:f2:62:33:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 19 08:16:44.009042 containerd[1576]: 2025-08-19 08:16:43.996 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rz9fq" WorkloadEndpoint="ci--4426--0--0--c4b8163e0eecf4b62079.c.flatcar--212911.internal-k8s-coredns--7c65d6cfc9--rz9fq-eth0" Aug 19 08:16:44.131227 containerd[1576]: time="2025-08-19T08:16:44.131154453Z" level=info msg="connecting to shim 25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a" address="unix:///run/containerd/s/a5858324718c415e676b8aba959312fbba84d40e9a982207f5ef1ba286c3e83f" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:44.163548 systemd[1]: Started cri-containerd-65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637.scope - 
libcontainer container 65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637. Aug 19 08:16:44.292160 systemd[1]: Started cri-containerd-25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a.scope - libcontainer container 25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a. Aug 19 08:16:44.475208 containerd[1576]: time="2025-08-19T08:16:44.475099769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rz9fq,Uid:2664659c-350b-4ef1-b3a4-6af0b6794bcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a\"" Aug 19 08:16:44.485983 containerd[1576]: time="2025-08-19T08:16:44.485906557Z" level=info msg="CreateContainer within sandbox \"25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:16:44.514924 containerd[1576]: time="2025-08-19T08:16:44.514846280Z" level=info msg="Container 3f984651cc179651df5d97ab66d2d4968e3f0e34ba26209e3cd0242f99e1061f: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:44.541763 containerd[1576]: time="2025-08-19T08:16:44.541628469Z" level=info msg="CreateContainer within sandbox \"25246c0505ada9eec82f87a3b1d4a2c8966373f19523927e1e02c57a8f0fc67a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f984651cc179651df5d97ab66d2d4968e3f0e34ba26209e3cd0242f99e1061f\"" Aug 19 08:16:44.546033 containerd[1576]: time="2025-08-19T08:16:44.545148498Z" level=info msg="StartContainer for \"3f984651cc179651df5d97ab66d2d4968e3f0e34ba26209e3cd0242f99e1061f\"" Aug 19 08:16:44.553886 containerd[1576]: time="2025-08-19T08:16:44.553767376Z" level=info msg="connecting to shim 3f984651cc179651df5d97ab66d2d4968e3f0e34ba26209e3cd0242f99e1061f" address="unix:///run/containerd/s/a5858324718c415e676b8aba959312fbba84d40e9a982207f5ef1ba286c3e83f" protocol=ttrpc version=3 Aug 19 08:16:44.558539 containerd[1576]: time="2025-08-19T08:16:44.558463523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b956ccb5d-k9v2m,Uid:291128e7-663d-4ca0-8c29-b03f9e74e30e,Namespace:calico-system,Attempt:0,} returns sandbox id \"65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637\"" Aug 19 08:16:44.617478 systemd[1]: Started cri-containerd-3f984651cc179651df5d97ab66d2d4968e3f0e34ba26209e3cd0242f99e1061f.scope - libcontainer container 3f984651cc179651df5d97ab66d2d4968e3f0e34ba26209e3cd0242f99e1061f. 
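The coredns endpoint dump a few entries above prints its WorkloadEndpointPort values in hex (Port:0x35 and Port:0x23c1). Decoding them recovers the familiar CoreDNS ports, as the short snippet below shows.

```go
// The endpoint spec above logs port numbers in hex (Port:0x35, Port:0x23c1);
// decoded, they are the usual CoreDNS DNS and metrics ports.
package main

import "fmt"

func main() {
	ports := map[string]uint16{
		"dns (UDP)":     0x35,   // 53
		"dns-tcp (TCP)": 0x35,   // 53
		"metrics (TCP)": 0x23c1, // 9153
	}
	for name, p := range ports {
		fmt.Printf("%-14s %d\n", name, p)
	}
}
```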
Aug 19 08:16:44.763487 containerd[1576]: time="2025-08-19T08:16:44.763296752Z" level=info msg="StartContainer for \"3f984651cc179651df5d97ab66d2d4968e3f0e34ba26209e3cd0242f99e1061f\" returns successfully" Aug 19 08:16:45.118166 systemd-networkd[1448]: calie030d687278: Gained IPv6LL Aug 19 08:16:45.801995 kubelet[2755]: I0819 08:16:45.801914 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rz9fq" podStartSLOduration=46.801881855 podStartE2EDuration="46.801881855s" podCreationTimestamp="2025-08-19 08:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:45.772068976 +0000 UTC m=+53.643916941" watchObservedRunningTime="2025-08-19 08:16:45.801881855 +0000 UTC m=+53.673729823" Aug 19 08:16:45.820075 systemd-networkd[1448]: caliaa89f834f40: Gained IPv6LL Aug 19 08:16:45.971324 containerd[1576]: time="2025-08-19T08:16:45.971246128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:45.972701 containerd[1576]: time="2025-08-19T08:16:45.972638827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 19 08:16:45.974420 containerd[1576]: time="2025-08-19T08:16:45.974350632Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:45.977437 containerd[1576]: time="2025-08-19T08:16:45.977363169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:45.978803 containerd[1576]: time="2025-08-19T08:16:45.978593286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.271157177s" Aug 19 08:16:45.978803 containerd[1576]: time="2025-08-19T08:16:45.978643233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 19 08:16:45.981167 containerd[1576]: time="2025-08-19T08:16:45.980858695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 19 08:16:45.982349 containerd[1576]: time="2025-08-19T08:16:45.982314514Z" level=info msg="CreateContainer within sandbox \"bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 08:16:45.994961 containerd[1576]: time="2025-08-19T08:16:45.994911413Z" level=info msg="Container ab72f7776336004d4945e855f66c5f0d31c20612796aa85a606f1ff6e7702918: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:46.016190 containerd[1576]: time="2025-08-19T08:16:46.016038759Z" level=info msg="CreateContainer within sandbox \"bd496fee6c882844d67972173105ff45a0982580580dee33f137ee1e0f9a675d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"ab72f7776336004d4945e855f66c5f0d31c20612796aa85a606f1ff6e7702918\"" Aug 19 08:16:46.017018 containerd[1576]: time="2025-08-19T08:16:46.016982135Z" level=info msg="StartContainer for \"ab72f7776336004d4945e855f66c5f0d31c20612796aa85a606f1ff6e7702918\"" Aug 19 08:16:46.020290 containerd[1576]: time="2025-08-19T08:16:46.020161949Z" level=info msg="connecting to shim ab72f7776336004d4945e855f66c5f0d31c20612796aa85a606f1ff6e7702918" address="unix:///run/containerd/s/06bc87f09d178f50b97bbb49282e59b95536f8474d203beda7e851667701f05d" protocol=ttrpc version=3 Aug 19 08:16:46.068015 systemd[1]: Started cri-containerd-ab72f7776336004d4945e855f66c5f0d31c20612796aa85a606f1ff6e7702918.scope - libcontainer container ab72f7776336004d4945e855f66c5f0d31c20612796aa85a606f1ff6e7702918. Aug 19 08:16:46.146761 containerd[1576]: time="2025-08-19T08:16:46.146694105Z" level=info msg="StartContainer for \"ab72f7776336004d4945e855f66c5f0d31c20612796aa85a606f1ff6e7702918\" returns successfully" Aug 19 08:16:46.183284 containerd[1576]: time="2025-08-19T08:16:46.183219036Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:46.185789 containerd[1576]: time="2025-08-19T08:16:46.185274730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 19 08:16:46.188483 containerd[1576]: time="2025-08-19T08:16:46.188429425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 207.52572ms" Aug 19 08:16:46.188483 containerd[1576]: time="2025-08-19T08:16:46.188482015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 19 08:16:46.191553 containerd[1576]: time="2025-08-19T08:16:46.190969565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 19 08:16:46.192115 containerd[1576]: time="2025-08-19T08:16:46.192074198Z" level=info msg="CreateContainer within sandbox \"689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 19 08:16:46.210075 containerd[1576]: time="2025-08-19T08:16:46.210019599Z" level=info msg="Container 5f21032b9883fee12d72db3df06fa7bcc1f5f19f84fa35cd7aba24719e45df7f: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:46.230346 containerd[1576]: time="2025-08-19T08:16:46.230293651Z" level=info msg="CreateContainer within sandbox \"689f54f30656ff442ae2be9f4c06c4d6d768c3f2b80861605e90aef8df1aabe1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f21032b9883fee12d72db3df06fa7bcc1f5f19f84fa35cd7aba24719e45df7f\"" Aug 19 08:16:46.232762 containerd[1576]: time="2025-08-19T08:16:46.231242535Z" level=info msg="StartContainer for \"5f21032b9883fee12d72db3df06fa7bcc1f5f19f84fa35cd7aba24719e45df7f\"" Aug 19 08:16:46.234828 containerd[1576]: time="2025-08-19T08:16:46.234709764Z" level=info msg="connecting to shim 5f21032b9883fee12d72db3df06fa7bcc1f5f19f84fa35cd7aba24719e45df7f" address="unix:///run/containerd/s/22b3ff237f215093f681355bdd1cfc6ea2abbea6b64b22a4600024466d84c79e" protocol=ttrpc 
version=3 Aug 19 08:16:46.283320 systemd[1]: Started cri-containerd-5f21032b9883fee12d72db3df06fa7bcc1f5f19f84fa35cd7aba24719e45df7f.scope - libcontainer container 5f21032b9883fee12d72db3df06fa7bcc1f5f19f84fa35cd7aba24719e45df7f. Aug 19 08:16:46.380793 containerd[1576]: time="2025-08-19T08:16:46.379982904Z" level=info msg="StartContainer for \"5f21032b9883fee12d72db3df06fa7bcc1f5f19f84fa35cd7aba24719e45df7f\" returns successfully" Aug 19 08:16:46.800615 kubelet[2755]: I0819 08:16:46.800446 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84b5f88759-wqrqx" podStartSLOduration=31.414422155 podStartE2EDuration="35.800420321s" podCreationTimestamp="2025-08-19 08:16:11 +0000 UTC" firstStartedPulling="2025-08-19 08:16:41.80385381 +0000 UTC m=+49.675701762" lastFinishedPulling="2025-08-19 08:16:46.189851973 +0000 UTC m=+54.061699928" observedRunningTime="2025-08-19 08:16:46.779731226 +0000 UTC m=+54.651579192" watchObservedRunningTime="2025-08-19 08:16:46.800420321 +0000 UTC m=+54.672268290" Aug 19 08:16:46.803189 kubelet[2755]: I0819 08:16:46.802142 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84b5f88759-fhjwh" podStartSLOduration=31.240461739 podStartE2EDuration="35.802121298s" podCreationTimestamp="2025-08-19 08:16:11 +0000 UTC" firstStartedPulling="2025-08-19 08:16:41.418337742 +0000 UTC m=+49.290185695" lastFinishedPulling="2025-08-19 08:16:45.979997293 +0000 UTC m=+53.851845254" observedRunningTime="2025-08-19 08:16:46.801719369 +0000 UTC m=+54.673567333" watchObservedRunningTime="2025-08-19 08:16:46.802121298 +0000 UTC m=+54.673969255" Aug 19 08:16:47.481555 containerd[1576]: time="2025-08-19T08:16:47.480633510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:47.484891 containerd[1576]: time="2025-08-19T08:16:47.484800022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 19 08:16:47.488084 containerd[1576]: time="2025-08-19T08:16:47.486465759Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:47.491495 containerd[1576]: time="2025-08-19T08:16:47.491449450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:47.494485 containerd[1576]: time="2025-08-19T08:16:47.494306260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.303295015s" Aug 19 08:16:47.494485 containerd[1576]: time="2025-08-19T08:16:47.494360572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 19 08:16:47.497307 containerd[1576]: time="2025-08-19T08:16:47.497127772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 19 08:16:47.503748 containerd[1576]: time="2025-08-19T08:16:47.503670856Z" 
level=info msg="CreateContainer within sandbox \"2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 19 08:16:47.536199 containerd[1576]: time="2025-08-19T08:16:47.532923396Z" level=info msg="Container 1916dc8ec7eafd2047abfdc3508c8de96ab6886a39e4b64c547d0d5aa9b4ef31: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:47.554305 containerd[1576]: time="2025-08-19T08:16:47.554247801Z" level=info msg="CreateContainer within sandbox \"2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1916dc8ec7eafd2047abfdc3508c8de96ab6886a39e4b64c547d0d5aa9b4ef31\"" Aug 19 08:16:47.555586 containerd[1576]: time="2025-08-19T08:16:47.555525278Z" level=info msg="StartContainer for \"1916dc8ec7eafd2047abfdc3508c8de96ab6886a39e4b64c547d0d5aa9b4ef31\"" Aug 19 08:16:47.562873 containerd[1576]: time="2025-08-19T08:16:47.562819882Z" level=info msg="connecting to shim 1916dc8ec7eafd2047abfdc3508c8de96ab6886a39e4b64c547d0d5aa9b4ef31" address="unix:///run/containerd/s/4959a4147ec6aefeff83bfd0f87e0e2bfea274b3565e9842687b59f1e2bc6b84" protocol=ttrpc version=3 Aug 19 08:16:47.628109 systemd[1]: Started cri-containerd-1916dc8ec7eafd2047abfdc3508c8de96ab6886a39e4b64c547d0d5aa9b4ef31.scope - libcontainer container 1916dc8ec7eafd2047abfdc3508c8de96ab6886a39e4b64c547d0d5aa9b4ef31. Aug 19 08:16:47.775400 kubelet[2755]: I0819 08:16:47.775052 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:16:47.775400 kubelet[2755]: I0819 08:16:47.775321 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:16:47.843418 containerd[1576]: time="2025-08-19T08:16:47.843256298Z" level=info msg="StartContainer for \"1916dc8ec7eafd2047abfdc3508c8de96ab6886a39e4b64c547d0d5aa9b4ef31\" returns successfully" Aug 19 08:16:48.607545 ntpd[1496]: Listen normally on 8 vxlan.calico 192.168.43.64:123 Aug 19 08:16:48.608609 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 8 vxlan.calico 192.168.43.64:123 Aug 19 08:16:48.607722 ntpd[1496]: Listen normally on 9 cali73411d92cb6 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 19 08:16:48.612041 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 9 cali73411d92cb6 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 19 08:16:48.612041 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 10 vxlan.calico [fe80::64f4:ddff:fe33:f7e3%5]:123 Aug 19 08:16:48.612041 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 11 cali48d23270930 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 19 08:16:48.611934 ntpd[1496]: Listen normally on 10 vxlan.calico [fe80::64f4:ddff:fe33:f7e3%5]:123 Aug 19 08:16:48.612366 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 12 calid06a4ebcabd [fe80::ecee:eeff:feee:eeee%9]:123 Aug 19 08:16:48.612366 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 13 califb5ada3b747 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 19 08:16:48.612366 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 14 califb6217bf422 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 19 08:16:48.612366 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 15 cali19ca1c6439c [fe80::ecee:eeff:feee:eeee%12]:123 Aug 19 08:16:48.612366 ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 16 calie030d687278 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 19 08:16:48.612025 ntpd[1496]: Listen normally on 11 cali48d23270930 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 19 08:16:48.615139 
ntpd[1496]: 19 Aug 08:16:48 ntpd[1496]: Listen normally on 17 caliaa89f834f40 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 19 08:16:48.612090 ntpd[1496]: Listen normally on 12 calid06a4ebcabd [fe80::ecee:eeff:feee:eeee%9]:123 Aug 19 08:16:48.612151 ntpd[1496]: Listen normally on 13 califb5ada3b747 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 19 08:16:48.612211 ntpd[1496]: Listen normally on 14 califb6217bf422 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 19 08:16:48.612282 ntpd[1496]: Listen normally on 15 cali19ca1c6439c [fe80::ecee:eeff:feee:eeee%12]:123 Aug 19 08:16:48.612344 ntpd[1496]: Listen normally on 16 calie030d687278 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 19 08:16:48.612416 ntpd[1496]: Listen normally on 17 caliaa89f834f40 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 19 08:16:50.369966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490713437.mount: Deactivated successfully. Aug 19 08:16:50.603344 kubelet[2755]: I0819 08:16:50.602952 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:16:51.227145 containerd[1576]: time="2025-08-19T08:16:51.227046106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:51.228537 containerd[1576]: time="2025-08-19T08:16:51.228470596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 19 08:16:51.229984 containerd[1576]: time="2025-08-19T08:16:51.229935882Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:51.233400 containerd[1576]: time="2025-08-19T08:16:51.233327898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:51.234680 containerd[1576]: time="2025-08-19T08:16:51.234467208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.73698409s" Aug 19 08:16:51.234680 containerd[1576]: time="2025-08-19T08:16:51.234519615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 19 08:16:51.236316 containerd[1576]: time="2025-08-19T08:16:51.236282935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 19 08:16:51.238787 containerd[1576]: time="2025-08-19T08:16:51.238665336Z" level=info msg="CreateContainer within sandbox \"6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 19 08:16:51.254770 containerd[1576]: time="2025-08-19T08:16:51.251950424Z" level=info msg="Container 8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:51.273667 containerd[1576]: time="2025-08-19T08:16:51.273562332Z" level=info msg="CreateContainer within sandbox \"6e08dac8f5795fb7ec87070ff4377a0a4d5ec1d321f8812b188dcbe670ec999c\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\"" Aug 19 08:16:51.275148 containerd[1576]: time="2025-08-19T08:16:51.274596935Z" level=info msg="StartContainer for \"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\"" Aug 19 08:16:51.277327 containerd[1576]: time="2025-08-19T08:16:51.277277500Z" level=info msg="connecting to shim 8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e" address="unix:///run/containerd/s/0311103a4ab164ac5539c9c9b2e56c2be35d26a4d728d0745732484bf2703719" protocol=ttrpc version=3 Aug 19 08:16:51.318033 systemd[1]: Started cri-containerd-8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e.scope - libcontainer container 8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e. Aug 19 08:16:51.399345 containerd[1576]: time="2025-08-19T08:16:51.399123449Z" level=info msg="StartContainer for \"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\" returns successfully" Aug 19 08:16:51.833934 kubelet[2755]: I0819 08:16:51.833552 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-r4ttv" podStartSLOduration=26.588705746 podStartE2EDuration="34.83352092s" podCreationTimestamp="2025-08-19 08:16:17 +0000 UTC" firstStartedPulling="2025-08-19 08:16:42.991205679 +0000 UTC m=+50.863053630" lastFinishedPulling="2025-08-19 08:16:51.236020841 +0000 UTC m=+59.107868804" observedRunningTime="2025-08-19 08:16:51.830556827 +0000 UTC m=+59.702404793" watchObservedRunningTime="2025-08-19 08:16:51.83352092 +0000 UTC m=+59.705368900" Aug 19 08:16:51.949006 containerd[1576]: time="2025-08-19T08:16:51.948947268Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\" id:\"0049f828a5d33c4292ab62b62b12cf1dc581595c4e25b14653bbef6049239681\" pid:4976 exit_status:1 exited_at:{seconds:1755591411 nanos:948326872}" Aug 19 08:16:53.029081 containerd[1576]: time="2025-08-19T08:16:53.029020527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\" id:\"36a6f719f17ea7bc879721543db247813dde20fb20a7e9d8ab52414aa897e91e\" pid:5008 exit_status:1 exited_at:{seconds:1755591413 nanos:27451229}" Aug 19 08:16:54.008368 containerd[1576]: time="2025-08-19T08:16:54.008301424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\" id:\"3d91c7c4efd68c704cae03e3e2cab6e0c48f3d07f0a87abc7559777eedfec911\" pid:5032 exit_status:1 exited_at:{seconds:1755591414 nanos:7845685}" Aug 19 08:16:54.033240 containerd[1576]: time="2025-08-19T08:16:54.033088850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:54.037244 containerd[1576]: time="2025-08-19T08:16:54.037003379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 19 08:16:54.040772 containerd[1576]: time="2025-08-19T08:16:54.039654849Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:54.047437 containerd[1576]: time="2025-08-19T08:16:54.047360273Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:54.049207 containerd[1576]: time="2025-08-19T08:16:54.049142548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.812811123s" Aug 19 08:16:54.049207 containerd[1576]: time="2025-08-19T08:16:54.049207990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 19 08:16:54.052964 containerd[1576]: time="2025-08-19T08:16:54.052050296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 19 08:16:54.073332 containerd[1576]: time="2025-08-19T08:16:54.072113483Z" level=info msg="CreateContainer within sandbox \"65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 19 08:16:54.100766 containerd[1576]: time="2025-08-19T08:16:54.100271465Z" level=info msg="Container e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:54.119393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3442498061.mount: Deactivated successfully. Aug 19 08:16:54.126782 containerd[1576]: time="2025-08-19T08:16:54.126605512Z" level=info msg="CreateContainer within sandbox \"65fa0a58e7f1dd84cc879929a18f8d7963ffece5f3ab6f5e8d2ce51baf850637\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3\"" Aug 19 08:16:54.129508 containerd[1576]: time="2025-08-19T08:16:54.128238210Z" level=info msg="StartContainer for \"e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3\"" Aug 19 08:16:54.131309 containerd[1576]: time="2025-08-19T08:16:54.131256077Z" level=info msg="connecting to shim e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3" address="unix:///run/containerd/s/52cf9594313fae28e5c2978248276b2c0cbeb548cc59a35828022af89b3bab2c" protocol=ttrpc version=3 Aug 19 08:16:54.177209 systemd[1]: Started cri-containerd-e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3.scope - libcontainer container e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3. 
Aug 19 08:16:54.462982 containerd[1576]: time="2025-08-19T08:16:54.462894637Z" level=info msg="StartContainer for \"e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3\" returns successfully" Aug 19 08:16:54.841021 kubelet[2755]: I0819 08:16:54.840798 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b956ccb5d-k9v2m" podStartSLOduration=27.364964206 podStartE2EDuration="36.840713419s" podCreationTimestamp="2025-08-19 08:16:18 +0000 UTC" firstStartedPulling="2025-08-19 08:16:44.57583556 +0000 UTC m=+52.447683518" lastFinishedPulling="2025-08-19 08:16:54.051584774 +0000 UTC m=+61.923432731" observedRunningTime="2025-08-19 08:16:54.8390593 +0000 UTC m=+62.710907266" watchObservedRunningTime="2025-08-19 08:16:54.840713419 +0000 UTC m=+62.712561378" Aug 19 08:16:54.897672 containerd[1576]: time="2025-08-19T08:16:54.897597515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3\" id:\"9c1775cf16e8474565b8eb2e48090b3da8fb6622cf177d5fff4b1b11fbc2454d\" pid:5100 exited_at:{seconds:1755591414 nanos:895579664}" Aug 19 08:16:55.526374 containerd[1576]: time="2025-08-19T08:16:55.526304923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:55.527530 containerd[1576]: time="2025-08-19T08:16:55.527465059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 19 08:16:55.529386 containerd[1576]: time="2025-08-19T08:16:55.529294142Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:55.532862 containerd[1576]: time="2025-08-19T08:16:55.532814890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:55.534111 containerd[1576]: time="2025-08-19T08:16:55.534005466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.481909571s" Aug 19 08:16:55.534111 containerd[1576]: time="2025-08-19T08:16:55.534061262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 19 08:16:55.537760 containerd[1576]: time="2025-08-19T08:16:55.537685973Z" level=info msg="CreateContainer within sandbox \"2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 19 08:16:55.554783 containerd[1576]: time="2025-08-19T08:16:55.550918507Z" level=info msg="Container 57d5b6a4916b18c8e4155b755b6c7ac48dd8130389e362e87d4875c7c37467a1: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:55.565252 containerd[1576]: time="2025-08-19T08:16:55.565192642Z" level=info msg="CreateContainer within sandbox 
\"2a0e2eb5af10fd140bbe64f9a05268482529ea1c72fb058df035ff6b686a7849\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"57d5b6a4916b18c8e4155b755b6c7ac48dd8130389e362e87d4875c7c37467a1\"" Aug 19 08:16:55.566351 containerd[1576]: time="2025-08-19T08:16:55.566252220Z" level=info msg="StartContainer for \"57d5b6a4916b18c8e4155b755b6c7ac48dd8130389e362e87d4875c7c37467a1\"" Aug 19 08:16:55.569692 containerd[1576]: time="2025-08-19T08:16:55.569590099Z" level=info msg="connecting to shim 57d5b6a4916b18c8e4155b755b6c7ac48dd8130389e362e87d4875c7c37467a1" address="unix:///run/containerd/s/4959a4147ec6aefeff83bfd0f87e0e2bfea274b3565e9842687b59f1e2bc6b84" protocol=ttrpc version=3 Aug 19 08:16:55.605941 systemd[1]: Started cri-containerd-57d5b6a4916b18c8e4155b755b6c7ac48dd8130389e362e87d4875c7c37467a1.scope - libcontainer container 57d5b6a4916b18c8e4155b755b6c7ac48dd8130389e362e87d4875c7c37467a1. Aug 19 08:16:55.686259 containerd[1576]: time="2025-08-19T08:16:55.686206223Z" level=info msg="StartContainer for \"57d5b6a4916b18c8e4155b755b6c7ac48dd8130389e362e87d4875c7c37467a1\" returns successfully" Aug 19 08:16:56.487615 kubelet[2755]: I0819 08:16:56.487569 2755 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 19 08:16:56.487615 kubelet[2755]: I0819 08:16:56.487622 2755 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 19 08:16:57.199121 containerd[1576]: time="2025-08-19T08:16:57.199067269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3\" id:\"616359840e35f34b60918182e2074f051495327e2249be6682a5c4c0629c4b79\" pid:5159 exited_at:{seconds:1755591417 nanos:198292386}" Aug 19 08:16:57.304192 containerd[1576]: time="2025-08-19T08:16:57.304083316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\" id:\"c6ccc0f0d36e8c15bb47bdbf6e70f88daf1e1b3b55cf07ddcac1eb872436aa52\" pid:5181 exit_status:1 exited_at:{seconds:1755591417 nanos:303668460}" Aug 19 08:17:02.440118 containerd[1576]: time="2025-08-19T08:17:02.439960072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\" id:\"1095f0704043c2e3c9e53cf01e8dc8caaae2e9a8383446865265eddaa12ee03b\" pid:5212 exited_at:{seconds:1755591422 nanos:438691470}" Aug 19 08:17:02.571632 systemd[1]: Started sshd@7-10.128.0.35:22-147.75.109.163:57046.service - OpenSSH per-connection server daemon (147.75.109.163:57046). Aug 19 08:17:02.886868 sshd[5226]: Accepted publickey for core from 147.75.109.163 port 57046 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:02.888961 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:02.896461 systemd-logind[1505]: New session 8 of user core. Aug 19 08:17:02.905116 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 19 08:17:03.214112 sshd[5229]: Connection closed by 147.75.109.163 port 57046 Aug 19 08:17:03.215966 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:03.231036 systemd[1]: sshd@7-10.128.0.35:22-147.75.109.163:57046.service: Deactivated successfully. 
Aug 19 08:17:03.236423 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 08:17:03.240073 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit. Aug 19 08:17:03.243693 systemd-logind[1505]: Removed session 8. Aug 19 08:17:06.053710 containerd[1576]: time="2025-08-19T08:17:06.053651929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3\" id:\"d41cca263f2e257177966b91704fab604ccb66f3a4719877d978a9e0ce4b82e8\" pid:5255 exited_at:{seconds:1755591426 nanos:53103386}" Aug 19 08:17:08.268785 systemd[1]: Started sshd@8-10.128.0.35:22-147.75.109.163:42118.service - OpenSSH per-connection server daemon (147.75.109.163:42118). Aug 19 08:17:08.572553 sshd[5265]: Accepted publickey for core from 147.75.109.163 port 42118 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:08.574650 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:08.583822 systemd-logind[1505]: New session 9 of user core. Aug 19 08:17:08.587056 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 08:17:08.872776 sshd[5268]: Connection closed by 147.75.109.163 port 42118 Aug 19 08:17:08.874131 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:08.880540 systemd[1]: sshd@8-10.128.0.35:22-147.75.109.163:42118.service: Deactivated successfully. Aug 19 08:17:08.884263 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 08:17:08.886003 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit. Aug 19 08:17:08.888547 systemd-logind[1505]: Removed session 9. Aug 19 08:17:09.024028 containerd[1576]: time="2025-08-19T08:17:09.023977351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c\" id:\"1c985873b346e6a7b5a9636484bfdc306e97faf3bf180081525e9fd2ed3450d1\" pid:5291 exited_at:{seconds:1755591429 nanos:23430103}" Aug 19 08:17:09.051690 kubelet[2755]: I0819 08:17:09.051006 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fl6vv" podStartSLOduration=38.023835657 podStartE2EDuration="51.050980651s" podCreationTimestamp="2025-08-19 08:16:18 +0000 UTC" firstStartedPulling="2025-08-19 08:16:42.508422761 +0000 UTC m=+50.380270710" lastFinishedPulling="2025-08-19 08:16:55.535567752 +0000 UTC m=+63.407415704" observedRunningTime="2025-08-19 08:16:55.85155927 +0000 UTC m=+63.723407263" watchObservedRunningTime="2025-08-19 08:17:09.050980651 +0000 UTC m=+76.922828617" Aug 19 08:17:10.651599 kubelet[2755]: I0819 08:17:10.650980 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:17:13.929706 systemd[1]: Started sshd@9-10.128.0.35:22-147.75.109.163:42134.service - OpenSSH per-connection server daemon (147.75.109.163:42134). Aug 19 08:17:14.244847 sshd[5306]: Accepted publickey for core from 147.75.109.163 port 42134 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:14.247195 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:14.255820 systemd-logind[1505]: New session 10 of user core. Aug 19 08:17:14.262038 systemd[1]: Started session-10.scope - Session 10 of User core. 
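The kubelet entry above for csi-node-driver-fl6vv reports podStartE2EDuration=51.050980651s but a much smaller podStartSLOduration=38.023835657s. The SLO figure appears to be the end-to-end time minus the image-pull window (firstStartedPulling to lastFinishedPulling); the arithmetic below reproduces it exactly from the logged monotonic offsets (the m=+… values).

```go
// Reproduce podStartSLOduration for csi-node-driver-fl6vv from the other
// numbers in the same log entry, assuming SLO = E2E minus the pull window.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 51050980651 * time.Nanosecond                 // podStartE2EDuration
	firstStartedPulling := 50380270710 * time.Nanosecond // m=+50.380270710
	lastFinishedPulling := 63407415704 * time.Nanosecond // m=+63.407415704

	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Println(slo) // 38.023835657s, matching podStartSLOduration
}
```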
Aug 19 08:17:14.554893 sshd[5309]: Connection closed by 147.75.109.163 port 42134 Aug 19 08:17:14.555886 sshd-session[5306]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:14.563348 systemd[1]: sshd@9-10.128.0.35:22-147.75.109.163:42134.service: Deactivated successfully. Aug 19 08:17:14.566273 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 08:17:14.568143 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. Aug 19 08:17:14.570575 systemd-logind[1505]: Removed session 10. Aug 19 08:17:14.608091 systemd[1]: Started sshd@10-10.128.0.35:22-147.75.109.163:42146.service - OpenSSH per-connection server daemon (147.75.109.163:42146). Aug 19 08:17:14.920393 sshd[5322]: Accepted publickey for core from 147.75.109.163 port 42146 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:14.923839 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:14.934826 systemd-logind[1505]: New session 11 of user core. Aug 19 08:17:14.942198 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 08:17:15.294818 sshd[5325]: Connection closed by 147.75.109.163 port 42146 Aug 19 08:17:15.296355 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:15.308445 systemd[1]: sshd@10-10.128.0.35:22-147.75.109.163:42146.service: Deactivated successfully. Aug 19 08:17:15.312616 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 08:17:15.316202 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. Aug 19 08:17:15.320052 systemd-logind[1505]: Removed session 11. Aug 19 08:17:15.351034 systemd[1]: Started sshd@11-10.128.0.35:22-147.75.109.163:42150.service - OpenSSH per-connection server daemon (147.75.109.163:42150). Aug 19 08:17:15.663286 sshd[5335]: Accepted publickey for core from 147.75.109.163 port 42150 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:15.665052 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:15.673378 systemd-logind[1505]: New session 12 of user core. Aug 19 08:17:15.678059 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 19 08:17:16.005248 sshd[5339]: Connection closed by 147.75.109.163 port 42150 Aug 19 08:17:16.006582 sshd-session[5335]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:16.013284 systemd[1]: sshd@11-10.128.0.35:22-147.75.109.163:42150.service: Deactivated successfully. Aug 19 08:17:16.017591 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 08:17:16.020927 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit. Aug 19 08:17:16.023403 systemd-logind[1505]: Removed session 12. Aug 19 08:17:21.072554 systemd[1]: Started sshd@12-10.128.0.35:22-147.75.109.163:59866.service - OpenSSH per-connection server daemon (147.75.109.163:59866). Aug 19 08:17:21.374166 sshd[5362]: Accepted publickey for core from 147.75.109.163 port 59866 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:21.376119 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:21.383314 systemd-logind[1505]: New session 13 of user core. Aug 19 08:17:21.388956 systemd[1]: Started session-13.scope - Session 13 of User core. 
Aug 19 08:17:21.674089 sshd[5365]: Connection closed by 147.75.109.163 port 59866 Aug 19 08:17:21.675083 sshd-session[5362]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:21.681679 systemd[1]: sshd@12-10.128.0.35:22-147.75.109.163:59866.service: Deactivated successfully. Aug 19 08:17:21.684968 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 08:17:21.686614 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit. Aug 19 08:17:21.689082 systemd-logind[1505]: Removed session 13. Aug 19 08:17:26.728077 systemd[1]: Started sshd@13-10.128.0.35:22-147.75.109.163:59872.service - OpenSSH per-connection server daemon (147.75.109.163:59872). Aug 19 08:17:27.034497 sshd[5379]: Accepted publickey for core from 147.75.109.163 port 59872 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:27.036618 sshd-session[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:27.044318 systemd-logind[1505]: New session 14 of user core. Aug 19 08:17:27.052039 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 08:17:27.218810 containerd[1576]: time="2025-08-19T08:17:27.218725879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3\" id:\"f00462839ca8363b3a8e2feebc2e71f199fc4f28182b4b4c1c1e89ac7b51a08d\" pid:5395 exited_at:{seconds:1755591447 nanos:218126634}" Aug 19 08:17:27.362029 containerd[1576]: time="2025-08-19T08:17:27.361947081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\" id:\"41fa919f69889a1e14f104666b1970ef5ffa745e1b1e249f66fad4afc88c9fc3\" pid:5424 exited_at:{seconds:1755591447 nanos:361370352}" Aug 19 08:17:27.395770 sshd[5382]: Connection closed by 147.75.109.163 port 59872 Aug 19 08:17:27.397891 sshd-session[5379]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:27.410442 systemd[1]: sshd@13-10.128.0.35:22-147.75.109.163:59872.service: Deactivated successfully. Aug 19 08:17:27.417358 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 08:17:27.420216 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit. Aug 19 08:17:27.424561 systemd-logind[1505]: Removed session 14. Aug 19 08:17:32.459534 systemd[1]: Started sshd@14-10.128.0.35:22-147.75.109.163:59608.service - OpenSSH per-connection server daemon (147.75.109.163:59608). Aug 19 08:17:32.791699 sshd[5443]: Accepted publickey for core from 147.75.109.163 port 59608 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:32.794470 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:32.803104 systemd-logind[1505]: New session 15 of user core. Aug 19 08:17:32.811191 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 08:17:33.101932 sshd[5446]: Connection closed by 147.75.109.163 port 59608 Aug 19 08:17:33.103284 sshd-session[5443]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:33.111858 systemd[1]: sshd@14-10.128.0.35:22-147.75.109.163:59608.service: Deactivated successfully. Aug 19 08:17:33.117657 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 08:17:33.120288 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit. Aug 19 08:17:33.123825 systemd-logind[1505]: Removed session 15. 
Aug 19 08:17:33.161389 systemd[1]: Started sshd@15-10.128.0.35:22-147.75.109.163:59618.service - OpenSSH per-connection server daemon (147.75.109.163:59618). Aug 19 08:17:33.473927 sshd[5458]: Accepted publickey for core from 147.75.109.163 port 59618 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:33.476342 sshd-session[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:33.483966 systemd-logind[1505]: New session 16 of user core. Aug 19 08:17:33.491103 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 19 08:17:33.843365 sshd[5461]: Connection closed by 147.75.109.163 port 59618 Aug 19 08:17:33.845076 sshd-session[5458]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:33.856238 systemd[1]: sshd@15-10.128.0.35:22-147.75.109.163:59618.service: Deactivated successfully. Aug 19 08:17:33.859693 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 08:17:33.863624 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit. Aug 19 08:17:33.865556 systemd-logind[1505]: Removed session 16. Aug 19 08:17:33.900563 systemd[1]: Started sshd@16-10.128.0.35:22-147.75.109.163:59628.service - OpenSSH per-connection server daemon (147.75.109.163:59628). Aug 19 08:17:34.217599 sshd[5471]: Accepted publickey for core from 147.75.109.163 port 59628 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:34.220195 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:34.228811 systemd-logind[1505]: New session 17 of user core. Aug 19 08:17:34.240100 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 19 08:17:36.961784 sshd[5474]: Connection closed by 147.75.109.163 port 59628 Aug 19 08:17:36.961369 sshd-session[5471]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:36.974215 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit. Aug 19 08:17:36.974633 systemd[1]: sshd@16-10.128.0.35:22-147.75.109.163:59628.service: Deactivated successfully. Aug 19 08:17:36.981512 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 08:17:36.982399 systemd[1]: session-17.scope: Consumed 819ms CPU time, 75.6M memory peak. Aug 19 08:17:36.988177 systemd-logind[1505]: Removed session 17. Aug 19 08:17:37.017684 systemd[1]: Started sshd@17-10.128.0.35:22-147.75.109.163:59630.service - OpenSSH per-connection server daemon (147.75.109.163:59630). Aug 19 08:17:37.329395 sshd[5491]: Accepted publickey for core from 147.75.109.163 port 59630 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:37.331490 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:37.339964 systemd-logind[1505]: New session 18 of user core. Aug 19 08:17:37.347040 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 19 08:17:37.847711 sshd[5494]: Connection closed by 147.75.109.163 port 59630 Aug 19 08:17:37.848941 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:37.863930 systemd[1]: sshd@17-10.128.0.35:22-147.75.109.163:59630.service: Deactivated successfully. Aug 19 08:17:37.869558 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 08:17:37.874566 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit. Aug 19 08:17:37.877056 systemd-logind[1505]: Removed session 18. 
Aug 19 08:17:37.910987 systemd[1]: Started sshd@18-10.128.0.35:22-147.75.109.163:59646.service - OpenSSH per-connection server daemon (147.75.109.163:59646). Aug 19 08:17:38.232350 sshd[5504]: Accepted publickey for core from 147.75.109.163 port 59646 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:38.234898 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:38.245364 systemd-logind[1505]: New session 19 of user core. Aug 19 08:17:38.253162 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 19 08:17:38.541773 sshd[5507]: Connection closed by 147.75.109.163 port 59646 Aug 19 08:17:38.543298 sshd-session[5504]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:38.550421 systemd[1]: sshd@18-10.128.0.35:22-147.75.109.163:59646.service: Deactivated successfully. Aug 19 08:17:38.554360 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 08:17:38.556603 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit. Aug 19 08:17:38.559056 systemd-logind[1505]: Removed session 19. Aug 19 08:17:39.046515 containerd[1576]: time="2025-08-19T08:17:39.046341731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9e6a2c69da5e95f17573c724a4b94dd920dcef84b8fee8469046c450330e41c\" id:\"0b4c1507c27833a4aaf3c0f8776fafc841e8f0e6eae187ea05c9eafc6553df3d\" pid:5529 exited_at:{seconds:1755591459 nanos:45833327}" Aug 19 08:17:43.606618 systemd[1]: Started sshd@19-10.128.0.35:22-147.75.109.163:36916.service - OpenSSH per-connection server daemon (147.75.109.163:36916). Aug 19 08:17:43.956801 sshd[5544]: Accepted publickey for core from 147.75.109.163 port 36916 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:43.960943 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:43.975003 systemd-logind[1505]: New session 20 of user core. Aug 19 08:17:43.985054 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 08:17:44.299565 sshd[5550]: Connection closed by 147.75.109.163 port 36916 Aug 19 08:17:44.300829 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:44.308060 systemd[1]: sshd@19-10.128.0.35:22-147.75.109.163:36916.service: Deactivated successfully. Aug 19 08:17:44.311104 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 08:17:44.313059 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit. Aug 19 08:17:44.315546 systemd-logind[1505]: Removed session 20. Aug 19 08:17:49.361488 systemd[1]: Started sshd@20-10.128.0.35:22-147.75.109.163:56100.service - OpenSSH per-connection server daemon (147.75.109.163:56100). Aug 19 08:17:49.673019 sshd[5563]: Accepted publickey for core from 147.75.109.163 port 56100 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:49.675142 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:49.683866 systemd-logind[1505]: New session 21 of user core. Aug 19 08:17:49.693080 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 08:17:49.987914 sshd[5566]: Connection closed by 147.75.109.163 port 56100 Aug 19 08:17:49.988992 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:49.997319 systemd[1]: sshd@20-10.128.0.35:22-147.75.109.163:56100.service: Deactivated successfully. 
Aug 19 08:17:50.002102 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 08:17:50.004328 systemd-logind[1505]: Session 21 logged out. Waiting for processes to exit. Aug 19 08:17:50.007966 systemd-logind[1505]: Removed session 21. Aug 19 08:17:55.050210 systemd[1]: Started sshd@21-10.128.0.35:22-147.75.109.163:56112.service - OpenSSH per-connection server daemon (147.75.109.163:56112). Aug 19 08:17:55.385120 sshd[5581]: Accepted publickey for core from 147.75.109.163 port 56112 ssh2: RSA SHA256:BRCUc+mm7tdWIqJkPl5NJ3CYtA7zjedqF7Ez9uVgoJ4 Aug 19 08:17:55.388866 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:55.406870 systemd-logind[1505]: New session 22 of user core. Aug 19 08:17:55.415059 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 19 08:17:55.784692 sshd[5584]: Connection closed by 147.75.109.163 port 56112 Aug 19 08:17:55.786127 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:55.798515 systemd[1]: sshd@21-10.128.0.35:22-147.75.109.163:56112.service: Deactivated successfully. Aug 19 08:17:55.808299 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 08:17:55.814005 systemd-logind[1505]: Session 22 logged out. Waiting for processes to exit. Aug 19 08:17:55.819025 systemd-logind[1505]: Removed session 22. Aug 19 08:17:57.279223 containerd[1576]: time="2025-08-19T08:17:57.279149107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0a599bbcd802574bccb121dd7e50fe74ab496ecbd09e3da90508bb12a1ef1a3\" id:\"e72d853b50d4a031901bbabc04e5babe6165d922c77407b036ef7ee8171b0400\" pid:5608 exited_at:{seconds:1755591477 nanos:277696183}" Aug 19 08:17:57.419369 containerd[1576]: time="2025-08-19T08:17:57.418894046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ea3183c20a9928cf51ac92fe0db64fc1aecaf8cbfa7260fd9b96a7b4251082e\" id:\"839be23079b3ce85aaa98b80fbf74f462301c7409c71f65f976472803c32ec97\" pid:5626 exited_at:{seconds:1755591477 nanos:417016013}"